Lingarr
Subtitle translation app with support for local and cloud translation services.
Lingarr is a subtitle translation application that uses multiple translation services to automatically translate subtitle files into a target language. It supports LibreTranslate, local AI models, and cloud providers such as DeepL, Anthropic, and OpenAI, and can run as a local or SaaS deployment with a RESTful API for integration.
Key Features
- Supports multiple translation services, including LibreTranslate, local AI, DeepL, Anthropic, OpenAI, DeepSeek, Gemini, Google, Bing, Yandex, and Azure.
- Local and SaaS deployment options, with support for Docker and Docker Compose.
- Multi-architecture Docker images (amd64 and arm64) for deployment flexibility.
- Exposes a RESTful API for integration into other apps.
- Setup guidance for running Lingarr via Docker and Docker Compose.
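The Docker Compose deployment mentioned above could be sketched roughly as follows. Note this is an illustrative config fragment, not Lingarr's documented setup: the image name, port mapping, and volume paths are all assumptions — consult the project README for the actual values.

```yaml
# Hypothetical docker-compose.yml sketch for Lingarr.
# Image name, ports, and paths are assumptions, not documented values.
services:
  lingarr:
    image: lingarr/lingarr:latest   # assumed image name
    ports:
      - "9876:8080"                 # assumed host:container ports
    volumes:
      - ./config:/app/config        # assumed path for persistent settings
      - /path/to/media:/media       # subtitle files to translate
    restart: unless-stopped
```

With a file like this in place, `docker compose up -d` would start the service in the background.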
Use Cases
- Self-hosted media libraries: deploy Lingarr with Docker Compose to automatically translate and apply subtitles for movies and TV shows.
- Flexible translation workflows: choose LibreTranslate, OpenAI, DeepL, or another provider to match cost, latency, and quality needs.
- API-driven integrations: use the RESTful API to incorporate Lingarr subtitle translation into custom media workflows or apps.
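As a sketch of the API-driven integration above, the snippet below builds a hypothetical translation request against a local Lingarr instance. The base URL, route, and payload field names are assumptions for illustration only — Lingarr's real API routes are documented in the project itself.

```python
import json
from urllib import request

# Hypothetical sketch of calling Lingarr's RESTful API; the endpoint
# path, port, and payload fields are assumptions, not Lingarr's
# documented API. Check the project's API reference for real routes.
BASE_URL = "http://localhost:9876/api"  # assumed host and port


def build_translate_request(subtitle_path: str, target_language: str) -> request.Request:
    """Build a POST request asking Lingarr to translate one subtitle file."""
    payload = json.dumps({
        "path": subtitle_path,           # assumed field name
        "targetLanguage": target_language,  # assumed field name
    }).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/translate",         # assumed route
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_translate_request("/media/show.s01e01.srt", "es")
print(req.full_url)      # http://localhost:9876/api/translate
print(req.get_method())  # POST
```

Sending the request (e.g. with `request.urlopen(req)`) would only work against a running instance, so the sketch stops at building it.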
Limitations and Considerations
- The supported translation providers differ in terms of service, pricing, and data handling, so the choice of provider affects both cost and privacy.
Conclusion
Lingarr provides a flexible, multi-provider subtitle translation solution that can run locally or as a SaaS service, with a RESTful API for programmatic control. Its Docker-based deployment and broad provider support make it adaptable to diverse translation needs in media workflows.
Tech Stack: Docker
Similar Services

Open WebUI
Extensible, offline-capable web interface for LLM interactions
Feature-rich, self-hosted AI interface that integrates Ollama and OpenAI-compatible APIs, offers RAG, vector DB support, image tools, RBAC and observability.

AnythingLLM
All-in-one AI chat app with RAG, agents, and multi-model support
AnythingLLM is an all-in-one desktop and Docker app for chatting with documents using RAG, running AI agents, and connecting to local or hosted LLMs and vector databases.

LibreChat
Self-hosted multi-provider AI chat UI with agents and tools
LibreChat is a self-hosted AI chat platform that supports multiple LLM providers, custom endpoints, agents/tools, file and image chat, conversation search, and presets.

Netron
Visualizer for neural network and machine learning models
Netron is a model graph viewer for inspecting neural network and ML formats such as ONNX, TensorFlow Lite, PyTorch, Keras, Core ML, and more.

Khoj
Open-source personal AI for chat, semantic search and agents
Self-hostable personal AI 'second brain' for chat, semantic search, custom agents, automations and integration with local or cloud LLMs.

Perplexica
Privacy-focused AI answering engine with web search and citations
Self-hosted AI answering engine that combines web search with local or hosted LLMs to generate cited answers, with search history and file uploads.