OmniPoly
Frontend for LibreTranslate and LanguageTool with optional LLM integration
OmniPoly is a web frontend that centralizes translation and language-enhancement workflows. It integrates translation and grammar-checking backends and can optionally use LLMs to provide AI-driven insights and sentence extraction.
Key Features
- Text translation across multiple languages via an external LibreTranslate backend
- Grammar and style checking via an external LanguageTool instance (both upstream APIs are sketched after this list)
- Optional LLM-powered analysis (Ollama) for sentiment, interesting-sentence extraction, and text modification
- File upload for batch translation and download of translated results
- User dictionary for grammar checking (add your own words), plus configurable language filters
- Docker and Docker Compose support with environment-variable configuration; the UI hides features whose backends are not configured
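The translation and grammar checks are thin wrappers over the upstream services' HTTP APIs. The sketch below shows the kind of requests a client such as OmniPoly sends to LibreTranslate and LanguageTool; the base URLs are placeholders for your own instances, and the code illustrates the upstream APIs rather than OmniPoly's internal implementation.

```typescript
// Sketch of the upstream calls a frontend like OmniPoly issues.
// The base URLs are placeholders; point them at your own instances.
const LIBRETRANSLATE_URL = "http://localhost:5000";
const LANGUAGETOOL_URL = "http://localhost:8010";

// Translate text with LibreTranslate's /translate endpoint.
async function translate(q: string, source: string, target: string): Promise<string> {
  const res = await fetch(`${LIBRETRANSLATE_URL}/translate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ q, source, target, format: "text" }),
  });
  const data = await res.json();
  return data.translatedText;
}

// Check grammar and style with LanguageTool's /v2/check endpoint.
async function checkGrammar(text: string, language: string) {
  const res = await fetch(`${LANGUAGETOOL_URL}/v2/check`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ text, language }),
  });
  const data = await res.json();
  return data.matches; // each match describes one issue and its suggested replacements
}

// Example: translate a sentence, then proofread the result.
async function demo() {
  const english = await translate("Bonjour tout le monde", "fr", "en");
  console.log(english, await checkGrammar(english, "en-US"));
}
demo().catch(console.error);
```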
Use Cases
- Translate and proofread emails, documents, or short texts while preserving grammar and style
- Extract and highlight interesting sentences or sentiment from translated text using an LLM (see the Ollama sketch after this list)
- Build a lightweight internal tool to unify translation, grammar checking, and simple AI text transformations
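For the LLM-assisted use cases, the frontend talks to an Ollama instance. Below is a minimal sketch of that kind of call using Ollama's /api/generate endpoint; the model name and prompt wording are illustrative assumptions, not necessarily what OmniPoly sends.

```typescript
// Minimal sketch of interesting-sentence extraction against a local Ollama instance.
// The model name and prompt wording are illustrative assumptions.
const OLLAMA_URL = "http://localhost:11434";

async function extractInterestingSentences(text: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      prompt: `List the three most interesting sentences in the following text:\n\n${text}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // the model's full completion
}
```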
Limitations and Considerations
- OmniPoly is a frontend and requires external instances of LibreTranslate, LanguageTool, and/or Ollama to enable corresponding features
- Some features are available only when the respective backend services and API keys are provided via environment variables
OmniPoly is suited for teams or individuals who want a single interface for translation and language quality workflows and who can provide the required backend services. It emphasizes configurable integrations and simple deployment via Docker.
Categories:
Tags:
Tech Stack: Ollama, HTML, Docker, TypeScript, CSS, Node.js
Similar Services

Open WebUI
Extensible, offline-capable web interface for LLM interactions
Feature-rich, self-hosted AI interface that integrates Ollama and OpenAI-compatible APIs, offers RAG, vector DB support, image tools, RBAC and observability.

AnythingLLM
All-in-one AI chat app with RAG, agents, and multi-model support
AnythingLLM is an all-in-one desktop and Docker app for chatting with documents using RAG, running AI agents, and connecting to local or hosted LLMs and vector databases.

LibreChat
Self-hosted multi-provider AI chat UI with agents and tools
LibreChat is a self-hosted AI chat platform that supports multiple LLM providers, custom endpoints, agents/tools, file and image chat, conversation search, and presets.

Netron
Visualizer for neural network and machine learning models
Netron is a model graph viewer for inspecting neural network and ML formats such as ONNX, TensorFlow Lite, PyTorch, Keras, Core ML, and more.

Khoj
Open-source personal AI for chat, semantic search and agents
Self-hostable personal AI 'second brain' for chat, semantic search, custom agents, automations and integration with local or cloud LLMs.

Perplexica
Privacy-focused AI answering engine with web search and citations
Self-hosted AI answering engine that combines web search with local or hosted LLMs to generate cited answers, with search history and file uploads.