Onyx Community Edition
Self-hosted AI chat and enterprise search for any LLM

Onyx Community Edition is an open-source, self-hostable AI platform that combines a team chat UI with enterprise search and retrieval-augmented generation (RAG). It is designed to work with a wide range of LLM providers as well as locally hosted models, including deployments in air-gapped environments.
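As a rough illustration of the locally hosted models point, an OpenAI-compatible client can be pointed at a local inference server and used much like any hosted provider. The endpoint URL, model name, and API key below are placeholders for illustration, not Onyx configuration.

```python
# Sketch: querying a locally hosted, OpenAI-compatible model endpoint.
# The URL, model name, and key are illustrative placeholders, not Onyx defaults.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local Ollama or vLLM server
    api_key="not-needed-locally",          # many local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3.1",  # whatever model the local server exposes
    messages=[{"role": "user", "content": "Summarize our onboarding policy."}],
)
print(response.choices[0].message.content)
```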
Key Features
- AI chat interface designed to work with multiple LLM providers and self-hosted LLMs
- RAG with hybrid retrieval and contextual grounding over ingested and uploaded content (a generic score-fusion sketch follows this list)
- Connectors to many external knowledge sources with metadata ingestion
- Custom agents with configurable instructions, knowledge, and actions
- Web search integration and deep-research-style multi-step querying
- Collaboration features such as chat sharing, feedback collection, and user management
- Enterprise-oriented access controls including RBAC and support for SSO (depending on configuration)
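Hybrid retrieval, as used in the RAG bullet above, generally means fusing keyword and vector relevance signals. The sketch below shows one generic way such fusion can work; it is purely illustrative and not Onyx's ranking code.

```python
# Generic sketch of hybrid retrieval score fusion (keyword + vector signals).
# Illustrative only; this is not Onyx's implementation.
from typing import Dict, List

def hybrid_rank(
    keyword_scores: Dict[str, float],  # e.g. BM25 score per document id
    vector_scores: Dict[str, float],   # e.g. cosine similarity per document id
    alpha: float = 0.5,                # weight between keyword and vector signals
) -> List[str]:
    def normalize(scores: Dict[str, float]) -> Dict[str, float]:
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    kw, vec = normalize(keyword_scores), normalize(vector_scores)
    docs = set(kw) | set(vec)
    fused = {d: alpha * kw.get(d, 0.0) + (1 - alpha) * vec.get(d, 0.0) for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

# Example: "policy.pdf" ranks first because it scores well on both signals.
print(hybrid_rank({"policy.pdf": 12.0, "faq.md": 9.0},
                  {"policy.pdf": 0.82, "notes.txt": 0.75}))
```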
Use Cases
- Company-wide AI assistant grounded in internal documents and connected tools
- Knowledge discovery and enterprise search across large document collections
- Building task-focused AI agents that can retrieve context and trigger actions
Limitations and Considerations
- Some advanced organization-focused capabilities may differ between Community and Enterprise editions
- Retrieval quality and permissions mirroring depend on connector availability and configuration
Onyx CE is a strong fit for teams that want an extensible, transparent AI assistant and search layer over internal knowledge. It emphasizes configurable retrieval, integrations, and deployability across diverse infrastructure setups.
Categories:
Tags:
Tech Stack: Docker, TypeScript, Python
Similar Services

Open WebUI
Extensible, offline-capable web interface for LLM interactions
Feature-rich, self-hosted AI interface that integrates with Ollama and OpenAI-compatible APIs and offers RAG, vector database support, image tools, RBAC, and observability.


AnythingLLM
All-in-one AI chat app with RAG, agents, and multi-model support
AnythingLLM is an all-in-one desktop and Docker app for chatting with documents using RAG, running AI agents, and connecting to local or hosted LLMs and vector databases.

LibreChat
Self-hosted multi-provider AI chat UI with agents and tools
LibreChat is a self-hosted AI chat platform that supports multiple LLM providers, custom endpoints, agents/tools, file and image chat, conversation search, and presets.


Netron
Visualizer for neural network and machine learning models
Netron is a model graph viewer for inspecting neural network and ML formats such as ONNX, TensorFlow Lite, PyTorch, Keras, Core ML, and more.

Khoj
Open-source personal AI for chat, semantic search and agents
Self-hostable personal AI 'second brain' for chat, semantic search, custom agents, automations, and integration with local or cloud LLMs.

Perplexica
Privacy-focused AI answering engine with web search and citations
Self-hosted AI answering engine that combines web search with local or hosted LLMs to generate cited answers, with search history and file uploads.