Perplexica
Privacy-focused AI answering engine with web search and citations
Perplexica is a privacy-focused AI answering engine designed to run on your own hardware. It combines web search results with local or hosted LLMs to generate natural-language answers with cited sources.
Key Features
- Web search integration powered by SearXNG to aggregate results from multiple engines
- Supports local models via Ollama and multiple cloud LLM providers via API configuration
- Answer generation with cited sources for traceability
- Multiple search modes (speed/balanced/quality) that trade latency against answer depth
- File uploads for document-based Q&A (such as PDFs, text files, and images)
- Image and video search alongside standard web results
- Domain-scoped search to focus results on specific websites
- Smart query suggestions and a local search history
- Built-in API for integrating search and answering into other applications
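As a rough illustration of what integrating against that API could look like, here is a minimal TypeScript sketch. The endpoint path (`/api/search`), request field names (`query`, `focusMode`, `chatModel`), and model identifiers shown are assumptions for a typical self-hosted deployment, not a verified contract — check your instance's API documentation for the actual schema.

```typescript
// Hypothetical request shape for Perplexica's search API.
// All field names and values here are assumptions, not confirmed API details.
interface SearchRequest {
  query: string;
  focusMode: string; // assumed mode name, e.g. "webSearch"
  chatModel?: { provider: string; name: string };
}

// Build a request payload for a web-focused query using a local Ollama model.
function buildSearchRequest(query: string): SearchRequest {
  return {
    query,
    focusMode: "webSearch",
    chatModel: { provider: "ollama", name: "llama3" }, // assumed local model
  };
}

// POST the query to a self-hosted instance and return the parsed JSON,
// which would be expected to contain the answer and its cited sources.
async function search(baseUrl: string, query: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSearchRequest(query)),
  });
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json();
}
```

Keeping the payload construction in a small helper like `buildSearchRequest` makes it easy to swap providers (e.g. a cloud LLM instead of Ollama) without touching the transport code.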
Use Cases
- Private, self-hosted alternative to Perplexity-style web answering for individuals or teams
- Research assistant that produces source-cited summaries from the open web
- Internal tool that combines uploaded documents with web search for faster troubleshooting
Limitations and Considerations
- Answer quality and latency depend heavily on the chosen model/provider and the availability/quality of web search results
- Some functionality requires external provider API keys when not using a local model
Perplexica is well-suited for users who want a Perplexity-like experience while keeping searches and data under their control. With SearXNG-based search, configurable LLM backends, and citations, it aims to balance privacy, usability, and answer reliability.
Categories:
Tags:
Tech Stack: Ollama, Docker, TypeScript, Node.js
Similar Services

Open WebUI
Extensible, offline-capable web interface for LLM interactions
Feature-rich, self-hosted AI interface that integrates Ollama and OpenAI-compatible APIs, offers RAG, vector DB support, image tools, RBAC and observability.


AnythingLLM
All-in-one AI chat app with RAG, agents, and multi-model support
AnythingLLM is an all-in-one desktop and Docker app for chatting with documents using RAG, running AI agents, and connecting to local or hosted LLMs and vector databases.

LibreChat
Self-hosted multi-provider AI chat UI with agents and tools
LibreChat is a self-hosted AI chat platform that supports multiple LLM providers, custom endpoints, agents/tools, file and image chat, conversation search, and presets.


Netron
Visualizer for neural network and machine learning models
Netron is a model graph viewer for inspecting neural network and ML formats such as ONNX, TensorFlow Lite, PyTorch, Keras, Core ML, and more.

Khoj
Open-source personal AI for chat, semantic search and agents
Self-hostable personal AI 'second brain' for chat, semantic search, custom agents, automations and integration with local or cloud LLMs.

Activepieces
AI-first no-code workflow automation with extensible integrations
Open-source automation builder for creating workflows with webhooks, HTTP steps, code actions, and an extensible TypeScript-based integration framework with AI features.
