Self-hosted projects tagged “Ollama”
9 open source projects with this tag

Extensible, offline-capable web interface for LLM interactions
Feature-rich, self-hosted AI interface that integrates Ollama and OpenAI-compatible APIs and offers RAG, vector DB support, image tools, RBAC, and observability.
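
Under the hood, a frontend like this drives Ollama's local REST API. A minimal sketch of the kind of chat request it issues, assuming an Ollama server on its default port (11434); the model name is just a placeholder for whatever you have pulled locally:

```python
import requests

# Ollama's chat endpoint on its default port. "llama3" is a placeholder
# for any model pulled locally (e.g. via `ollama pull llama3`).
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```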

OpenAI-compatible local AI inference server and API
Run LLMs, image, and audio models locally with an OpenAI-compatible API, optional GPU acceleration, and a built-in web UI for managing and testing models.
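
The point of OpenAI compatibility is that existing SDK code only needs a new base URL. A minimal sketch using the official openai Python client against a local server; the port and model name are assumptions to adapt to your deployment:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server instead of api.openai.com.
# Base URL is an assumption; local servers typically ignore the API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="local-model",  # placeholder: use whatever model the server has loaded
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(completion.choices[0].message.content)
```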

Link shortener with analytics running on Cloudflare
Sink is a fast, secure link shortener with built-in analytics, custom slugs, link expiration, and serverless deployment on Cloudflare Workers or Pages.

AI extension for Paperless‑ngx providing automated analysis and RAG
Extension for Paperless‑ngx that uses OpenAI-compatible backends and Ollama to auto-classify, tag, and index documents, and to enable RAG-powered document chat and semantic search.

Offline file organizer and document manager with tagging
Offline-first file manager and personal knowledge workspace that organizes local files with tags, fast search, previews, and optional local AI features.

Self-hosted private AI chat and document Q&A with local inference
Self-hosted private AI tools for chat and document Q&A, supporting local Ollama inference or OpenAI-compatible APIs, with built-in authentication and user management.

Local AI video indexing and semantic search web app
Self-hosted web app that indexes videos with AI (transcription, vision analysis, embeddings) to enable natural-language search and scene export.
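
For a sense of how embedding-based search works in apps like this, here is a generic sketch: embed text with a local model, then rank stored vectors by cosine similarity. It assumes Ollama's embeddings endpoint and the nomic-embed-text model purely as examples; the app's actual pipeline may differ:

```python
import requests
import numpy as np

OLLAMA = "http://localhost:11434"

def embed(text: str) -> np.ndarray:
    # Ollama's embeddings endpoint; the model name is a placeholder for any
    # embedding model you have pulled locally.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text},
                      timeout=60)
    r.raise_for_status()
    return np.array(r.json()["embedding"])

# Toy "index": snippets standing in for transcription/vision output
# from a video library.
snippets = ["a dog catches a frisbee in the park",
            "timelapse of clouds over a mountain ridge",
            "tutorial on soldering a circuit board"]
index = np.stack([embed(s) for s in snippets])
index /= np.linalg.norm(index, axis=1, keepdims=True)

query = embed("pet playing outside")
query /= np.linalg.norm(query)

# Cosine similarity = dot product of unit vectors; highest score wins.
scores = index @ query
print(snippets[int(np.argmax(scores))])
```
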
Frontend for LibreTranslate and LanguageTool with optional LLM integration
Web frontend that combines LibreTranslate translation with LanguageTool grammar checking, supports file upload and download, and offers optional Ollama-powered LLM features for AI insights.
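
Both upstream services expose simple HTTP APIs that a frontend like this combines. A minimal sketch against their documented endpoints, assuming default local ports (5000 for LibreTranslate; 8010 here for LanguageTool, adjust to your instance) and example language codes:

```python
import requests

text = "This are a example sentence."

# Grammar check via LanguageTool's /v2/check endpoint (form-encoded).
# Port 8010 is an assumption; match it to your LanguageTool instance.
lt = requests.post("http://localhost:8010/v2/check",
                   data={"text": text, "language": "en-US"},
                   timeout=30)
lt.raise_for_status()
for match in lt.json()["matches"]:
    print("grammar:", match["message"])

# Translation via LibreTranslate's /translate endpoint (JSON).
# Port 5000 is LibreTranslate's default; language codes are examples.
trans = requests.post("http://localhost:5000/translate",
                      json={"q": text, "source": "en", "target": "de",
                            "format": "text"},
                      timeout=30)
trans.raise_for_status()
print("translation:", trans.json()["translatedText"])
```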

Self-hosted recipe manager built with SvelteKit
Self-hosted recipe manager and PWA with web scraping, ingredient parsing, unit conversion, shopping lists, cooking logs, LLM-assisted features, and Docker deployment.