
Best Self-Hosted Alternatives to LangSmith

A curated collection of the 3 best self-hosted alternatives to LangSmith.

LangSmith is a SaaS observability and experimentation platform for LLM applications. It provides tracing, real-time monitoring and alerting, dataset and evaluation management, and experiment tracking to debug, test, and monitor prompts, chains, and agents in production.

Alternatives List

#1 Langfuse

Langfuse is an open-source LLM engineering platform for tracing, metrics, evaluations, datasets, and prompt management to debug and improve AI applications.


Langfuse is an open-source LLM engineering platform that helps teams develop, monitor, evaluate, and debug LLM-powered applications. It provides end-to-end visibility into LLM calls and related app logic (RAG, embeddings, agent steps), alongside tools to iterate on prompts and measure quality over time.

Key Features

  • End-to-end tracing and observability for LLM applications, including nested operations and user sessions
  • Metrics and analytics to monitor model behavior and application performance
  • Evaluation workflows, including LLM-as-a-judge, human labeling, user feedback, and custom eval pipelines via API/SDK
  • Prompt management with versioning and collaborative iteration
  • Datasets and dataset runs for benchmarks, regression testing, and continuous improvement
  • Playground for testing prompts and model configurations, connected to production traces
  • Broad integrations (e.g., OpenTelemetry, OpenAI SDK wrappers, LangChain, LlamaIndex, LiteLLM) and a comprehensive API with typed SDKs

Use Cases

  • Debugging production LLM apps by inspecting traces across retrieval, tools, and agent actions
  • Running systematic evaluations and regression tests on prompts and model changes
  • Building internal LLMOps workflows using Langfuse datasets, APIs, and metrics

Limitations and Considerations

  • Full value typically requires instrumenting the LLM application with SDKs or integration hooks (a minimal sketch follows this list)
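
To make that instrumentation requirement concrete, here is a minimal sketch using the Langfuse Python SDK's @observe decorator (recent SDK versions export it at the top level; older versions import it from langfuse.decorators). The retrieve_context and answer functions are hypothetical stand-ins for application code, and the LANGFUSE_* environment variables are assumed to point at your self-hosted instance.

```python
# Minimal tracing sketch with the Langfuse Python SDK (recent versions
# export `observe` at the top level; older ones use
# `from langfuse.decorators import observe`).
# Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# (pointing at your self-hosted instance) are set in the environment.
from langfuse import observe


@observe()  # nested tracked calls show up as child observations
def retrieve_context(question: str) -> list[str]:
    # Hypothetical retrieval step; a real app would query a vector store.
    return ["chunk about deployment", "chunk about configuration"]


@observe()  # each top-level call becomes a trace in Langfuse
def answer(question: str) -> str:
    context = retrieve_context(question)
    # Hypothetical generation step; a real app would call an LLM here.
    return f"Answered using {len(context)} retrieved chunks."


print(answer("How do I self-host Langfuse?"))
```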

Langfuse combines observability with prompt and evaluation tooling to shorten the iteration loop for LLM applications. It fits teams that need both operational insight and a structured workflow for improving quality over time.

20.6k stars · 2k forks

#2 Opik

Opik is an open-source platform to trace, evaluate, and monitor LLM apps, RAG pipelines, and agent workflows with automated evaluations and production dashboards.


Opik is an open-source platform for debugging, evaluating, and monitoring LLM applications, including RAG systems and agentic workflows. It provides end-to-end tracing, evaluation tooling, and dashboards to help teams improve quality from prototype to production.

Key Features

  • End-to-end tracing of LLM calls, spans, conversations, and agent activity
  • Evaluation workflows with datasets, experiments, and LLM-as-a-judge style metrics
  • Prompt playground for comparing prompts and model outputs
  • Production monitoring dashboards for feedback, usage, and performance trends
  • Online evaluation rules to detect issues in production
  • Guardrails to screen inputs and outputs and support safer AI behavior
  • SDKs and API for integrating tracing and evaluations into applications and pipelines (see the sketch after this list)
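
As a rough illustration of that SDK integration, the sketch below uses Opik's @track decorator and opik.configure from the Python SDK. The pipeline functions are hypothetical, and configuration options may vary by version.

```python
# Tracing sketch with the Opik Python SDK. `track` and `opik.configure`
# are part of the documented SDK; option names may differ by version.
import opik
from opik import track

opik.configure(use_local=True)  # target a local/self-hosted Opik deployment


@track  # records the call (and nested tracked calls) as a trace
def retrieve(query: str) -> list[str]:
    # Hypothetical retrieval step of a RAG pipeline.
    return ["doc snippet 1", "doc snippet 2"]


@track
def rag_answer(query: str) -> str:
    docs = retrieve(query)  # appears as a child span of this trace
    # Hypothetical LLM call combining the query with retrieved documents.
    return f"Answer grounded in {len(docs)} documents."


print(rag_answer("What does Opik trace?"))
```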

Use Cases

  • Debugging and optimizing RAG chatbots by tracing retrieval and generation steps
  • Regression testing LLM pipelines in CI using automated evaluation suites (see the evaluation sketch after this list)
  • Monitoring production LLM applications for quality, safety, and cost signals over time
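
For the CI use case above, a hedged sketch of an automated evaluation with Opik's Python SDK follows. The evaluate helper, get_or_create_dataset, and the Equals metric reflect the SDK's documented surface as I understand it; verify the names against your installed version. my_llm_app is a hypothetical stand-in for the pipeline under test.

```python
# Evaluation sketch with the Opik Python SDK; verify these names against
# your installed version before relying on them.
from opik import Opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import Equals


def my_llm_app(prompt: str) -> str:
    # Hypothetical stand-in for the real pipeline under test.
    return "4"


client = Opik()
dataset = client.get_or_create_dataset(name="smoke-tests")
dataset.insert([
    {"input": "What is 2 + 2?", "expected_output": "4"},
])


def task(item: dict) -> dict:
    # Keys returned here are matched to the metric's parameters.
    return {
        "output": my_llm_app(item["input"]),
        "reference": item["expected_output"],
    }


evaluate(
    dataset=dataset,
    task=task,
    scoring_metrics=[Equals()],  # flags outputs that drift from references
)
```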

Limitations and Considerations

  • Some advanced workflows (high-volume tracing, rules, guardrails) can require careful capacity planning and operational setup in production

Opik fits teams that need practical LLM observability plus repeatable evaluation to ship changes with confidence. It is suitable for both experimentation and production monitoring when paired with appropriate infrastructure and governance.

17.3k stars · 1.3k forks

#3 Agenta

Agenta is an open-source LLMOps platform with a prompt playground, prompt/version management, LLM evaluation, and production observability for LLM apps.


Agenta is an open-source LLMOps platform for building and operating production-grade LLM applications. It centralizes prompt work, evaluation workflows, and runtime traces so teams can iterate safely and measure quality over time.

Key Features

  • Interactive prompt playground to compare prompts and models side-by-side on real test cases
  • Prompt and configuration versioning with environments/branching to control changes
  • Test set management (including CSV import and capturing production cases) for repeatable experiments
  • Automated and human evaluation workflows, including LLM-as-judge and custom evaluators
  • Production observability with tracing, latency/usage/cost tracking, and debugging via detailed traces
  • Open-standards tracing via OpenTelemetry-compatible instrumentation (a vendor-neutral sketch follows this list)
  • Parity between the UI and the API, so domain experts and engineers can share the same workflows
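
Because tracing is OpenTelemetry-compatible, instrumentation can stay vendor-neutral, as the sketch below shows using the standard OpenTelemetry Python SDK with an OTLP/HTTP exporter. The endpoint URL for a self-hosted Agenta instance is an assumption; check Agenta's documentation for the actual ingestion path and any required headers.

```python
# Vendor-neutral tracing via OpenTelemetry, exported to a (hypothetical)
# self-hosted Agenta OTLP endpoint. Requires:
#   pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(
        # Assumed endpoint; substitute your Agenta instance's OTLP URL.
        OTLPSpanExporter(endpoint="http://localhost/api/otlp/v1/traces")
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("generate") as span:
    span.set_attribute("llm.prompt", "Summarize the release notes")
    # ... hypothetical LLM call happens here ...
    span.set_attribute("llm.completion", "Release adds SSO and fixes bugs")
```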

Use Cases

  • Prompt engineering and regression testing before shipping changes to production
  • Evaluating agents and RAG pipelines with automated metrics plus expert review
  • Debugging and monitoring production LLM apps to detect failures and performance regressions

Agenta fits teams that need a single source of truth for prompts, evaluations, and traces, combining experimentation and operational monitoring in one platform. It helps reduce trial-and-error iterations by making changes measurable and auditable.

3.7k stars · 455 forks

Why choose an open-source alternative?

  • Data ownership: Keep your data on your own servers
  • No vendor lock-in: Freedom to switch or modify at any time
  • Cost savings: Reduce or eliminate subscription fees
  • Transparency: Audit the code and know exactly what's running