TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation.
<p><picture><img src="https://www.tensorzero.com/github-trending-badge.svg" alt="#1 Repository Of The Day"></picture></p>
- **Gateway:** access every LLM provider through a unified API, built for performance (<1ms p99 latency)
- **Observability:** store inferences and feedback in your database, available programmatically or in the UI
- **Optimization:** collect metrics and human feedback to optimize prompts, models, and inference strategies
- **Evaluations:** benchmark individual inferences or end-to-end workflows using heuristics, LLM judges, etc.
- **Experimentation:** ship with confidence with built-in A/B testing, routing, fallbacks, retries, etc.
Take what you need, adopt incrementally, and complement with other tools.
<p align="center">
<b><a href="https://www.tensorzero.com/" target="_blank">Website</a></b>
·
<b><a href="https://www.tensorzero.com/docs" target="_blank">Docs</a></b>
·
<b><a href="https://www.x.com/tensorzero" target="_blank">Twitter</a></b>
·
<b><a href="https://www.tensorzero.com/slack" target="_blank">Slack</a></b>
·
<b><a href="https://www.tensorzero.com/discord" target="_blank">Discord</a></b>
<br>
<br>
<b><a href="https://www.tensorzero.com/docs/quickstart" target="_blank">Quick Start (5min)</a></b>
·
<b><a href="https://www.tensorzero.com/docs/gateway/deployment" target="_blank">Deployment Guide</a></b>
·
<b><a href="https://www.tensorzero.com/docs/gateway/api-reference" target="_blank">API Reference</a></b>
·
<b><a href="https://www.tensorzero.com/docs/gateway/deployment" target="_blank">Configuration Reference</a></b>
</p>
<table>
<tr>
<td width="30%" valign="top"><b>What is TensorZero?</b></td>
<td width="70%" valign="top">TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation.</td>
</tr>
<tr>
<td width="30%" valign="top"><b>How is TensorZero different from other LLM frameworks?</b></td>
<td width="70%" valign="top">
1. TensorZero enables you to optimize complex LLM applications based on production metrics and human feedback.<br>
2. TensorZero supports the needs of industrial-grade LLM applications: low latency, high throughput, type safety, self-hosted, GitOps, customizability, etc.<br>
3. TensorZero unifies the entire LLMOps stack, creating compounding benefits. For example, data from LLM evaluations can be used to fine-tune models, and the LLM judges themselves can be optimized like any other TensorZero function.
</td>
</tr>
<tr>
<td width="30%" valign="top"><b>Can I use TensorZero with ___?</b></td>
<td width="70%" valign="top">Yes. Every major programming language is supported. You can use TensorZero with our Python client, any OpenAI SDK or OpenAI-compatible client, or our HTTP API.</td>
</tr>
<tr>
<td width="30%" valign="top"><b>Is TensorZero production-ready?</b></td>
<td width="70%" valign="top">Yes. Here's a case study: <b><a href="https://www.tensorzero.com/blog/case-study-automating-code-changelogs-at-a-large-bank-with-llms">Automating Code Changelogs at a Large Bank with LLMs</a></b></td>
</tr>
<tr>
<td width="30%" valign="top"><b>How much does TensorZero cost?</b></td>
<td width="70%" valign="top">Nothing. TensorZero is 100% self-hosted and open-source. There are no paid features.</td>
</tr>
<tr>
<td width="30%" valign="top"><b>Who is building TensorZero?</b></td>
<td width="70%" valign="top">Our technical team includes a former Rust compiler maintainer, machine learning researchers (Stanford, CMU, Oxford, Columbia) with thousands of citations, and the chief product officer of a decacorn startup. We're backed by the same investors as leading open-source projects (e.g. ClickHouse, CockroachDB) and AI labs (e.g. OpenAI, Anthropic).</td>
</tr>
<tr>
<td width="30%" valign="top"><b>How do I get started?</b></td>
<td width="70%" valign="top">You can adopt TensorZero incrementally. Our <b><a href="https://www.tensorzero.com/docs/quickstart">Quick Start</a></b> goes from a vanilla OpenAI wrapper to a production-ready LLM application with observability and fine-tuning in just 5 minutes.</td>
</tr>
</table>
## Features
### 🌐 LLM Gateway
Integrate with TensorZero once and access every major LLM provider.
- Access every major LLM provider (API or self-hosted) through a single unified API
- Infer with streaming, tool use, structured generation (JSON mode), batch, multimodal (VLMs), file inputs, caching, etc.
- Define prompt templates and schemas to enforce a consistent, typed interface between your application and the LLMs
- Satisfy extreme throughput and latency needs, thanks to Rust: <1ms p99 latency overhead at 10k+ QPS
- Integrate using our Python client, any OpenAI SDK or OpenAI-compatible client, or our HTTP API (use any programming language)
- Ensure high availability with routing, retries, fallbacks, load balancing, granular timeouts, etc.
- Soon: embeddings; real-time voice
<table>
<tr></tr> <!-- flip highlight order -->
<tr>
<td width="50%" align="center" valign="middle"><b>Model Providers</b></td>
<td width="50%" align="center" valign="middle"><b>Features</b></td>
</tr>
<tr>
<td width="50%" align="left" valign="top">
<p>
The TensorZero Gateway natively supports:
</p>
<ul>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/anthropic">Anthropic</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/aws-bedrock">AWS Bedrock</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/aws-sagemaker">AWS SageMaker</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/azure">Azure OpenAI Service</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/deepseek">DeepSeek</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/fireworks">Fireworks</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/gcp-vertex-ai-anthropic">GCP Vertex AI Anthropic</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/gcp-vertex-ai-gemini">GCP Vertex AI Gemini</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/google-ai-studio-gemini">Google AI Studio (Gemini API)</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/hyperbolic">Hyperbolic</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/mistral">Mistral</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/openai">OpenAI</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/together">Together</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/vllm">vLLM</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/xai">xAI</a></b></li>
</ul>
<p>
<em>
Need something else?
Your provider is most likely supported because TensorZero integrates with <b><a href="https://www.tensorzero.com/docs/gateway/guides/providers/openai-compatible">any OpenAI-compatible API (e.g. Ollama)</a></b>.
</em>
</p>
</td>
<td width="50%" align="left" valign="top">
<p>
The TensorZero Gateway supports advanced features like:
</p>
<ul>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/retries-fallbacks">Retries & Fallbacks</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/inference-time-optimizations">Inference-Time Optimizations</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/prompt-templates-schemas">Prompt Templates & Schemas</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/experimentation/">Experimentation (A/B Testing)</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/configuration-reference">Configuration-as-Code (GitOps)</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/batch-inference">Batch Inference</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/multimodal-inference">Multimodal Inference (VLMs)</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/inference-caching">Inference Caching</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/metrics-feedback">Metrics & Feedback</a></b></li>
<li><b><a href="https://www.tensorzero.com/docs/gateway/guides/episodes">Multi-Step LLM Workflows (Episodes)</a></b></li>
<li><em>& a lot more...</em></li>
</ul>
<p>
The TensorZero Gateway is written in Rust 🦀 with <b>performance</b> in mind (<1ms p99 latency overhead @ 10k QPS).
See <b><a href="https://www.tensorzero.com/docs/gateway/benchmarks">Benchmarks</a></b>.<br>
</p>
<p>
You can run inference using the <b>TensorZero client</b> (recommended), the <b>OpenAI client</b>, or the <b>HTTP API</b>.
</p>
</td>
</tr>
</table>
<br>
<details open>
<summary><b>Usage: Python — TensorZero Client (Recommended)</b></summary>
You can access any provider using the TensorZero Python client.

1. `pip install tensorzero`
2. Optional: Set up the TensorZero configuration.
3. Run inference:

```python
from tensorzero import TensorZeroGateway  # or AsyncTensorZeroGateway

with TensorZeroGateway.build_embedded(clickhouse_url="...", config_file="...") as client:
    response = client.inference(
        model_name="openai::gpt-4o-mini",
        # Try other providers easily: "anthropic::claude-3-7-sonnet-20250219"
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    )
```

</details>
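<details>
<summary><b>Usage: Python — OpenAI Client</b></summary>

You can also hit the gateway with the official OpenAI SDK. A minimal sketch, assuming a gateway running locally on port 3000 (see the Deployment Guide); the `tensorzero::model_name::...` prefix tells the gateway which model to route to.

```python
from openai import OpenAI

# The gateway holds your provider credentials, so the OpenAI API key is unused here.
client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-used")

response = client.chat.completions.create(
    model="tensorzero::model_name::openai::gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about artificial intelligence.",
        }
    ],
)
```

</details>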
### 📈 LLM Observability

Zoom in to debug individual API calls, or zoom out to monitor metrics across models and prompts over time — all using the open-source TensorZero UI.

- Store inferences and feedback (metrics, human edits, etc.) in your own database (see the feedback sketch after this list)
- Dive into individual inferences or high-level aggregate patterns using the TensorZero UI or programmatically
- Build datasets for optimization, evaluations, and other workflows
- Replay historical inferences with new prompts, models, inference strategies, etc.
- Export OpenTelemetry (OTLP) traces to your favorite general-purpose observability tool
- Soon: AI-assisted debugging and root cause analysis; AI-assisted data labeling
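For example, here's a minimal sketch of attaching feedback to an inference with the Python client. It assumes a gateway running locally on port 3000, and `task_success` is a hypothetical metric that you would define in your TensorZero configuration.

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        model_name="openai::gpt-4o-mini",
        input={"messages": [{"role": "user", "content": "Summarize this ticket: ..."}]},
    )

    # Attach feedback to the inference so it lands in your database alongside it.
    # "task_success" is a hypothetical metric defined in your TensorZero configuration.
    client.feedback(
        metric_name="task_success",
        inference_id=response.inference_id,
        value=True,
    )
```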
### 🎯 LLM Optimization

Optimize your prompts programmatically using research-driven optimization techniques.
<table>
<tr></tr> <!-- flip highlight order -->
<tr>
<td width="50%" align="center" valign="middle"><b><a href="https://www.tensorzero.com/docs/gateway/guides/inference-time-optimizations#best-of-n-sampling">MIPROv2</a></b></td>
<td width="50%" align="center" valign="middle"><b><a href="https://github.com/tensorzero/tensorzero/tree/main/examples/gsm8k-custom-recipe-dspy">DSPy Integration</a></b></td>
</tr>
<tr>
<td width="50%" align="center" valign="middle"><img src="https://github.com/user-attachments/assets/d81a7c37-382f-4c46-840f-e6c2593301db" alt="MIPROv2 diagram"></td>
<td width="50%" align="center" valign="middle">
TensorZero comes with several optimization recipes, but you can also easily create your own.
This example shows how to optimize a TensorZero function using an arbitrary tool — here, DSPy, a popular library for automated prompt engineering.
</td>
</tr>
</table>
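A custom recipe is ultimately just a script over your own data: read inferences (and feedback) from your database, then use any tool to produce better prompts or variants. A minimal sketch, assuming TensorZero's default ClickHouse schema (`ChatInference` table) and a hypothetical `extract_data` function:

```python
import clickhouse_connect  # pip install clickhouse-connect

# Pull recent production inferences for a function to use as optimization data.
# Table and column names assume TensorZero's default ClickHouse schema.
client = clickhouse_connect.get_client(host="localhost", port=8123)
result = client.query(
    "SELECT input, output FROM ChatInference WHERE function_name = %(f)s LIMIT 100",
    parameters={"f": "extract_data"},  # hypothetical function name
)
for input_, output in result.result_rows:
    ...  # feed these examples into your tool of choice (e.g. DSPy) to craft better prompts
```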
More coming soon...
<br>
### 📊 LLM Evaluations

Compare prompts, models, and inference strategies using TensorZero Evaluations — with support for heuristics and LLM judges.

- Evaluate individual inferences with static evaluations powered by heuristics or LLM judges (≈ unit tests for LLMs; see the sketch after this list)
- Evaluate end-to-end workflows with complete flexibility using dynamic evaluations (≈ integration tests for LLMs)
- Optimize LLM judges just like any other TensorZero function to align them to human preferences
- Soon: more built-in evaluators; headless evaluations
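To make the "unit tests for LLMs" analogy concrete, here's a hand-rolled heuristic check written as a plain pytest-style test. TensorZero Evaluations itself is driven by configuration (see the docs), so treat this only as a sketch of the idea, assuming a local gateway:

```python
from tensorzero import TensorZeroGateway

def test_haiku_has_three_lines():
    # Heuristic evaluation: assert a structural property of the model's output.
    with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
        response = client.inference(
            model_name="openai::gpt-4o-mini",
            input={"messages": [{"role": "user", "content": "Write a haiku about AI."}]},
        )
        text = response.content[0].text  # assumes a single text content block
        assert len(text.strip().splitlines()) == 3
```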
### 🧪 LLM Experimentation

Ship with confidence with built-in A/B testing, routing, fallbacks, retries, etc.

- Run A/B tests across models, prompts, providers, hyperparameters, etc.
- Enforce principled experiments (RCTs) in complex workflows, including multi-turn and compound LLM systems (see the episode sketch after this list)
- Soon: multi-armed bandits; AI-managed experiments
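Episodes are what enable this: every inference in a multi-step workflow shares an `episode_id`, so experiments and feedback can target the workflow as a whole. A minimal sketch with hypothetical prompts, assuming a local gateway:

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    # The first inference starts a new episode automatically.
    draft = client.inference(
        model_name="openai::gpt-4o-mini",
        input={"messages": [{"role": "user", "content": "Draft a product description."}]},
    )

    # Reuse the episode_id so this step is tracked (and experimented on)
    # as part of the same multi-step workflow.
    critique = client.inference(
        model_name="openai::gpt-4o-mini",
        episode_id=draft.episode_id,
        input={"messages": [{"role": "user", "content": "Critique the draft above: ..."}]},
    )
```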
### & more!

Build with an open-source stack well-suited for prototypes but designed from the ground up to support the most complex LLM applications and deployments.

- Build simple applications or massive deployments with GitOps-friendly orchestration
- Extend TensorZero with built-in escape hatches, programmatic-first usage, direct database access, and more
- Integrate with third-party tools: specialized observability and evaluations, model providers, agent orchestration frameworks, etc.
- Soon: UI playground
## Demo
Watch LLMs get better at data extraction in real-time with TensorZero!
Dynamic in-context learning (DICL) is a powerful inference-time optimization available out of the box with TensorZero. It enhances LLM performance by automatically incorporating relevant historical examples into the prompt, without the need for model fine-tuning.
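For illustration, a DICL variant lives in your TensorZero configuration like any other variant, and you can pin it by name at inference time (e.g. to compare it against a baseline). The function and variant names below are hypothetical:

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    # "extract_data" and "dicl" are hypothetical names for a function and a
    # DICL variant that you would define in your TensorZero configuration.
    response = client.inference(
        function_name="extract_data",
        variant_name="dicl",  # pin the DICL variant instead of letting TensorZero sample one
        input={"messages": [{"role": "user", "content": "Extract the fields from: ..."}]},
    )
```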
## Examples

This example shows how to use TensorZero to optimize a data extraction pipeline. We demonstrate techniques like fine-tuning and dynamic in-context learning (DICL). In the end, an optimized GPT-4o Mini model outperforms GPT-4o on this task — at a fraction of the cost and latency — using a small amount of training data.
This example shows how to build a multi-hop retrieval agent using TensorZero. The agent iteratively searches Wikipedia to gather information, and decides when it has enough context to answer a complex question.
This example fine-tunes GPT-4o Mini to generate haikus tailored to a specific taste. You'll see TensorZero's "data flywheel in a box" in action: better variants lead to better data, and better data leads to better variants. You'll see progress by fine-tuning the LLM multiple times.
This example showcases how best-of-N sampling can significantly enhance an LLM's chess-playing abilities by selecting the most promising moves from multiple generated options.
TensorZero provides a number of pre-built optimization recipes covering common LLM engineering workflows. But you can also easily create your own recipes and workflows! This example shows how to optimize a TensorZero function using an arbitrary tool — here, DSPy.