Agenta Core is Now Open Source

We're open sourcing the core of Agenta under the MIT license. All functional features are now available to the community.

What's Open Source

Every feature you need to build, test, and deploy LLM applications is now open source. This includes the evaluation system, prompt playground and management, observability, and all core workflows.

You can run evaluations using LLM-as-a-Judge, custom code evaluators, or any built-in evaluator. Create and manage test sets. Evaluate end-to-end workflows or specific spans in traces.

Experiment with prompts in the playground. Version and commit changes. Deploy to environments. Fetch configurations programmatically.
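
For example, fetching the configuration deployed to an environment from code might look roughly like this. This is a sketch that assumes the SDK's ConfigManager helper described in the prompt-management docs; the app and environment slugs are placeholders, so check the SDK reference for the exact call:

import agenta as ag

# Initialize the Agenta SDK
ag.init()

# Fetch the prompt configuration currently deployed to the "production" environment
# (slugs are placeholders; the ConfigManager call is assumed from the SDK docs)
config = ag.ConfigManager.get_from_registry(
    app_slug="my-app",
    environment_slug="production",
)

print(config)  # prompt template, model, and parameters as a dict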

Trace your LLM applications with OpenTelemetry support. View detailed execution traces. Monitor costs and performance. Filter and search traces.
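
As a rough sketch of what instrumentation looks like in code (assuming the SDK's @ag.instrument() decorator from the observability docs; the functions and data below are illustrative):

import agenta as ag

# Initialize the SDK so spans are exported to Agenta via OpenTelemetry
ag.init()

# Each decorated function becomes a span in the trace
@ag.instrument()
def retrieve_context(query: str) -> list[str]:
    # Placeholder retrieval step; swap in your vector store lookup
    return ["Paris is the capital of France."]

@ag.instrument()
def generate_answer(question: str) -> str:
    context = retrieve_context(question)
    # Placeholder generation step; swap in your LLM call
    return f"Answer based on {len(context)} retrieved documents."

generate_answer("What is the capital of France?")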

Building in Public Again

We've moved development back to the public repository. You can see what we're building, contribute features, and shape the product direction.

What Remains Under Commercial License

Only enterprise collaboration features stay under a separate license. This includes role-based access control (RBAC), single sign-on (SSO), and audit logs. These features support teams with specific compliance and security requirements.

Get Started

Follow the self-hosting quick start guide to deploy Agenta on your infrastructure. View the source code and contribute on GitHub. Read why we made this decision at agenta.ai/blog/commercial-open-source-is-hard-our-journey.

What This Means for You

You can run Agenta on your infrastructure with full access to evaluation, prompting, and observability features. You can modify the code to fit your needs. You can contribute back to the project.

The MIT license gives you freedom to use, modify, and distribute Agenta. We believe open source creates better products through community collaboration.

Evaluation SDK

The Evaluation SDK lets you run evaluations programmatically from code. You get full control over test data and evaluation logic. You can evaluate agents built with any framework and view results in the Agenta dashboard.

Why Programmatic Evaluation?

Complex AI agents need evaluation that goes beyond UI-based testing. The Evaluation SDK provides code-level control over test data and evaluation logic. You can test agents built with any framework. Run evaluations in your CI/CD pipeline. Debug complex workflows with full trace visibility.

Key Capabilities

Test Data Management

Create test sets directly in your code or fetch existing ones from Agenta. Test sets can include ground truth data for reference-based evaluation or work without it for evaluators that only need the output.

Built-in Evaluators

The SDK includes LLM-as-a-Judge, semantic similarity, and regex matching evaluators. You can also write custom Python evaluators for your specific requirements.

Reusable Configurations

Save evaluator configurations in Agenta to reuse them across runs. Configure an evaluator once, then reference it in multiple evaluations.

Span-Level Evaluation

Evaluate your agent end to end or test specific spans in the execution trace. Test individual components like retrieval steps or tool calls separately.

Run on Your Infrastructure

Evaluations run on your infrastructure. Results appear in the Agenta dashboard with full traces and comparison views.

Getting Started

Install the SDK:

pip install agenta

Here's a minimal example evaluating a simple agent:

import asyncio

import agenta as ag
from agenta.sdk.evaluations import aevaluate

# Initialize the Agenta SDK
ag.init()

# Define your application
@ag.application(slug="my_agent")
async def my_agent(question: str):
    # Your agent logic here (LLM calls, retrieval, tools, ...)
    answer = "..."  # placeholder: return your agent's real output
    return answer

# Define an evaluator
@ag.evaluator(slug="correctness_check")
async def correctness_check(expected: str, outputs: str):
    # Exact-match check against the ground truth from the test set
    return {
        "score": 1.0 if outputs == expected else 0.0,
        "success": outputs == expected,
    }

async def main():
    # Create test data
    testset = await ag.testsets.acreate(
        name="Agent Tests",
        data=[
            {"question": "What is 2+2?", "expected": "4"},
            {"question": "What is the capital of France?", "expected": "Paris"},
        ],
    )

    # Run evaluation
    result = await aevaluate(
        name="Agent Correctness Test",
        testsets=[testset.id],
        applications=[my_agent],
        evaluators=[correctness_check],
    )

    print(f"View results: {result['dashboard_url']}")

asyncio.run(main())

Dashboard Integration

Every evaluation run gets a shareable dashboard link. The dashboard shows full execution traces, comparison views for different versions, aggregated metrics, and individual test case details.

Next Steps

Check out the Quick Start Guide to build your first evaluation.

Online Evaluation

Online Evaluation automatically evaluates every request to your LLM application in production. Catch quality issues like hallucinations and off-brand responses as they happen.

How It Works

Online Evaluation runs evaluators on your production traces automatically. Monitor quality in real time instead of discovering issues through user complaints.

Key Features

Automatic Evaluation

Every request to your application gets evaluated automatically. The system runs your configured evaluators on each trace as it arrives.

Evaluator Configuration

Configure evaluators like LLM-as-a-Judge with custom prompts tailored to your quality criteria. Use any evaluator that works in regular evaluations.

Span-Level Evaluation

Create online evaluations with filters for specific spans in your traces. Evaluate just the retrieval step in your RAG pipeline or focus on specific tool calls in your agent.

Sampling Control

Set sampling rates to control costs. Evaluate every request during testing, then sample a percentage in production to balance quality monitoring with budget.
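
As a rough illustration (the request volume and per-call price here are made up): if your application serves 50,000 requests a day and one LLM-as-a-Judge call costs about $0.002, evaluating every request costs roughly 50,000 × $0.002 = $100 per day, while a 10% sampling rate brings that down to about $10 per day.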

Filtering and Analysis

View all evaluated requests in one place. Filter traces by evaluation scores to find problematic cases. Jump into detailed traces to understand what went wrong.

Build Better Test Sets

Add problematic cases directly to your test sets. Turn production failures into regression tests.

Setup

Setting up online evaluation takes a few minutes:

  1. Navigate to the Online Evaluation section
  2. Select the evaluators you want to run
  3. Configure sampling rates and span filters if needed
  4. Enable the online evaluation

Your application traces will be automatically evaluated as they arrive.

Use Cases

Catch hallucinations by running fact-checking evaluators on every response. Monitor brand compliance using LLM-as-a-Judge evaluators with custom prompts. Track RAG quality by evaluating retrieval in real time. Monitor agent reliability by checking tool calls and reasoning steps. Build better test sets by capturing edge cases from production.

Next Steps

Learn about configuring evaluators for your quality criteria.

Customize LLM-as-a-Judge Output Schemas

The LLM-as-a-Judge evaluator now supports custom output schemas. You can define exactly what feedback structure you need for your evaluations.

What's New

Flexible Output Types

Configure the evaluator to return different types of outputs:

  • Binary: Return a simple yes/no or pass/fail score
  • Multiclass: Choose from multiple predefined categories
  • Custom JSON: Define any structure that fits your use case

Include Reasoning for Better Quality

Enable the reasoning option to have the LLM explain its evaluation. This improves prediction quality because the model thinks through its assessment before providing a score.

When you include reasoning, the evaluator returns both the score and a detailed explanation of how it arrived at that judgment.

Advanced: Raw JSON Schema

For complete control, provide a raw JSON schema. The evaluator will return responses that match your exact structure.

This lets you capture multiple scores, categorical labels, confidence levels, and custom fields in a single evaluation pass. You can structure the output however your workflow requires.
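
As an illustration, a raw schema for multi-dimensional feedback might look like the sketch below, written here as a Python dict. The field names and categories are hypothetical, not a structure Agenta requires:

# Hypothetical output schema; adapt the fields to your own quality criteria
judge_output_schema = {
    "type": "object",
    "properties": {
        "accuracy": {"type": "number", "minimum": 0, "maximum": 1},
        "relevance": {"type": "number", "minimum": 0, "maximum": 1},
        "label": {"type": "string", "enum": ["excellent", "good", "fair", "poor"]},
        "reasoning": {"type": "string"},
    },
    "required": ["accuracy", "relevance", "label"],
}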

Use Custom Schemas in Evaluation

Once configured, your custom schemas work seamlessly in the evaluation workflow. The results display in the evaluation dashboard with all your custom fields visible.

This makes it easy to analyze multiple dimensions of quality in a single evaluation run.

Example Use Cases

Binary Score with Reasoning: Return a simple correct/incorrect judgment along with an explanation of why the output succeeded or failed.

Multi-dimensional Feedback: Capture separate scores for accuracy, relevance, completeness, and tone in one evaluation. Include reasoning for each dimension.

Structured Classification: Return categorical labels (excellent/good/fair/poor) along with specific issues found and suggestions for improvement.

Getting Started

To use custom output schemas with LLM-as-a-Judge:

  1. Open the evaluator configuration
  2. Select your desired output type (binary, multiclass, or custom)
  3. Enable reasoning if you want explanations
  4. For advanced use, provide your JSON schema
  5. Run your evaluation

Learn more in the LLM-as-a-Judge documentation.

Documentation Architecture Overhaul

We've completely rewritten and restructured our documentation with a new architecture. This is one of the largest updates we've made to the documentation, involving a near-complete rewrite of existing content and the addition of substantial new material.

Diataxis Framework Implementation

We've reorganized all documentation using the Diataxis framework, which separates content into tutorials, how-to guides, reference, and explanation so you can find the right kind of page for the task at hand.

Expanded Observability Documentation

One of the biggest gaps in our previous documentation was observability. We've added comprehensive documentation to close this gap.

JavaScript/TypeScript Support

Documentation now includes JavaScript and TypeScript examples alongside Python wherever applicable. This makes it easier for JavaScript developers to integrate Agenta into their applications.

Ask AI Feature

We've added a new "Ask AI" feature that lets you ask questions directly within the documentation and get instant answers without searching through pages.


Vertex AI Provider Support

We've added support for Google Cloud's Vertex AI platform. You can now use Gemini models and other Vertex AI partner models directly in Agenta.

What's New

Vertex AI is now available as a provider across the platform:

  • Playground: Configure and test Gemini models and other Vertex AI models
  • Model Hub: Add your Vertex AI credentials and manage available models
  • Gateway: Access Vertex AI models through the invoke endpoints

You can use any model available through Vertex AI, including:

  • Gemini models: Google's most capable AI models (gemini-2.5-pro, gemini-2.5-flash, etc.)
  • Partner models: Claude, Llama, Mistral, and other models available through Vertex AI Model Garden

Configuration

To get started with Vertex AI, go to Settings → Model Hub and add your Vertex AI credentials:

  • Vertex Project: Your Google Cloud project ID
  • Vertex Location: The region for your models (e.g., us-central1, europe-west4)
  • Vertex Credentials: Your service account key in JSON format (see the example below)
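
The credentials field expects a standard Google Cloud service account key, downloaded from the IAM console as a JSON file. A redacted example of its shape:

{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "agenta-vertex@my-gcp-project.iam.gserviceaccount.com",
  "client_id": "...",
  "token_uri": "https://oauth2.googleapis.com/token"
}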

For detailed setup instructions, see our documentation on adding custom providers.

Security

All API keys and credentials are encrypted both in transit and at rest, ensuring your sensitive information stays secure.


Filtering Traces by Annotation

We rebuilt the filtering system in observability with a new filter dropdown and support for annotation filtering: you can now filter and search traces based on their annotations, which makes it quick to find traces with low scores or bad feedback.

New Filter Options

The new dropdown is simpler and gives you more options. You can now filter by:

  • Span status: Find successful or failed spans
  • Input keys: Search for specific inputs in your spans
  • App or environment: Filter traces from specific apps or environments
  • Any key within your span: Search custom data in your trace structure

Annotation Filtering

Filter traces based on evaluations and feedback:

  • Evaluator results: Find spans evaluated by a specific evaluator
  • User feedback: Search for spans with feedback like success=True

This feature enables powerful workflows:

  1. Capture user feedback from your application using our API (see tutorial)
  2. Filter traces to find those with bad feedback or low scores
  3. Add them to test sets to track problematic cases
  4. Improve your prompts based on real user feedback

The filtering system makes it easy to turn production issues into test cases.


New Evaluation Results Dashboard

We rebuilt the evaluation results dashboard. Now you can check your results faster and see how well your AI performs.

What's New

Charts and Graphs

We added charts that show your AI's performance. You can quickly spot problems and see patterns in your data.

Compare Results Side by Side

Compare multiple tests at once. See which prompts or models work better. View charts and detailed results together.

Better Results Table

Results now show in a clean table format. It works great for small tests (10 cases) and big tests (10,000+ cases). The page loads fast no matter how much data you have.

Detailed View

Click on any result to see more details. Find out why a test passed or failed. Get the full picture of what happened.

See Your Settings

Check exactly which settings you used for each test. This helps you repeat successful tests and understand your results better.

Name Your Tests

Give your tests names and descriptions. Stay organized and help your team understand what each test does.

Deep URL Support for Sharable Links

URLs across Agenta now include workspace context, making them fully shareable between team members. This was a highly requested feature that addresses several critical issues with the previous URL structure.

What Changed

Before

  • URLs did not include workspace information
  • Sharing links between team members would redirect to the recipient's default workspace
  • Page refreshes would sometimes lose context and revert to the default workspace
  • Deep linking to specific resources was unreliable

Now

  • All URLs include the workspace context in the URL path
  • Links shared between team members work correctly, maintaining the intended workspace
  • Page refreshes maintain the correct workspace context
  • Deep linking works reliably for all resources

You can now create shareable deep links to almost any resource in Agenta:

  • Prompts: Share direct links to specific prompts in any workspace
  • Evaluations: Link directly to evaluation results and configurations
  • Test Sets: Share test sets with team members
  • Playground Sessions: Link to specific playground configurations

Speed Improvements in the Playground

We rewrote most of Agenta's frontend. You'll notice much faster load times when you create prompts or work in the playground.

We also made many improvements and fixed bugs:

  • LLM-as-a-judge now uses double curly braces {{}} instead of single curly braces { and }. This matches how normal prompts work. Old LLM-as-a-judge prompts with single curly braces still work. We updated the LLM-as-a-judge playground to make editing prompts easier.
  • You can now use an external Redis instance for caching by setting it as an environment variable
  • Fixed the custom workflow quick start tutorial and examples
  • Fixed SDK compatibility issues with Python 3.9
  • Fixed default filtering in observability dashboard
  • Fixed error handling in the evaluator playground
  • Fixed the Tracing SDK to allow instrumenting streaming responses and overriding OTEL environment variables