
Filtering Traces by Annotation

We rebuilt the filtering system in observability, adding a new filter dropdown with more options and support for annotation filtering. You can now filter and search traces based on their annotations, which makes it quick to find traces with low scores or negative feedback.

New Filter Options

The new dropdown is simpler and gives you more options. You can now filter by:

  • Span status: Find successful or failed spans
  • Input keys: Search for specific inputs in your spans
  • App or environment: Filter traces from specific apps or environments
  • Any key within your span: Search custom data in your trace structure

Annotation Filtering

Filter traces based on evaluations and feedback:

  • Evaluator results: Find spans evaluated by a specific evaluator
  • User feedback: Search for spans with feedback like success=True

This feature enables powerful workflows:

  1. Capture user feedback from your application using our API (see tutorial)
  2. Filter traces to find those with bad feedback or low scores
  3. Add them to test sets to track problematic cases
  4. Improve your prompts based on real user feedback

The filtering system makes it easy to turn production issues into test cases.


New Evaluation Results Dashboard

We rebuilt the evaluation results dashboard. Now you can check your results faster and see how well your AI performs.

What's New

Charts and Graphs

We added charts that show your AI's performance. You can quickly spot problems and see patterns in your data.

Compare Results Side by Side

Compare multiple tests at once. See which prompts or models work better. View charts and detailed results together.

Better Results Table

Results now appear in a clean table format that works well for both small tests (10 cases) and large tests (10,000+ cases). The page loads quickly no matter how much data you have.

Detailed View

Click on any result to see more details. Find out why a test passed or failed. Get the full picture of what happened.

See Your Settings

Check exactly which settings you used for each test. This helps you repeat successful tests and understand your results better.

Name Your Tests

Give your tests names and descriptions. Stay organized and help your team understand what each test does.

Deep URL Support for Sharable Links

URLs across Agenta now include workspace context, making them fully shareable between team members. This was a highly requested feature that addresses several critical issues with the previous URL structure.

What Changed

Before

  • URLs did not include workspace information
  • Sharing links between team members would redirect to the recipient's default workspace
  • Page refreshes would sometimes lose context and revert to the default workspace
  • Deep linking to specific resources was unreliable

Now

  • All URLs include the workspace context in the URL path
  • Links shared between team members work correctly, maintaining the intended workspace
  • Page refreshes maintain the correct workspace context
  • Deep linking works reliably for all resources

You can now create shareable deep links to almost any resource in Agenta:

  • Prompts: Share direct links to specific prompts in any workspace
  • Evaluations: Link directly to evaluation results and configurations
  • Test Sets: Share test sets with team members
  • Playground Sessions: Link to specific playground configurations

Speed Improvements in the Playground

We rewrote most of Agenta's frontend. You'll see much faster speeds when you create prompts or use the playground.

We also made many improvements and fixed bugs:

  • LLM-as-a-judge prompts now use double curly braces ({{variable}}) instead of single curly braces ({variable}), matching how normal prompts work. Old LLM-as-a-judge prompts with single curly braces still work, and we've updated the LLM-as-a-judge playground to make editing prompts easier (see the example after this list).
  • You can now use an external Redis instance for caching by configuring it through an environment variable
  • Fixed the custom workflow quick start tutorial and examples
  • Fixed SDK compatibility issues with Python 3.9
  • Fixed default filtering in observability dashboard
  • Fixed error handling in the evaluator playground
  • Fixed the Tracing SDK to allow instrumenting streaming responses and overriding OTEL environment variables
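As a quick illustration of the brace change, here is what an LLM-as-a-judge prompt template looks like in the old and new syntax (the variable names below are just examples):

```python
# Old LLM-as-a-judge syntax (single braces) -- still supported:
old_template = "Rate the answer {answer} to the question {question} on a scale of 1-10."

# New syntax (double braces), matching how variables work in normal prompts:
new_template = "Rate the answer {{answer}} to the question {{question}} on a scale of 1-10."
```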

Multiple Metrics in Human Evaluation

We spent the past months rethinking how evaluation should work. Today we're announcing one of the first big improvements.

The fastest teams building LLM apps were using human evaluation to check their outputs before going live. Agenta was helping them do this in minutes.

But we also saw that they were limited: you could only score the outputs on a single metric.

That's why we rebuilt the human evaluation workflow.

Now you can set up multiple evaluators and metrics and use them to score the outputs. This lets you evaluate the same output on different criteria like relevance or completeness. Scores can be binary or numerical, or you can even use free-text strings for comments or expected answers.

This unlocks a whole new set of use cases:

  • Compare your prompts on multiple metrics and understand where you can improve.
  • Turn your annotations into test sets and use them in prompt engineering. For instance, you can add comments that help you later improve your prompts.
  • Use human evaluation to bootstrap automatic evaluation. You can annotate your outputs with the expected answer or a rubric, then use them to set up an automatic evaluation.

Watch the video below and read the post for more details. Or check out the docs to learn how to use the new human evaluation workflow.


Major Playground Improvements and Enhancements

We've made lots of improvements to the playground. Here are some of the highlights:

JSON Editor Improvements

Enhanced Error Display and Editing

The JSON editor now provides clearer error messages and improved editing functionality. We've fixed issues with error display that previously made it difficult to debug JSON configuration problems.

Undo Support with Ctrl+Z

You can now use Ctrl+Z (or Cmd+Z on Mac) to undo changes in the JSON editor, making it much easier to iterate on complex JSON configurations without fear of losing your work.

Bug Fix: JSON Field Order Preservation

The structured output JSON field order is now preserved throughout the system. This is crucial when working with LLMs that are sensitive to the ordering of JSON fields in their responses.

Previously, JSON objects might have their field order changed during processing, which could affect LLM behavior and evaluation consistency. Now, the exact order you define is maintained across all operations.
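A common case where ordering matters is a structured-output schema that asks the model for its reasoning before its final answer; since the model generates fields in the order they appear, reordering them changes its behavior. The schema below is a hypothetical sketch in the OpenAI-style response format, not Agenta's exact configuration:

```python
# Hypothetical structured-output schema where field order matters:
# the model fills "reasoning" before "answer", so preserving the order
# you defined keeps the "think first, answer second" behavior intact.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "graded_answer",
        "schema": {
            "type": "object",
            "properties": {
                "reasoning": {"type": "string"},  # generated first
                "answer": {"type": "string"},     # generated second
            },
            "required": ["reasoning", "answer"],
        },
    },
}
```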

Playground Improvements

Dynamic variables

We've improved how the editor handles dynamic variables in the prompt.

Markdown and Text View Toggle

You can now switch between markdown and text view for messages.

Collapsible Interface Elements

We've added the ability to collapse various sections of the playground interface, helping you focus on what matters most for your current task.

Collapsible Test Cases for Large Sets

When loading large test sets, you can now collapse individual test cases to better manage the interface.

Visual diff when committing changes

The playground now shows a visual diff when you're committing changes, making it easy to review exactly what modifications you're about to save.


Support for Images in the Playground

Agenta now supports images in the playground, test sets, and evaluations. This enables a systematic workflow for developing and testing applications that use vision models.

New Features:

  • Image Support in Playground: Add images directly to your prompts when experimenting in the playground.
  • Multi-modal Test Sets: Create and manage test sets that include image inputs alongside text.
  • Image-based Evaluations: Run evaluations on prompts designed to process images, allowing for systematic comparison of different prompt versions or models.

LlamaIndex Integration

We're excited to announce observability support for LlamaIndex applications.

If you're using LlamaIndex, you can now see detailed traces in Agenta to debug your application.

The integration uses auto-instrumentation: just add one line of code and all your LlamaIndex operations will be traced.

This helps when you need to understand what's happening inside your RAG pipeline, track performance bottlenecks, or debug issues in production.
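Here's a minimal sketch of what the setup can look like. It assumes the integration builds on the OpenInference LlamaIndex instrumentor and that your Agenta and OpenAI credentials are set in the environment; follow the tutorial for the exact, up-to-date steps.

```python
# pip install agenta llama-index openinference-instrumentation-llama-index
import agenta as ag
from llama_index.core import Document, VectorStoreIndex
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Assumes AGENTA_API_KEY (and optionally AGENTA_HOST) are set in the environment.
ag.init()

# The "one line" of auto-instrumentation: LlamaIndex operations are traced from here on.
LlamaIndexInstrumentor().instrument()

# Normal LlamaIndex usage -- the query below shows up as a trace in Agenta.
# (Uses the default OpenAI models, so OPENAI_API_KEY must also be set.)
index = VectorStoreIndex.from_documents(
    [Document(text="Agenta is an open-source LLMOps platform.")]
)
print(index.as_query_engine().query("What is Agenta?"))
```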

Check out the tutorial and the Jupyter notebook for more details.

Annotate Your LLM Response (preview)

One of the most frequently requested features was the ability to capture user feedback and annotations (e.g., scores) for LLM responses traced in Agenta.

Today we're previewing one of a family of features around this topic.

As of today you can use the annotation API to add annotations to LLM responses traced in Agenta.

This is useful to:

  • Collect user feedback on LLM responses
  • Run custom evaluation workflows
  • Measure application performance in real-time

Check out the guide on how to annotate traces from the API for more details, or try our new tutorial (available as a Jupyter notebook) here.
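As a rough sketch, annotating a traced LLM response over HTTP can look like the snippet below. The endpoint path, header scheme, and payload field names here are assumptions for illustration only; the how-to guide and tutorial show the exact schema.

```python
import os
import requests

AGENTA_HOST = os.environ.get("AGENTA_HOST", "https://cloud.agenta.ai")
API_KEY = os.environ["AGENTA_API_KEY"]

# trace_id and span_id identify the traced invocation you want to annotate;
# you can get them from the tracing SDK or from the trace view in the UI.
payload = {
    "annotation": {
        "data": {
            "outputs": {"score": 0.2, "comment": "Answer missed the user's question"}
        },
        # Hypothetical field names -- check the how-to guide for the real schema.
        "references": {"evaluator": {"slug": "user-feedback"}},
        "links": {"invocation": {"trace_id": "YOUR_TRACE_ID", "span_id": "YOUR_SPAN_ID"}},
    }
}

response = requests.post(
    f"{AGENTA_HOST}/api/preview/annotations/",   # assumed endpoint path
    headers={"Authorization": f"ApiKey {API_KEY}"},  # header scheme may differ
    json=payload,
)
response.raise_for_status()
print(response.json())
```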

Other stuff:

  • We've cut the migration process from about an hour down to a couple of minutes.

Tool Support in the Playground

We released tool usage in the Agenta playground - a key feature for anyone building agents with LLMs.

Agents need tools to access external data, perform calculations, or call APIs.

Now you can:

  • Define tools directly in the playground using JSON schema
  • Test how your prompt generates tool calls in real-time
  • Preview how your agent handles tool responses
  • Verify tool call correctness with custom evaluators

The tool schema is saved with your prompt configuration, making integration easy when you fetch configs through the API.
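For illustration, a tool definition typically follows the familiar JSON-schema, function-calling shape shown below (written here as a Python dict). The field names follow the OpenAI convention and are an assumption; the playground's JSON editor shows the exact structure it expects.

```python
# A typical JSON-schema tool definition in the OpenAI function-calling style.
# Paste the equivalent JSON into the playground's tool editor; the schema is then
# saved alongside your prompt configuration and returned when you fetch the config.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```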