
LlamaIndex Integration

We're excited to announce observability support for LlamaIndex applications.

If you're using LlamaIndex, you can now see detailed traces in Agenta to debug your application.

The integration uses auto-instrumentation: add one line of code and all your LlamaIndex operations are traced automatically.

This helps when you need to understand what's happening inside your RAG pipeline, track performance bottlenecks, or debug issues in production.
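As a sketch, the setup looks roughly like this. The instrumentor package and class names below are assumptions for illustration; see the tutorial for the exact snippet.

```python
# Illustrative setup only -- the instrumentor package/class names are
# assumptions; see the Agenta LlamaIndex tutorial for the exact snippet.
import agenta as ag
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

ag.init()                              # reads the Agenta API key from the environment
LlamaIndexInstrumentor().instrument()  # traces all LlamaIndex operations from here on

# From this point, any LlamaIndex query is traced in Agenta, e.g.:
# index.as_query_engine().query("What is observability?")
```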

Check out the tutorial and the Jupyter notebook for more details.

Annotate Your LLM Response (preview)

One of our most requested features has been the ability to capture user feedback and annotations (e.g., scores) on LLM responses traced in Agenta.

Today we're previewing the first of a family of features on this topic.

You can now use the annotation API to add annotations to LLM responses traced in Agenta.

This is useful to:

  • Collect user feedback on LLM responses
  • Run custom evaluation workflows
  • Measure application performance in real-time
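As a rough sketch of the kind of payload an annotation might carry (the field names below are illustrative assumptions, not the documented API schema; see the how-to guide for the real shape):

```python
import json

# Hypothetical annotation payload -- field names are illustrative, not the
# documented Agenta API schema; see the annotation how-to for the real shape.
annotation = {
    "span_id": "a1b2c3",  # the traced LLM response being annotated
    "annotation": {
        "score": 0.9,     # e.g. a thumbs-up mapped to a numeric score
        "comment": "Accurate and concise answer",
    },
}

# This payload would be sent to the annotation endpoint with your API key.
payload = json.dumps(annotation)
print(payload)
```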

Check out the guide on how to annotate traces from the API for more details, or try the new tutorial, available as a Jupyter notebook.

Other improvements:

  • We've cut the migration process from about an hour down to a couple of minutes.

Tool Support in the Playground

We've released tool support in the Agenta playground - a key feature for anyone building agents with LLMs.

Agents need tools to access external data, perform calculations, or call APIs.

Now you can:

  • Define tools directly in the playground using JSON schema
  • Test how your prompt generates tool calls in real-time
  • Preview how your agent handles tool responses
  • Verify tool call correctness with custom evaluators

The tool schema is saved with your prompt configuration, making integration easy when you fetch configs through the API.
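For illustration, a tool definition in the common JSON-Schema function-calling style might look like this (the tool name and parameters are made up for the example):

```python
import json

# A hypothetical "get_weather" tool described with JSON Schema, in the
# function-calling style most LLM providers use. The name and parameters
# are made up for illustration.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

print(json.dumps(get_weather_tool, indent=2))
```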


Documentation Overhaul, New Models, and Platform Improvements

We've made significant improvements across Agenta with a major documentation overhaul, new model support, self-hosting enhancements, and UI improvements.

Revamped Prompt Engineering Documentation:

We've completely rewritten our prompt management and prompt engineering documentation.

Start exploring the new documentation in our updated Quick Start Guide.

New Model Support:

Our platform now supports several new models:

  • Google's Gemini 2.5 Pro and Flash
  • Alibaba Cloud's Qwen 3
  • OpenAI's GPT-4.1

These models are available in both the playground and through the API.

Playground Enhancements:

We've added a draft state to the playground, providing a better editing experience. Changes are now clearly marked as drafts until committed.

Self-Hosting Improvements:

We've significantly simplified the self-hosting experience by changing how environment variables are handled in the frontend:

  • No more rebuilding images to change ports or domains
  • Dynamic configuration through environment variables at runtime
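For example, overriding the domain at startup might look like this. The variable names below are assumptions for illustration; the self-hosting docs list the exact keys.

```shell
# Illustrative only -- consult the self-hosting docs for the exact variable names.
AGENTA_API_URL=https://agenta.example.com/api \
AGENTA_WEB_URL=https://agenta.example.com \
docker compose up -d
```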

Check out our updated self-hosting documentation for details.

Bug Fixes and Optimizations:

  • Fixed OpenTelemetry integration edge cases
  • Resolved edge cases in the API that affected certain workflow configurations
  • Improved UI responsiveness and fixed minor visual inconsistencies
  • Added chat support in cloud

We are SOC 2 Type 2 Certified

Agenta is now SOC 2 Type 2 certified. An independent third party has audited our platform and verified that it meets rigorous standards for security and compliance.


Structured Output Support in the Playground

The playground now supports structured outputs. You can define the expected output format and validate responses against it.

With Agenta's playground, implementing structured outputs is straightforward:

  • Open any prompt

  • Switch the Response format dropdown from text to JSON mode or JSON Schema

  • Paste or write your schema (Agenta supports the full JSON Schema specification)

  • Run the prompt - the response panel shows the response pretty-printed

  • Commit the changes - the schema will be saved with your prompt, so when your SDK fetches the prompt, it will include the schema information
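For example, a schema for extracting contact details might look like the following. The schema contents are made up for illustration, and the wrapper follows the JSON-Schema response-format style used by OpenAI-compatible APIs.

```python
import json

# An example JSON Schema you might paste into the Response format field.
# The schema contents are made up for illustration; the wrapper follows the
# JSON-Schema response-format style used by OpenAI-compatible APIs.
contact_schema = {
    "name": "contact",
    "schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "email": {"type": "string"},
        },
        "required": ["name", "email"],
        "additionalProperties": False,
    },
}

print(json.dumps(contact_schema, indent=2))
```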

Check out the blog post for more details: https://agenta.ai/blog/structured-outputs-playground


New Feature: Prompt and Deployment Registry

We've introduced the Prompt and Deployment Registry, giving you a centralized place to manage all variants and versions of your prompts and deployments.

Key capabilities:

  • View all variants and revisions in a single table
  • Access all commits made to a variant
  • Use older versions of variants directly in the playground

Learn more in our blog post.

Bug Fixes

  • Fixed minor UI issues with dots in the sidebar menu
  • Fixed minor playground UI issues
  • Fixed playground reset default model name
  • Fixed project_id issue on testset detail page
  • Fixed breaking issues with old variants encountered during QA
  • Fixed variant naming logic

Improvements to the Playground and Custom Workflows

We've made several improvements to the playground, including:

  • Improved scrolling behavior
  • Increased discoverability of variants creation and comparison
  • Implemented stop functionality in the playground

Custom workflows now support sub-routes: you can define multiple routes in one file and create multiple custom workflows from the same file.


OpenTelemetry Compliance and Custom Workflows from the API

We've introduced major improvements to Agenta, focusing on OpenTelemetry compliance and simplified custom workflow debugging.

OpenTelemetry (OTel) Support:

Agenta is now fully OpenTelemetry-compliant. This means you can seamlessly integrate Agenta with thousands of OTel-compatible services using existing SDKs. To integrate your application with Agenta, simply configure an OTel exporter pointing to your Agenta endpoint—no additional setup required.
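With any OTLP-based SDK, the exporter can typically be pointed at Agenta through the standard OpenTelemetry environment variables. The endpoint URL and header value below are placeholders; see the documentation for the real values.

```shell
# Standard OTel exporter variables -- the URL and header value are placeholders;
# see the Agenta docs for the actual endpoint and authentication header.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-agenta-host/api/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer YOUR_AGENTA_API_KEY"
```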

We've enhanced distributed tracing capabilities to better debug complex distributed agent systems. All HTTP interactions between agents—whether running within Agenta's SDK or externally—are automatically traced, making troubleshooting and monitoring easier.

Detailed instructions and examples are available in our distributed tracing documentation.

Improved Custom Workflows:

Based on your feedback, we've streamlined debugging and running custom workflows:

  • Run workflows from your environments: You no longer need the Agenta CLI to manage custom workflows. Setting up custom workflows now involves simply adding the Agenta SDK to your code, creating an endpoint, and connecting it to Agenta via the web UI. You can check how it's done in the quick start guide.

  • Custom Workflows in the new playground: Custom workflows are now fully compatible with the new playground. You can now nest configurations, run side-by-side comparisons, and debug your agents and complex workflows very easily.


New Playground

We've rebuilt our playground from scratch to make prompt engineering faster and more intuitive. The old playground took 20 seconds to create a prompt - now it's instant.

Key improvements:

  • Create prompts with multiple messages using our new template system
  • Format variables easily with curly bracket syntax and a built-in validator
  • Switch between chat and completion prompts in one interface
  • Load test sets directly in the playground to iterate faster
  • Save successful outputs as test cases with one click
  • Compare different prompts side-by-side
  • Deploy changes straight to production
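The curly-bracket variable syntax maps directly onto standard template substitution. For instance, filling in a fetched template in Python might look like this (the template text is made up for illustration):

```python
# A prompt template using curly-bracket variables, as in the playground.
# The template text is made up for illustration.
template = "Summarize the following text in {num_words} words:\n\n{text}"

prompt = template.format(num_words=50, text="Observability helps teams debug LLM apps.")
print(prompt)
```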

For developers: you can now create prompts programmatically through our API.

You can explore these features in our updated playground documentation.