Jinja2 Template Support in the Playground
We're excited to announce a powerful update to the Agenta playground. You can now use Jinja2 templating in your prompts.
This means you can add sophisticated logic directly into your prompt templates. Use conditional statements, apply filters to variables, and transform data on the fly.
Learn more in our blog post or check the documentation.
Example
Here's a prompt template that uses Jinja2 to adapt based on user expertise level:
You are {% if expertise_level == "beginner" %}a friendly teacher who explains concepts in simple terms{% else %}a technical expert providing detailed analysis{% endif %}.
Explain {{ topic }} {% if include_examples %}with practical examples{% endif %}.
{% if False %} {{expertise_level}} {{include_examples}} {% endif %}
Note: The {% if False %} block makes variables available to the playground without including them in the final prompt.
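To see how the template above behaves, here is a short sketch that renders it locally with the open-source jinja2 Python library. This is purely illustrative (Agenta renders templates for you when you invoke a prompt); the variable values are made up for the example.

```python
from jinja2 import Template

# The same prompt template shown above, as a single string.
template = Template(
    'You are {% if expertise_level == "beginner" %}'
    "a friendly teacher who explains concepts in simple terms"
    "{% else %}a technical expert providing detailed analysis{% endif %}. "
    "Explain {{ topic }} {% if include_examples %}with practical examples{% endif %}."
)

# Render with example variable values.
prompt = template.render(
    expertise_level="beginner",
    topic="recursion",
    include_examples=True,
)
print(prompt)
# You are a friendly teacher who explains concepts in simple terms. Explain recursion with practical examples.
```

Switching expertise_level to any other value takes the {% else %} branch, and setting include_examples to False drops the trailing clause entirely.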
Using Jinja2 Prompts
When you fetch a Jinja2 prompt via the SDK, you get the template format included in the configuration:
{
  "prompt": {
    "messages": [
      {
        "role": "user",
        "content": "You are {% if expertise_level == \"beginner\" %}a friendly teacher...{% endif %}"
      }
    ],
    "llm_config": {
      "model": "gpt-4",
      "temperature": 0.7
    },
    "template_format": "jinja2"
  }
}
The template_format field tells Agenta how to process your variables. This works both when invoking prompts through Agenta as an LLM gateway and when fetching prompts programmatically via the SDK.
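As a hypothetical sketch of what the SDK flow enables, the snippet below renders a fetched configuration client-side by dispatching on template_format. The config dict mirrors the JSON shape above; the fetch call itself is elided, and render_messages is an illustrative helper, not part of the Agenta SDK.

```python
from jinja2 import Template

# A config shaped like the JSON returned when fetching a Jinja2 prompt.
config = {
    "prompt": {
        "messages": [
            {
                "role": "user",
                "content": 'You are {% if expertise_level == "beginner" %}'
                           "a friendly teacher{% else %}a technical expert{% endif %}.",
            }
        ],
        "llm_config": {"model": "gpt-4", "temperature": 0.7},
        "template_format": "jinja2",
    }
}

def render_messages(config: dict, variables: dict) -> list[dict]:
    """Apply variables to each message, honoring template_format."""
    prompt = config["prompt"]
    if prompt.get("template_format") == "jinja2":
        return [
            {**m, "content": Template(m["content"]).render(**variables)}
            for m in prompt["messages"]
        ]
    # Other formats (e.g. plain curly-brace substitution) would go here.
    return prompt["messages"]

messages = render_messages(config, {"expertise_level": "beginner"})
print(messages[0]["content"])
# You are a friendly teacher.
```

When you invoke the prompt through Agenta as an LLM gateway instead, this rendering happens server-side and you only supply the variable values.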