Launch Week #2 Day 5: Jinja2 Prompt Templates
The Agenta prompt playground now supports Jinja2 prompt templates, letting you create dynamic LLM prompts with conditional logic, filters, and on-the-fly variable transformations.
Nov 14, 2025 · 5 minutes
Hello everyone, and welcome to the final day of our launch week!
Before we get to today's announcement, here's a quick recap of what we launched this week:
Day 1: Evaluation Dashboard for comprehensive evaluation insights
Day 2: Online Evaluation for production monitoring
Day 3: Evaluation SDK for programmatic evaluation workflows
Day 4: Open-sourcing all core evaluation functionality in Agenta
Today: Jinja2 Template Support in the Playground
We're excited to announce a powerful update to the Agenta playground. You can now use Jinja2 templating in your prompts.
This means you can add sophisticated logic directly into your prompt templates. Use conditional statements, apply filters to variables, and transform data on the fly.
Example
Here's a prompt template that uses Jinja2 to adapt based on user expertise level:
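A template of this kind might look like the following sketch (the `expertise_level` and `question` variable names and the exact wording are illustrative, not the original example):

```jinja2
{% if False %}{{ expertise_level }} {{ question }}{% endif %}
You are {% if expertise_level == "beginner" %}a friendly teacher who explains
concepts in simple terms{% else %}an expert advisor who goes deep into
technical detail{% endif %}.

Answer the following question at the {{ expertise_level | upper }} level:
{{ question }}
```

The `| upper` filter shows how Jinja2 filters can transform a variable inline before it reaches the model.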
Note: The {% if False %} block makes variables available to the playground without including them in the final prompt.
Using Jinja2 Prompts
When you fetch a Jinja2 prompt via the SDK, you get the template format included in the configuration:
```json
{
  "prompt": {
    "messages": [
      {
        "role": "user",
        "content": "You are {% if expertise_level == \"beginner\" %}a friendly teacher...{% endif %}"
      }
    ],
    "llm_config": {
      "model": "gpt-4",
      "temperature": 0.7
    },
    "template_format": "jinja2"
  }
}
```
The template_format field tells Agenta how to process your variables. This works both when invoking prompts through Agenta as an LLM gateway and when fetching prompts programmatically via the SDK.
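If you fetch the configuration and render it yourself rather than going through the gateway, one way to honor `template_format` is to dispatch on it client-side. A minimal sketch, assuming the `jinja2` package is installed and using a simplified config (the `render_prompt` helper is ours, not part of the Agenta SDK):

```python
from jinja2 import Template

def render_prompt(message: dict, template_format: str, variables: dict) -> str:
    """Render one prompt message's content using the declared template format."""
    content = message["content"]
    if template_format == "jinja2":
        # Jinja2 handles conditionals, loops, and filters.
        return Template(content).render(**variables)
    # Fall back to plain curly-brace substitution for non-Jinja2 templates.
    return content.format(**variables)

# A simplified version of the configuration shown above.
config = {
    "prompt": {
        "messages": [
            {
                "role": "user",
                "content": 'You are {% if expertise_level == "beginner" %}'
                           'a friendly teacher{% else %}an expert advisor{% endif %}.',
            }
        ],
        "template_format": "jinja2",
    }
}

rendered = render_prompt(
    config["prompt"]["messages"][0],
    config["prompt"]["template_format"],
    {"expertise_level": "beginner"},
)
print(rendered)  # You are a friendly teacher.
```

Dispatching on the field (instead of assuming one format) keeps older curly-brace prompts working alongside new Jinja2 ones.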
That wraps up our launch week! We hope you explore these updates and find them useful. If you try them out, we'd love your feedback.
You can check our roadmap and vote on upcoming features at https://docs.agenta.ai/roadmap. You can also request new features directly on GitHub.
One last thing: we're launching on Product Hunt on Friday, November 28, and we'd really appreciate your support. You can follow our page on Product Hunt now.
Thanks, and happy prompting!
Copyright © 2020 - 2060 Agentatech UG