Overview
LLM Observability supports evaluations in several ways. Evaluations can be configured by navigating to AI Observability > Settings > Evaluations.
Custom LLM-as-a-judge evaluations
Custom LLM-as-a-judge evaluations allow you to define your own evaluation logic using natural language prompts. You can create custom evaluations to assess subjective or objective criteria (like tone, helpfulness, or factuality) and run them at scale across your traces and spans.
Managed evaluations
Datadog builds and maintains managed evaluations to cover common use cases. You can enable and configure them within the LLM Observability application.
Submit external evaluations
You can also submit external evaluations using Datadog’s API. This is useful if you have your own evaluation system but want to centralize that information within Datadog.
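For illustration, the following minimal sketch submits an external evaluation with the ddtrace Python SDK; it assumes the LLMObs.export_span and LLMObs.submit_evaluation helpers, and the ml_app name, model name, evaluation label, and tag values are placeholders for your own setup.

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm

LLMObs.enable(ml_app="my-ml-app")  # assumes your Datadog API key is configured in the environment

@llm(model_name="gpt-4", name="summarize")
def summarize(text):
    # ... call your model here ...
    # Export the active span's context so an evaluation can be attached to it later.
    span_context = LLMObs.export_span(span=None)
    return "summary text", span_context

output, span_context = summarize("some long document")

# Attach an evaluation produced by your own evaluation system to that span.
LLMObs.submit_evaluation(
    span_context=span_context,
    label="factuality",       # placeholder evaluation label
    metric_type="score",      # "score" or "categorical"
    value=0.9,
    tags={"evaluation_provider": "my_eval_system"},
)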
Evaluation integrations
Datadog also supports integrations with third-party evaluation frameworks such as Ragas and NeMo.
Sensitive Data Scanner integration
In addition to evaluating the input and output of LLM requests, agents, workflows, or the application, LLM Observability integrates with Sensitive Data Scanner, which helps prevent data leakage by identifying and redacting any sensitive information.
Security
Get real-time security guardrails for your AI apps and agents
AI Guard helps secure your AI apps and agents in real time against prompt injection, jailbreaking, tool misuse, and sensitive data exfiltration attacks. Try it today!
Permissions
LLM Observability Write permissions are necessary to configure evaluations.