We know that customers don't all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a score and see the outcomes.
Three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations are available in preview on Azure AI.
Collection
[
|
...
]