Model Armor is a governance framework for large language models (LLMs) integrated into the Apigee API management platform. It enforces LLM-specific policies, including prompt validation and output filtering, at the API layer. Available across all Apigee tiers, it protects against risks such as prompt injection and sensitive-data exposure without requiring changes to downstream systems. Model Armor supports multiple LLM providers, enabling centralized governance, and a tutorial is available for implementing its policies.
Model Armor introduces out-of-the-box enforcement for LLM-specific policies such as prompt validation, output filtering, and token-level controls at the API layer.
These policies can detect issues like jailbreak attempts and prompt injection, allowing outputs to be redacted, altered, or blocked without modifying downstream systems.
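To make the request/response screening concrete, here is a minimal conceptual sketch of gateway-layer checks. This is not Model Armor's actual detection engine (which is a managed service); the pattern lists, function names, and redaction rule below are illustrative assumptions showing how a proxy can block a suspect prompt or redact a response.

```python
import re

# Illustrative patterns only -- a real service uses far more
# sophisticated, continuously updated detection models.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
# Example sensitive-data pattern (US SSN-shaped strings).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_prompt(prompt: str) -> dict:
    """Inspect an inbound prompt; block on a jailbreak/injection match."""
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            return {"action": "block", "reason": pat.pattern}
    return {"action": "allow"}

def filter_response(text: str) -> str:
    """Redact sensitive data from a model response before it leaves the proxy."""
    return SSN_PATTERN.sub("[REDACTED]", text)
```

Because both checks run in the proxy, neither the client nor the backing model needs to change when a rule is added or updated.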
Model Armor operates directly within Apigee's proxy layer, inspecting both requests and responses using declarative policies for consistent governance.
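As a sketch of what such a declarative policy looks like, the fragment below follows the general shape of Apigee's `SanitizeUserPrompt` policy attached to a proxy's request flow. The policy name, element names, and the template path placeholders are assumptions to be verified against the current Apigee reference, not a definitive configuration.

```xml
<!-- Hypothetical sketch of a request-flow policy; verify element names
     and fields against the current Apigee policy reference. -->
<SanitizeUserPrompt async="false" continueOnError="false" enabled="true" name="SanitizeUserPrompt-1">
  <DisplayName>Sanitize inbound prompt</DisplayName>
  <ModelArmor>
    <!-- Placeholders for the Model Armor template resource. -->
    <TemplateName>projects/{project}/locations/{location}/templates/{template}</TemplateName>
  </ModelArmor>
</SanitizeUserPrompt>
```

A matching response-side policy can screen model output the same way, so both directions of LLM traffic are governed from one place.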
With Model Armor, enterprises can treat LLM traffic with the same governance rigor as traditional APIs, supporting multiple LLM providers.