Nametag partners with Okta to monitor agents
Briefly

"AI agents can be security issues waiting to happen. To ensure they comply with company policy, Nametag is enlisting Okta's help. Okta research shows how widespread AI agents already are: 91 percent of organizations use them today. Yet according to the same research, only 10 percent of those companies have a mature security strategy in this area. That is a considerable risk."
"One danger that Nametag has been addressing for some time is that of deepfakes. Its existing Deepfake Defense is now being given extensive integration with Okta: AI actions taken on behalf of an authorized employee receive a Verified Human Signature. Okta's policy engine helps Nametag's Deepfake Defense verify the person behind those AI actions, based on agentic frameworks that have rapidly become commonplace."
AI agents are widely used—91 percent of organizations employ them—but only 10 percent have a mature security strategy, creating considerable risk. Agents can access sensitive information, purchase products, or change payment details, which gives them the same trust and authorization requirements as employees. Nametag Signa enforces human authorization for agentic actions by issuing a Verified Human Signature when an action occurs on behalf of an authorized employee. The Verified Human Signature integrates Nametag's Deepfake Defense with Okta's policy engine, and verification leverages agentic frameworks such as the Model Context Protocol (MCP), Agent2Agent (A2A), and the Agents Payments Protocol (AP2). Effective AI security requires verifying both human and non-human identities: agentic AI enables new business uses, but security teams must know who is behind AI actions.
Read at Techzine Global