Microsoft makes building trustworthy AI agents easier and more secure
Briefly

AI agents are revolutionizing productivity by executing tasks on behalf of users, ranging from simple email management to complex business transactions. At Microsoft Build, Microsoft presented significant advancements, including features aimed at improving the security and efficiency of AI agents within Microsoft 365 and GitHub. New tools were introduced, such as Agent Evaluators for assessing agent performance and an AI Red Teaming Agent for identifying vulnerabilities through simulated attacks, underscoring the need for rigorous safety protocols when deploying AI agents.
The amazing thing about agents is that they are actually able to do so much more -- they use tools, they take actions on your behalf -- and so the space of what can go wrong is much more significant.
By using new agentic technology and turning our safety evaluation system into an agent that will adversarially red-team your system, it is way, way easier to use, and it also results in better testing, because the feedback is immediate and actionable.
Read at ZDNET