Neuroscientists and military vets: the inner workings of the team that hacks Microsoft's AI tools before their public debut
Briefly

"We have principles, we define them and we publish them. By definition, those principles create guardrails. And we stay in our lane within them. It's not just about when we should use technology, but also about when we shouldn't use it."
"The red teams were first created by armies to simulate enemy attacks and to detect vulnerabilities before a real adversary could do so. In cybersecurity, the practice has been established for decades."
"Before a product is launched, the red teams break the technology so that others can rebuild it to be more solid and secure."
Microsoft's president, Brad Smith, reflected on the ethical implications of AI in warfare during a conference. The company has published principles to guide its use of AI, emphasizing that those principles create guardrails: the question is not only when to use the technology, but also when not to. Microsoft supports Anthropic's lawsuit against the Pentagon, highlighting ongoing debates in Big Tech about AI's role in defense. The company also employs a red team to attack and test its AI products before launch, a practice that evolved from military war-gaming into a long-established cybersecurity discipline.
Read at english.elpais.com