
"Before generative AI burst onto the scene in late 2022, companies took a more or less standard approach to managing the risks introduced by AI: They developed AI ethical risk (or Responsible AI or AI Governance) programs. These programs were designed by executives and focused primarily on writing and implementing enterprise-wide AI policies that are meant to explain how the organization will live up to its AI ethics values (or principles or pillars, as they are also called)."
"When generative AI showed up, organizations updated their programs to accommodate the new technology. Now that AI agents are gaining traction, most will likely try to update yet again."
Read at Harvard Business Review