At Defcon 2023, AI tech companies partnered with integrity groups to run red-teaming exercises scrutinizing generative AI systems. Humane Intelligence is now expanding that effort by inviting the public to participate.
The initiative aims to democratize the evaluation of AI systems, letting everyday users determine whether models meet their needs and underscoring the importance of transparency and ethics in AI.
The final event, held at the Conference on Applied Machine Learning in Information Security (CAMLIS), will split participants into teams to evaluate AI security, with one side attacking and the other defending, guided by NIST's AI Risk Management Framework.
The ambition is to conduct rigorous testing of the security, resilience, and ethics of generative AI technologies, pushing the boundaries of public involvement in AI evaluation.