Weaponizing generative AI
Briefly

Developers are increasingly using AI to generate bug reports, flooding project maintainers with low-quality, hallucinated submissions and complicating security management.
According to Symbiotic Security's CEO, GenAI platforms learn from a vast pool of existing code and can pick up the unsafe coding practices it contains, because security is not prioritized in how these models are built.
The latest AI Safety Index finds that major LLM developers are falling short on safety standards, with the best performer earning only a C, signaling an urgent need for improvement.
Stuart Russell emphasizes that despite significant activity around AI safety, the work remains ineffective and offers no quantitative guarantees of safety.
Read at InfoWorld