Fixation on futuristic AI catastrophe scenarios is allowing companies to evade accountability for the very real harms their technology is already causing, a professor says. In an essay published this week, Tobias Osborne, a professor of theoretical physics at Leibniz Universität Hannover and a cofounder of the scientific communication firm Innovailia, said debates about superintelligent machines and a hypothetical "singularity" have become a dangerous distraction.
The AI debate has increasingly been shaped by doomsday scenarios, including warnings that superintelligent systems could wipe out humanity by design or by accident, become uncontrollable, or trigger civilizational collapse. Such fears have been amplified by prominent AI researchers, tech leaders, and government reports. In comments to Business Insider, Osborne said the fixation on these scenarios has a concrete effect on regulation and accountability.
Osborne argues that fixation on hypothetical AI extinction scenarios diverts attention from measurable harms occurring now, including labor displacement, environmental impacts, and widespread copyright infringement. Treating AI firms as guardians against civilizational catastrophe shifts their status toward that of national-security actors and away from ordinary product vendors, reducing liability and discouraging normal regulation. That framing enables firms to externalize harms while benefiting from regulatory deference, secrecy, and public subsidies. Policymaking focused on speculative superintelligence risks weakens oversight of current technologies and obstructs accountability for tangible, present-day consequences. In his view, urgent regulatory attention should prioritize proven harms and close accountability gaps in deployment and business practices.