"One of the biggest examples in the commercial consumer industry is GPS maps. Once those were introduced, when you study cognitive performance, people would lose spatial knowledge and spatial memory in cities that they're not familiar with - just by relying on GPS systems. And we're starting to see some of those things with AI in healthcare," Amarasingham explained.
As AI adoption accelerates, the consequences, intended and not, are becoming harder to ignore. From biased algorithms to opaque decision-making and chatbot misinformation, companies are increasingly exposed to legal, reputational, and ethical risks. And with the rollback of federal regulation, many are navigating this landscape with fewer guardrails. But fewer guardrails doesn't mean fewer consequences; it only means the burden of responsibility shifts more squarely onto the businesses deploying these systems. Legal, financial, and reputational risks haven't disappeared; they've just moved upstream.
Today's students grew up with algorithms and screens mediating their social interactions, dating relationships, and now their learning. And that's why they desperately need to learn how to be human. The most alarming pattern I've researched and observed isn't AI dependency. It's the parroting effect. AI systems are built on statistical pattern matching, serving up widely represented viewpoints that harbor implicit bias. Without explicit instructions, they default to whatever keeps users engaged - just like the social media algorithms that have already polarized our society.
Like many students, Nicole Acevedo has come to rely on artificial intelligence. The 15-year-old recently used it to help write her speech for her quinceañera. When she waits too long to complete homework, Nicole admitted, she leans on the technology so she can hand assignments in on time. Her school, located in the Greenpoint/Williamsburg area of Brooklyn, has also embraced artificial intelligence. But it is hoping to harness the technology in ways that supplement learning rather than supplant it.