The most prominent AI systems today are Large Language Models (LLMs) such as ChatGPT, Claude, Grok, Perplexity, and Gemini. These systems are built on computational models loosely inspired by the structure of the human brain, hence the term "neural networks." They consist of interconnected nodes that process data and learn patterns from it, an approach belonging to the branch of artificial intelligence known as machine learning. LLMs are trained on massive datasets containing billions of words from books, websites, and other text sources.
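To make the "interconnected nodes" idea concrete, here is a minimal sketch of a single artificial neuron, the basic unit a neural network repeats billions of times. This is an illustrative toy, not the code of any real LLM; the input values and weights below are made up for the example.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a simple activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: negative signals become zero

# Three input signals, three learned weights, one bias (all hypothetical)
output = neuron([1.0, 0.5, -2.0], [0.6, 0.4, 0.1], 0.05)
print(output)  # 0.65
```

During training, the weights and biases of many such units are adjusted so the network's predictions improve, which is what "learning from data" means in practice.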
The teenage brain is built to help young people explore the questions, "Who am I?" and "Where do I belong?" Answering these questions isn't a solitary endeavor. It's a profoundly social one. As young people try out different versions of themselves, they watch how others respond, gathering information about what feels authentic and what doesn't. Today, many of those experiments and reflections unfold online, where algorithms and influencers play an outsized role in shaping the feedback loop.
We live in a world where it's easier than ever to surround ourselves with people who think exactly like we do. Social media bubbles, corporate cultures and even leadership teams can all become echo chambers, places where the loudest reinforcement drowns out the most valuable challenge. The problem? Echo chambers create blind spots. They emphasize what we want to hear, not what we need to hear. They boost our confidence but rarely bring clarity.
In a 327-page lawsuit filed Wednesday in Manhattan federal court, the city alleged that Meta, Alphabet, Snap, and ByteDance created a "public nuisance" and a "youth mental health crisis" by intentionally exploiting the psychology of young users to keep them hooked. The complaint alleges that the platforms' algorithms are designed to maximize engagement at the expense of children's mental health, contributing to sleep loss, chronic absenteeism, and risky behavior such as "subway surfing," meaning riding on top of moving trains.
The attention economy stokes conflict, turning social media platforms into merchants of hate. One part of this dynamic concerns upsetting stories that get to the top of the feed. But why does attention run to the latest sensational murder rather than some good-news story? Social media algorithms are designed to give the most visibility to disturbing stories. However, the algorithms work as they do because of the way that the attention systems of our brains evolved.
The goal is to make buttons intuitive, easy to use, and predictable. But is the disclosure about participating in social media and expressing approval full and revealing? I guess it all comes down to what you would define as a "positive experience". As I write this, two messed-up, intertwined things are happening. Both can be directly linked to how the engagement dynamics of social media, driven by technology such as "like" buttons, have negatively impacted global politics.
Too many people use the word 'but' in relation to what happened overnight. "It's extraordinarily easy to condemn violent acts against somebody whose views you share. It is much more important that we are consistent in calling it out when it's against somebody whose work, whose views, differ from ours."
If you've never heard the term, ragebait marketing is simple: a brand does something polarizing or controversial, sometimes accidentally but often intentionally, with the goal of going viral by wreaking havoc in the comments and inspiring think pieces and millions of dollars in free publicity. And the truth is, it works, at least on the surface, if you measure the success of a campaign in views.