Software used to feel separate from us. It sat behind the glass, efficient and obedient. Then it fell into our hands. It became a thing we pinched, swiped, and tapped, each gesture rewiring how we think, feel, and connect. For an entire generation, the user experience has become inseparable from human experience. Now, another shift is coming. Software is becoming intelligent. Instead of fixed interactions, we'll build systems that learn, adapt, and respond.
If you've worked in software long enough, you've probably lived through this: you write a ticket, or explain a feature in a meeting, and a week later you look at the result and think, this is technically related to what I said, but it is not what I meant at all. Nobody considers that surprising when humans are involved. We shrug, we sigh, we clarify, we fix it.
When psychologist Paul Fitts published his 1954 paper on human motor control, he likely had no idea that his insights would one day guide the design of everything from smartphones to virtual worlds. Fitts conducted his experiments using simple physical apparatuses, such as levers, styluses, and lighted targets, to measure how quickly participants could move and point to targets of varying sizes and distances. These experiments were precursors to the pointing and selection tasks that would later define human-computer interaction.
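Fitts's experiments were later formalized as Fitts's law, which predicts how long it takes to point at a target given its distance and size. A minimal sketch, using the common Shannon formulation with illustrative (not empirically fitted) constants:

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target.

    Fitts's law, Shannon formulation: MT = a + b * log2(D/W + 1),
    where D is the distance to the target and W is its width.
    The constants a and b here are illustrative values; in practice
    they are fitted to data for a given device and user population.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# A distant-but-large target can be faster to hit than a near-but-small one:
print(round(movement_time(100, 10), 3))  # small target, 100 units away
print(round(movement_time(400, 80), 3))  # large target, 400 units away
```

This is the relationship that still underpins interface guidelines such as making frequently used targets larger or placing them at screen edges.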
A lot has been written about the evolution of user experience since before I ever sat in a Barnes & Noble for hours, trying to understand what the letters "H, C, and I" even meant. In the twelve years since that moment, the tools we use have matured, the rules for interaction have solidified, and the role of design has expanded. We have become a bridge connecting users, businesses, and the technologies that serve them.
The future of computing increasingly demands input devices that work seamlessly in mobile, public, and wearable contexts where traditional mice become impractical or socially awkward. As we move toward augmented reality, smart glasses, and always-connected devices, the need for subtle, continuous interaction grows more pressing. Current solutions often require bulky hardware, frequent charging, or obvious gestures that draw unwanted attention.
Cyberpsychology investigates the psychological processes related to technologically interconnected human behavior, informing disciplines such as human-computer interaction (HCI), computer science, engineering, psychology, and media and communications studies. The field explores how digital technologies influence and transform human cognition, emotion, and social interaction, as well as the reciprocal impact these human elements have on technologies. At its core, cyberpsychology seeks to understand the dynamic interplay between humans and technology.
The star of Meta Connect 2025 is Meta's new Ray-Ban-branded Display AI glasses, which include a heads-up display element to overlay info on the wearer's view. The glasses pair with Meta's "neural wristband", which Zuckerberg described as "the next chapter in the exciting story of the history of computing": a band that reads electrical signals from the muscles of the wrist and translates subtle finger movements into input. The end result is a whole new way of interacting with the digital environment, which Meta's hoping will become the foundation for its future AR and VR projects.