AI is entering one of the most human domains: healthcare. It helps people track sleep, manage chronic conditions, monitor mental health, and navigate loneliness. It listens, advises, and sometimes comforts. Yet despite these advances, hesitation remains, not because the algorithms are weak, but because the experience does not always feel reliable. In this context, trust is not just an emotional response. It is about system reliability, the confidence that an AI assistant will behave predictably, communicate clearly, and acknowledge uncertainty responsibly.
When we think about people who are deaf, we often fall back on stereotypes, such as "disabled" older adults with hearing aids. This perception is far from the truth and often leads to poor decisions and broken products. Let's look at when and how deafness emerges, and how to design better experiences for people with hearing loss.

Deafness Is A Spectrum

Deafness spans a broad continuum, from minor to profound hearing loss.
I experienced this firsthand on a recent project analyzing how to integrate third-party solutions into our product. It wasn't traditional UX work: these third-party solutions had their own UI, which meant no wireframes or prototypes. But the clarity I brought, connecting different experiences into one consolidated workflow, was precisely why they needed a designer. Leaders now have a tool (in AI) that allows them to execute their vision instantly. Many people believe that if they articulate precisely what they want, AI will build it for them.
Personalization tools allow users to configure the interface according to their preferences, with customizable themes, configurable shortcuts, and adapted displays, ensuring an optimal and intuitive user experience. Advanced search functionalities allow quick location of specific titles using keywords, provider names, or game-mechanic characteristics, while personalized recommendations based on gameplay history suggest alternatives likely to match individual preferences. The diversity of content
The framework I use for writing prompts is called Zoom-Out-Zoom-In. I start by creating a proper context for my product and explaining its target user, then zoom in on the actual screen/page design I want to generate, explaining the goal of a particular screen/page, its layout hierarchy, and the design constraints the AI should consider when generating it. Finally, I mention expectations I have about the screen that AI will generate for me.
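The Zoom-Out-Zoom-In structure above can be made concrete as a small template. This is a minimal sketch of my own; the function and field names are illustrative, not part of any described tooling. It simply assembles the three layers in order: broad product context first, then the specific screen, then explicit expectations.

```python
def build_prompt(product_context, target_user,
                 screen_goal, layout, constraints, expectations):
    """Assemble a Zoom-Out-Zoom-In prompt: product context first,
    then the specific screen, then explicit output expectations."""
    return "\n\n".join([
        # Zoom out: establish the product and its audience
        f"Product context: {product_context}",
        f"Target user: {target_user}",
        # Zoom in: the particular screen/page being generated
        f"Screen goal: {screen_goal}",
        f"Layout hierarchy: {layout}",
        f"Design constraints: {constraints}",
        # Finally: what a good result should look like
        f"Expected output: {expectations}",
    ])
```

Keeping the layers in a fixed order makes prompts easy to compare across screens: only the zoomed-in middle section changes, while the product context stays stable.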
The Designer's Playbook for AI Products

The old rules still apply (mostly)

Here's something that surprised me: designing for AI isn't as alien as it sounds. The fundamentals (user needs, clear feedback, intuitive flows) don't disappear just because there's a language model involved. If anything, they matter more. When the system can generate unpredictable outputs, your job as a designer is to create enough structure that users don't feel lost.
Most design problems aren't 'design' problems. They're 'thinking' problems. They're 'clarity' problems. They're 'too-many-tabs-open' problems. More prototyping, more pixel-shifting, more polish in Figma alone isn't going to help you with those. For me, without clear thinking, Figma just results in more confusion, more mess, and more mockups than I can mentally manage.

The Problem: Figma wasn't the bottleneck - my thinking was
AI is disrupting more than the software industry, and is doing so at a breakneck speed. Not long ago, designers were deep in Figma variables and pixel-perfect mockups. Now, tools like v0, Lovable, and Cursor are enabling instant, vibe-based prototyping that makes old methods feel almost quaint. What's coming into sharper focus isn't fidelity, it's foresight. Part of the work of Product Design today is conceptual: sensing trends, building future-proof systems, and thinking years ahead.
Ulysses was a UX designer - or a "Product Designer," or maybe a "Digital Experience Architect," or a "Product Experience Manager," depending on which day you asked him. Currently, however, he felt more like a Figma monkey. He sat in his Scandinavian designer chair, bathed in the orangey light of a dual-monitor setup (because you gotta filter out that blue light), reading yet another hot take about how AI had rendered his so-called career - which was kind of a joke, anyway - obsolete.
All of these decisions shape how users experience your product or service. And most of them happen without any input from actual users. You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens? The research sits in a shared drive somewhere, slowly gathering digital dust.
Many people in the product community, including myself, actively share their experiences using AI tools to build products. However, far fewer conversations focus on a foundational phase: product research. The quality of product research directly impacts the outcome of the entire design process. As AI tools become increasingly embedded across different phases of the product design process, including the research phase, it's vital to establish a clear, intentional research process that maximizes design efficiency while reducing business risk from poorly informed or incorrect decisions.
As designers of interactive products, we are often working with or designing for a specific technology that frames our work and enables interaction between users and systems. Many designers are used to designing for mobile, web, or smart TVs, yet few know how to design with sensors. This is partly because design education tends to focus on aesthetic, usability, and ergonomic aspects rather than on the technological dimensions of design or on how designers can treat technology as a design material.
I had a client recently whose biggest issue was that users would get to the product dashboard and just... not know what to do. This is one of the most common problems I see in my consulting work, and it's almost never what the client thinks it is. They assume users need tutorials. They need tooltips. They need a help center with FAQ articles. What they actually need is scaffolding.
Earlier this month, Strava, the popular fitness-tracking app, released its annual "Year in Sport" wrap-up: a cutesy, animated series of graphics summarizing each user's athletic achievements. But this year, for the first time, Strava made this feature available only to users with subscriptions ($80 per year), rather than making it free to everyone, as it had been since the feature's debut in 2016.
Accessibility isn't a "nice-to-have" feature; it's a fundamental pillar of user experience. For those of us working within the Adobe ecosystem, whether you're building responsive modules in Adobe Captivate or designing resources in Illustrator, here are the seven non-negotiables for your accessibility checklist.

1. Semantic Heading Structure

Think of headings as the skeleton of your course. Screen reader users often "skim" a page by jumping from heading to heading to understand the hierarchy of information.
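Because screen readers navigate heading to heading, a skipped level (say, an h2 followed directly by an h4) breaks the skeleton. A quick automated check can catch this before review. This is a minimal sketch of my own using Python's standard-library HTML parser; the class and function names are illustrative, not part of any Adobe tooling.

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collects h1-h6 levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Record the numeric level of each h1..h6 tag encountered
        if len(tag) == 2 and tag[0] == "h" and tag[1] in "123456":
            self.levels.append(int(tag[1]))

def skipped_levels(html):
    """Return (previous, next) pairs where the hierarchy jumps
    by more than one level, e.g. h1 -> h3."""
    audit = HeadingAudit()
    audit.feed(html)
    return [(a, b) for a, b in zip(audit.levels, audit.levels[1:]) if b > a + 1]
```

For example, `skipped_levels("<h1>Course</h1><h3>Module</h3>")` flags the jump from h1 to h3, while a properly nested h1/h2/h3 sequence passes cleanly.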
You are surely wondering what novelty we are bringing here, right? It has been done countless times. You are right: the main idea is not complex, but what's new is the responsive part. We will see how to dynamically adjust the overlap between the images so they can fit inside their container. And we will make some cool animations for it along the way!
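The core arithmetic behind "dynamically adjust the overlap" is simple: if the images collectively overflow the container, spread that overflow across the gaps between them. The actual technique lives in CSS; this little Python sketch of my own just captures the math, with hypothetical names.

```python
def overlap_px(n_images, image_width, container_width):
    """How much each image after the first must overlap its
    predecessor so n images of image_width fit exactly in
    container_width. Returns 0.0 when no overlap is needed."""
    if n_images < 2:
        return 0.0  # a single image never needs to overlap
    # Total overflow, shared across the (n - 1) gaps between images
    overflow = n_images * image_width - container_width
    return max(overflow / (n_images - 1), 0.0)
```

For instance, four 100px images in a 310px container overflow by 90px, so each of the three overlapping joints absorbs 30px. In CSS terms, that value would become a negative margin or translation on every image after the first.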
To navigate is to read the world in order to move through it, whether it means scanning a crowd to find a familiar face, deciphering the logic of a bookstore's layout, or following the stars at sea. This ability has always been mediated by tools (many of them disruptive and transformative). Still, the rise of artificial intelligence presents us with a radical promise: a world where we no longer need maps, because the information or the product 'comes to us.'
Handling product and design together in my last job was a relentless game. At the point I got laid off, I was juggling five work streams at once. Without a dedicated engineering team and no designer other than myself, I was scoping, researching, analyzing data, designing, writing tickets, running alignment meetings, reviewing builds, and resourcing in relentless two-week cycles... for multiple projects. By this point, I wore the reality-altering (or "reality-checking") hat that saw design merely as one of many tasks to get through.
This step embodies a user research (UXR) mindset, systematically deconstructing assumptions through methods such as analytics review, user interviews, surveys, competitive analysis, or journey mapping. The term "ruthless" signifies a commitment to brutal honesty and a willingness to kill ideas if data reveals they are misaligned or ineffective. This audit is not a one-time event; it's a process for uncovering latent opportunities, such as unmet user needs that could drive retention or acquisition.
It's a debate I've been dragged into so many times, I've lost track: UX vs. Market Research? Qual vs. Quant? Who owns the insights? Who makes the decisions? Who drives the strategy? Who makes the "real" impact? I've been a UX Researcher for over 20 years, and my thinking is deeply rooted in building meaningful products and services that solve real human problems... (As opposed to fake problems... you know, the kinds of problems we invent in order to justify the product we're building. Alas, I digress).