UX design
from Medium · 15 hours ago
The invisible layer of UX most designers ignore
Designers must prioritize screen reader compatibility to ensure accessibility, since many users rely on spoken content rather than visual elements.
Santa Cruz de Tenerife is one of the most idyllic cities in the Canary Islands. At its heart stands the jewel - the Auditorio. It's a place where talent from both worlds, New and Old, comes together. A heaven of theatre, opera, dance, and music.
For decades in SaaS, products reduced ambiguity. Users supplied constrained inputs, and the system handled the output. It was never Minority Report cinematic, but it was predictable. These predictable environments for manipulating data let users learn by moving things and adjusting variables - the outcome emerged through interaction.
Imagine a user opening a mental health app while feeling overwhelmed with anxiety. The very first thing they encounter is a screen with a bright, clashing colour scheme, followed by a notification shaming them for breaking a 5-day "mindfulness streak," and a paywall blocking the meditation they desperately need at that very moment. This experience isn't just poor design; it can be actively harmful. It betrays the user's vulnerability and erodes the very trust the app aims to build.
Performance is a critical factor in user engagement; even minor delays in loading can deter users. A clean and simple user interface also contributes significantly to user retention.
AI is disrupting more than the software industry, and is doing so at a breakneck speed. Not long ago, designers were deep in Figma variables and pixel-perfect mockups. Now, tools like v0, Lovable, and Cursor are enabling instant, vibe-based prototyping that makes old methods feel almost quaint. What's coming into sharper focus isn't fidelity, it's foresight. Part of the work of Product Design today is conceptual: sensing trends, building future-proof systems, and thinking years ahead.
Your junior designer spins up a prototype in Lovable before lunch. Your PM shows you a "working" MVP built entirely with Cursor within a day. Your CEO forwards you a LinkedIn post claiming AI will replace 80% of UI work by 2026. It seems anyone can now make an app to solve a specific problem. Has the graphical interface really died, as Jakob Nielsen provocatively suggests?
My role was straightforward: write queries (prompts and tasks) that would train AI agents to engage meaningfully with users. But as a UXer, one question immediately stood out - who are these users? Without a clear understanding of who the agent is interacting with, it's nearly impossible to create realistic queries that reflect how people engage with an agent. That's when I discovered a glitch in the task flow: there were no defined user archetypes guiding the query creation process. Team members were essentially reverse-engineering the work - you think of a task, write a query to help the agent execute it, and cross your fingers that it aligns with the needs of a hypothetical "ideal" user, one who might not even exist.
Autonomy is an output of a technical system. Trustworthiness is an output of a design process. Here are concrete design patterns, operational frameworks, and organizational practices for building agentic systems that are not only powerful but also transparent, controllable, and trustworthy. In the first part of this series, we established the fundamental shift from generative to agentic artificial intelligence. We explored why this leap from suggesting to acting demands a new psychological and methodological toolkit for UX researchers, product managers, and leaders.