UX design
From Medium, 30 minutes ago
The invisible layer of UX most designers ignore
Designers must prioritize screen reader compatibility to ensure accessibility, as these users rely on spoken content rather than visual elements.
Santa Cruz de Tenerife is one of the most idyllic cities in the Canary Islands. At its heart stands the jewel - the Auditorio. It's a place where talent from both worlds, New and Old, comes together. A heaven of theatre, opera, dance, and music.
Instructions I created. Instructions I am continuing to hone - instructions that required me to study my own old essays, identifying what I do when I write. The sentence rhythms. The way I move between timescales. The zooming in and out from concept to detail. The instructions tell Claude how I would like ideas composed. I pull together concepts and experiences from my lived expertise to formulate a point of view - in this case, on this new AI technology.
Model Context Protocol (MCP) is a technology that enables AI models to connect with external tools and data sources (such as GitHub, Slack, databases, and documentation systems). In this article, I want to explore my top 7 favorite MCPs you can use in your design process. I will cover not only the benefits but also the limitations of each MCP, so you will have a clear idea of what you can and cannot do with it.
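The connection model described above can be sketched as a toy tool registry in Python. This is an illustrative sketch only, not the real MCP SDK: the names ToolServer, register, list_tools, and call are hypothetical stand-ins for the protocol's tool-discovery and tool-invocation messages.

```python
# Illustrative sketch of the MCP idea: a model-facing "server" exposes
# named tools, and the model discovers and invokes them by name with
# structured arguments. These class and method names are hypothetical,
# not the real MCP SDK.

class ToolServer:
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def register(self, tool_name, fn, description):
        # A tool the model can discover and invoke.
        self.tools[tool_name] = {"fn": fn, "description": description}

    def list_tools(self):
        # The model first asks which tools exist (tool discovery).
        return {n: t["description"] for n, t in self.tools.items()}

    def call(self, tool_name, **kwargs):
        # The model then invokes a tool with structured arguments.
        return self.tools[tool_name]["fn"](**kwargs)


# Example: a GitHub-like search tool a design assistant might use.
server = ToolServer("docs")
server.register("search_issues",
                lambda query: [f"issue matching {query!r}"],
                "Search project issues by keyword")

print(server.list_tools())
print(server.call("search_issues", query="contrast"))
```

The point of the sketch is the two-step shape: discovery first, invocation second. That is what lets one assistant drive many different external systems through one interface.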
The normative form for interacting with what we think of as "AI" is something like this: there's a chat; you type a question; you wait for a few seconds; you start seeing an answer; you start reading it; you read or scan some more, tens of seconds longer, while the rest of the response appears; you maybe study the response in more detail; you respond; the loop continues.
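The loop above can be sketched as a minimal streaming chat turn in Python. Here fake_model is a stand-in for a real LLM API: it yields tokens one at a time, mimicking the incremental rendering (and the waiting and scanning) the passage describes.

```python
import time

def fake_model(prompt):
    # Stand-in for a real LLM call: yields the answer token by token,
    # the way a streaming API delivers a response incrementally.
    for token in f"Echoing: {prompt}".split():
        yield token

def chat_turn(prompt, delay=0.0):
    # One iteration of the loop: send a question, render the answer
    # as it streams in, return the full text for the next turn.
    rendered = []
    for token in fake_model(prompt):
        time.sleep(delay)  # the user waits and scans while tokens arrive
        rendered.append(token)
    return " ".join(rendered)

reply = chat_turn("why do users scan responses?")
print(reply)
```

The UX-relevant part is that the answer is consumed while it is still being produced, which is why the reading experience differs from loading a finished page.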
LLMs have made AI assistants a standard feature across SaaS. AI assistants allow users to instantly retrieve information and interact with a system through text-based prompts. Mathias Biilmann, in his article "Introducing AX: Why Agent Experience Matters," discusses two distinct approaches to building AI assistants. The Closed Approach involves a conversational assistant embedded directly within a single SaaS product. Examples include Zoom's AI Companion, Salesforce CRM's Einstein, and Microsoft's Copilot. The Open Approach involves external conversational assistants, such as Claude, ChatGPT, and Gemini.
Something's been slowly shifting in the design zeitgeist. I've been watching my feed on X and the vibe has changed. More and more, I see designers sharing finished experiments or prototypes they coded themselves, rather than static Figma files. Moving from working on a canvas to talking to an LLM. The conversation isn't "here's a design I made" anymore... it's "here's something I shipped this afternoon."
One skill separates good designers from the rest: the ability to clearly articulate their intention. No matter what tool you use, whether it's a traditional UI design tool like Figma or Sketch, or an AI tool like Figma Make, your ability to explain what you want to see accounts for 50% of your design success. The other 50% comes from your hard and soft skills. When it comes to AI-powered design, your ability to write decent prompts has a direct impact on the quality of your design. In this guide, I want to share some specific tips and tricks you can use with Figma Make to maximize the output.
To be honest, for many years, I was mostly reacting. Life was happening to me, rather than me shaping the life I was living. I was making progress reactively, looking out for all kinds of opportunities. It was easy and quite straightforward - I was floating and jumping between projects and calls and making things work as I went along. Years ago, my wonderful wife introduced one little annual ritual that changed that dynamic entirely.