UX design
You're not supposed to get it right
Design challenges for UX writers can be intimidating due to the pressure of making quick, impactful decisions and the emphasis on visual elements.
Santa Cruz de Tenerife is one of the most idyllic cities in the Canary Islands. At its heart stands a jewel - the Auditorio. It's a place where talent from both worlds, New and Old, comes together: a heaven for theatre, opera, dance, and music.
Google recently released a new AI model, Gemini 3.1, that demonstrates strong results in UI and web design tasks. I've already tested this model on web design; in this article, I want to experiment with Gemini 3.1 and generate UI for a mobile application.
Instructions I created. Instructions I am continuing to hone - instructions that required me to study my own old essays, identifying what I do when I write. The sentence rhythms. The way I move between timescales. The zooming in and out from concept to detail. The instructions tell Claude how I would like ideas composed. I pull together concepts and experiences from my lived expertise to formulate a point of view - in this case, on this new AI technology.
Today we are at the cusp of revolutions in artificial intelligence, autonomous vehicles, renewable energy, and biotechnology. Each brings extraordinary promise, but each introduces more complexity, more interdependence, and more latent pathways to failure. This makes prudence critical. Good design recognizes what cannot be foreseen. It acknowledges the limits of prediction and control. It builds not merely for performance, but for recovery.
Performance is a critical factor in user engagement, where even minor delays in loading can deter users. A clean and simple user interface also contributes significantly to user retention.
The normative form for interacting with what we think of as "AI" is something like this:
- there's a chat
- you type a question
- you wait for a few seconds
- you start seeing an answer
- you start reading it
- you read or scan some more for tens of seconds, while the rest of the response appears
- you maybe study the response in more detail
- you respond
- the loop continues
Something's been slowly shifting in the design zeitgeist. I've been watching my feed on X and the vibe has changed. More and more, I see designers sharing finished experiments or prototypes they coded themselves, rather than static Figma files. Moving from working on a canvas to talking to an LLM. The conversation isn't "here's a design I made" anymore... it's "here's something I shipped this afternoon."
One skill sets good designers apart: the ability to clearly articulate their intention. No matter which tool you use - a traditional UI design tool like Figma or Sketch, or an AI tool like Figma Make - your ability to explain what you want to see accounts for 50% of your design success. The other 50% comes from your hard and soft skills. When it comes to AI-powered design, your ability to write decent prompts has a direct impact on the quality of your designs. In this guide, I want to share specific tips and tricks you can use with Figma Make to maximize the output.
In Andor, I got chills when Mon Mothma warned the Senate of a hard truth: when we let noise, conformity, or fear dominate, we lose sight of what matters. We risk allowing the loudest voices - often the safest, the most predictable - to drown out individuality, identity, and truth. To me, this line echoes a growing tension I feel in content design.
AI design tools are everywhere right now. But here's the question every designer is asking: do they actually solve real UI problems, or just generate pretty mockups? To find out, I ran a simple experiment with one rule: no cherry-picking, no reruns - just raw, first-attempt results. I fed 10 common UI design prompts - from accessibility and error handling to minimalist layouts - into 5 different AI tools. The goal? To see which AI came closest to solving real design challenges, unfiltered.