UX design
From Medium · 12 hours ago
Designing with AI without losing your mind
Outsourcing critical thinking to AI tools in design can undermine the quality of solutions and diminish essential skills.
In a world where audiences are flooded with content, cutting through the noise requires more than visibility. Organizations increasingly invest in storytelling and narrative strategists to shape everything from brand voice to internal alignment.
Capacity planning is the process of right-sizing total project demand against forecasted team capacity. Most UX teams have no idea what their capacity is. Fewer still have a process for calculating it and using it during quarterly planning with their counterparts in Product Management and Engineering, to make sure teams don't commit to more work than they can handle.
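As a rough sketch of what calculating capacity and comparing it to demand can look like, here is a minimal Python example. The focus-factor model, the hours, and the project names are hypothetical assumptions for illustration, not figures from the article:

    # Minimal capacity-planning sketch. All numbers and field names are
    # hypothetical; the article does not prescribe a specific formula.

    DESIGNERS = 4            # UX designers on the team
    WEEKS_IN_QUARTER = 13
    HOURS_PER_WEEK = 40
    FOCUS_FACTOR = 0.6       # share of time left after meetings, reviews, PTO

    # Forecasted team capacity for the quarter, in designer-hours
    capacity = DESIGNERS * WEEKS_IN_QUARTER * HOURS_PER_WEEK * FOCUS_FACTOR

    # Total project demand: estimated hours per committed project
    demand = {
        "checkout redesign": 480,
        "design system refresh": 320,
        "onboarding research": 260,
    }
    total_demand = sum(demand.values())

    print(f"Capacity: {capacity:.0f} h, demand: {total_demand} h")
    if total_demand > capacity:
        print(f"Over-committed by {total_demand - capacity:.0f} h; cut scope")
    else:
        print(f"Slack remaining: {capacity - total_demand:.0f} h")

Even a back-of-the-envelope comparison like this gives UX a concrete number to bring to quarterly planning instead of a gut feeling.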
At some point, every UX learner realizes that having a portfolio isn't the same as having a convincing portfolio. You may have screens, wireframes, and prototypes. You may even have multiple projects. But when your work is reviewed, the feedback feels vague. "Tell me more about your process." "Why did you make this decision?" "What was the impact?" That's because a strong UX case study isn't a gallery of designs. It's an argument.
My role was straightforward: write queries (prompts and tasks) that would train AI agents to engage meaningfully with users. But as a UXer, one question immediately stood out: who are these users? Without a clear understanding of who the agent is interacting with, it's nearly impossible to create realistic queries that reflect how people engage with an agent. That's when I discovered a glitch in the task flow. There were no defined user archetypes guiding the query creation process. Team members were essentially reverse-engineering the work: you think of a task, write a query to help the agent execute it, and cross your fingers that it aligns with the needs of a hypothetical "ideal" user, one who might not even exist.
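For illustration, here is one minimal way defined archetypes could anchor query writing instead of guesswork, sketched in Python. The UserArchetype fields, the busy-parent example, and queries_for are all invented for this sketch; the article describes the gap, not a specific format:

    from dataclasses import dataclass

    @dataclass
    class UserArchetype:
        """A defined user the agent is expected to serve (illustrative only)."""
        name: str
        context: str        # where and why they use the agent
        goals: list[str]    # tasks they actually need done

    # Hypothetical archetype; real ones would come from user research
    busy_parent = UserArchetype(
        name="Busy parent",
        context="plans family logistics on a phone, in short bursts",
        goals=["reschedule a pediatrician appointment",
               "find quick weeknight recipes"],
    )

    def queries_for(archetype: UserArchetype) -> list[str]:
        # Derive training queries from an archetype's goals, rather than
        # reverse-engineering them from tasks the team happens to imagine
        return [f"As a {archetype.name.lower()}: {goal}"
                for goal in archetype.goals]

    print(queries_for(busy_parent))

The point is direction of travel: queries flow from a defined user's goals, not from a task in search of a user.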
During my eight years working in agile product development, I have watched sprints move quickly while real understanding of user problems lagged. Backlogs fill with paraphrased feedback. Interview notes sit in shared folders collecting dust. Teams make decisions based on partial memories of what users actually said. Even when the code is clean, those habits slow delivery and make it harder to build software that genuinely helps people.