
"How do we know when to trust what someone tells us? In person conversations give us many subtle cues we might pick up on, but when they happen with AI system designed to sound perfectly human, we lose any sort of frame of reference we may have. With every new model, conversational AI sounds more and more genuinely intelligent and human-like, so much so that every day, millions of people chat with these systems as if talking to their most knowledgeable friend. This creates exactly the setup for misplaced trust: trust works best when paired with critical thinking, but the more we rely on these systems, the worse we get at it, ending up in this odd feedback loop that's surprisingly difficult to escape."
"Us designers love to talk about tools, processes, and skills. We debate software, trade shortcuts, and show off case studies filled with research frameworks and clever flows. All useful things. But what rarely gets talked about and showcased in portfolios is the one thing that actually makes a designer memorable: taste."
Conversational AI increasingly mimics genuine human intelligence, erasing the interpersonal cues that help us assess credibility. Many users interact with these systems as if speaking to a knowledgeable friend, which cultivates misplaced trust. Trust functions best alongside critical thinking, yet growing reliance on AI reduces critical scrutiny, producing a reinforcing feedback loop that is hard to break. These shifts carry over into design work and careers: taste remains a key differentiator beyond tools, entry-level roles are disappearing and being replaced by unpaid or informal opportunities, and visual artists face wage pressure and displacement as AI lowers costs and automates creative tasks.