Trust in Healthcare AI Can Be Hurt Intentionally or Innocuously
Briefly

"The race for supremacy among major artificial intelligence (AI) providers, including OpenAI, Anthropic, and Google, is approaching peak intensity. Alongside this growth, concerns about customer trust and distrust have become paramount. These concerns are appropriate-our own work suggests that in the face of the ambiguity and uncertainty typically accompanying a new technology such as healthcare AI, customers and users rely heavily on their trust in the provider to dampen risk and obtain peace of mind."
"Contrary to common belief, trust in companies is not only depleted through wanton and Machiavellian actions. Trust can be hurt, and distrust can be built, for a variety of reasons ranging from the relatively innocuous to volitionally bad actions. Below, I outline a range of factors that can affect trust among AI providers' buyers and clinical users. This framework can be extended further down the user chain to patients and their families."
The race among major AI providers is intensifying, increasing scrutiny around customer trust and distrust. Healthcare buyers and clinical users rely on trust in the provider to reduce perceived risk and secure peace of mind, which in turn drives adoption and long-term loyalty. Trust can be undermined not only by deliberate malfeasance but also by seemingly innocuous actions or external events. Companies should monitor and shape their actions and strategies throughout development and deployment to avoid eroding trust and to enable course correction. A framework of trust-affecting factors applies across the user chain, including patients and their families.
Read at Psychology Today