Advertising and large language models: a new frontier influencing medical practice
Briefly

"Large Language Models (LLMs) have infiltrated healthcare and are already shaping how patients choose clinicians, investigations, and treatments. Advertising, whether overt or subtle, may affect patient choice through LLMs. This is of particular concern, as patients are becoming increasingly reliant on and trusting of medical advice provided by these systems [1]. In ophthalmology, LLMs are already used to summarise evidence, draft correspondence, augment triage systems and answer questions from patients and clinicians [2, 3]."
"Unlike traditional search engines, LLMs do not simply retrieve and rank information [4]. Given search engine optimisation (SEO) influences visibility whilst LLMs influence interpretation, there is a concerning possibility that information ecosystems could be engineered not only for information to be ranked highly, but also to be preferentially synthesised into seemingly authoritative medical advice [4]. Traditional search engines have preserved a buffer by allowing users to compare multiple sources before deciding what to trust. LLMs, however, compress this process into a single narrative response, reducing the user's ability to independently evaluate sources."
"Google's AI overview has already embedded AI-generated summaries into mainstream search, and Google has also expanded Search and Shopping ads within AI Overviews, clearly linking commercial discovery to AI-mediated answers. Standalone LLM platforms face similar challenges. OpenAI has publicly stated that it began testing ads in ChatGPT in the United States on February 9, 2026, for users logged into both Free and Go plans, with ads clearly labelled and separated from the main answer [5]."
"OpenAI also claims that ads do not influence ChatGPT's answers and that conversations with users are not provided to advertisers [5]. Unlike traditional search engines, LLM outputs are non-deterministic and shaped by patterns"
Large language models are being used in healthcare to summarize evidence, draft correspondence, augment triage systems, and answer questions for patients and clinicians. Patients increasingly rely on and trust these systems for medical advice. Advertising, whether overt or subtle, may influence patient choice through LLM-mediated outputs. Unlike traditional search engines that retrieve and rank multiple sources, LLMs compress information into a single narrative response, limiting independent evaluation. AI-generated summaries embedded in mainstream search can connect commercial discovery to AI-mediated answers. LLM platforms have begun testing labeled ads, while claims of non-influence and privacy protections may not address the broader concern that non-deterministic outputs can be shaped by learned patterns.
Read at www.nature.com