The UK's fact-checkers are sending their AI to help Americans cover elections - Poynter
Briefly

"Inside a modest office near London Bridge, a small team of engineers and fact-checkers has spent a decade refining AI tools and models to do what most journalists can no longer manage: keep up. The system reads headlines, transcribes broadcasts and scans social media for claims worth verifying. It flags those most likely to mislead or cause harm. The technology, developed by Full Fact, the United Kingdom's leading fact-checking charity, is crossing the Atlantic."
"With tens of thousands of human annotations, the model learned to triage vast text streams, processing more than 300,000 sentences a day and flagging new claims that reappeared in different wording. In the past two years, larger generative models have joined the mix, estimating potential harm and catching paraphrased versions that old systems would miss. The new systems surface and group likely harmful misinformation, and fact-checkers decide what to investigate and publish."
"Founded in 2009, Full Fact began experimenting with technology long before large language models captured global attention. By 2016, its leaders had realized that human fact-checkers could no longer monitor every broadcast or platform for questionable claims on issues of public interest. When Andy Dudfield joined the organization in 2019 to lead its AI work, he found a small team already exploring how automation could help."
Full Fact's small team in London spent a decade developing AI tools that monitor headlines, broadcasts and social platforms for checkable claims and potential harm. The system transcribes audio, reads headlines, scans feeds, flags claims likely to mislead, and groups paraphrased repetitions of the same claim. Early versions used natural language processing and a fine-tuned BERT model, trained on tens of thousands of human annotations, to triage more than 300,000 sentences daily. In the past two years, larger generative models have been added to estimate potential harm and catch paraphrases the older systems missed. The platform surfaces likely harmful misinformation for human fact-checkers to investigate and publish. The tools are now being offered to U.S. newsrooms ahead of the 2026 midterms.
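The pipeline described above, ingesting a stream of sentences, flagging checkable claims, and grouping paraphrased repeats, can be sketched in miniature. This toy version is not Full Fact's system: it substitutes keyword scoring for the fine-tuned BERT classifier and token-overlap (Jaccard) similarity for the generative models' paraphrase matching, and every cue word and threshold below is illustrative.

```python
# Toy claim-triage sketch. Stand-ins for the real system described above:
# keyword/number scoring instead of a fine-tuned BERT claim classifier,
# and Jaccard token overlap instead of model-based paraphrase detection.

CHECKABLE_CUES = {"percent", "million", "billion", "rose", "fell", "record"}

def is_checkable(sentence: str) -> bool:
    """Crude proxy for a claim classifier: look for numbers or statistical cues."""
    tokens = sentence.lower().split()
    has_number = any(t.strip(".,%").isdigit() for t in tokens)
    return has_number or bool(CHECKABLE_CUES & set(tokens))

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity, a stand-in for semantic paraphrase matching."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def triage(sentences, threshold=0.5):
    """Keep checkable sentences and group near-duplicate wordings together."""
    groups = []  # each group is a list of similar claim sentences
    for s in filter(is_checkable, sentences):
        for g in groups:
            if jaccard(s, g[0]) >= threshold:
                g.append(s)  # paraphrase of an existing claim
                break
        else:
            groups.append([s])  # a new claim worth a fact-checker's attention
    return groups
```

In Full Fact's workflow the output of a step like `triage` is a queue for humans: fact-checkers, not the model, decide which surfaced groups to investigate and publish.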