Get your news from AI? Watch out – it’s wrong almost half the time

ZDNET’s key takeaways
- New research shows that AI chatbots often distort news stories.
- 45% of the AI responses analyzed were found to be problematic.
- The authors warn of serious political and social consequences.
A new study conducted by the European Broadcasting Union (EBU) and the BBC has found that leading AI chatbots routinely distort and misrepresent news stories. The consequence could be a large-scale erosion of public trust in news organizations and in the stability of democracy itself, the organizations warn.
Spanning 18 countries and 14 languages, the study involved professional journalists evaluating thousands of responses from ChatGPT, Copilot, Gemini, and Perplexity about recent news stories based on criteria like accuracy, sourcing, and the differentiation of fact from opinion.
The researchers found that close to half (45%) of all the responses generated by the four AI systems “had at least one significant issue,” according to the BBC, while one in five (20%) “contained major accuracy issues,” such as hallucination — i.e., fabricating information and presenting it as fact — or providing outdated information. Google’s Gemini had the worst performance of all, with 76% of its responses containing significant issues, especially regarding sourcing.
Implications
The study arrives at a time when generative AI tools are encroaching upon traditional search engines as many people’s primary gateway to the internet — including, in some cases, the way they search for and engage with the news.
According to the Reuters Institute’s Digital News Report 2025, 7% of people surveyed globally said they now use AI tools to stay updated on the news; that number swelled to 15% for respondents under the age of 25. A Pew Research poll of US adults conducted in August, however, found that three-quarters of respondents never get their news from an AI chatbot.
Other recent data has shown that even though few people have total trust in the information they receive from Google’s AI Overviews feature (which uses Gemini), many of them rarely or never try to verify the accuracy of a response by clicking on its accompanying source links.
The use of AI tools to engage with the news, coupled with the unreliability of the tools themselves, could have serious social and political consequences, the EBU and BBC warn.
The new study “conclusively shows that these failings are not isolated incidents,” EBU Media Director and Deputy Director General Jean Philip De Tender said in a statement. “They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”
The video factor
That endangerment of public trust — of the ability of the average person to conclusively distinguish fact from fiction — is compounded further by the rise of video-generating AI tools, like OpenAI’s Sora, which was released as a free app in September and was downloaded one million times in just five days.
Though OpenAI’s terms of use prohibit the depiction of any living person without their consent, users were quick to demonstrate that Sora can be prompted to depict deceased persons and to generate other problematic clips, such as scenes of warfare that never happened. (Videos generated by Sora carry a watermark that flits across the frame, but some clever users have discovered ways to edit it out.)
Video has long been regarded in both social and legal circles as the ultimate form of irrefutable proof that an event actually occurred, but tools like Sora are quickly making that old model obsolete.
Even before the advent of AI-generated video or chatbots like ChatGPT and Gemini, the information ecosystem was already being balkanized and echo-chambered by social media algorithms that are designed to maximize user engagement, not to ensure users receive an optimally accurate picture of reality. Generative AI is therefore adding fuel to a fire that’s been burning for decades.
Then and now
Historically, staying up-to-date with current events required a commitment of both time and money. People subscribed to newspapers or magazines and sat with them for minutes or hours at a time to get news from human journalists they trusted.
The burgeoning news-via-AI model has bypassed both of those traditional hurdles. Anyone with an internet connection can now receive free, quickly digestible summaries of news stories — even if, as the new EBU-BBC research shows, those summaries are riddled with inaccuracies and other major problems.
