AI assistants misrepresent news content 45% of the time

This is a repost promoting content originally published elsewhere.

New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.

Key findings: 

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.

In what should be a surprise to nobody, but probably still is (and is probably already resulting in AI fanboys coming up with counterpoints and explanations): AI is not an accurate way for you to get your news.

(I mean: anybody who saw Apple Intelligence’s AI summaries of news probably knew this already, but it turns out that it gets worse.)

There are problems almost half the time and “major accuracy issues” a fifth of the time.

I guess this is the Universe’s way of proving that people getting all of their news from Facebook wasn’t actually the worst timeline to live in, after all. There’s always a worse one, it turns out.

Separately, the BBC has today published research into audience use and perceptions of AI assistants for News. This shows that many people trust AI assistants to be accurate – with just over a third of UK adults saying that they trust AI to produce accurate summaries, rising to almost half for people under 35.

Personally, I can’t imagine caring enough about a news item to want to read it, yet caring so little that I’d feed it into an algorithm that, 45% of the time, will mess it up. It’s fine to skip the news stories you don’t want to read. It’s fine to skim the ones you only care about a little. It’s even fine to just read the headline, so long as you remember that media biases are even easier to hide from uncritical eyes if you don’t even get the key points of the article.

But taking an AI summary and assuming it’s accurate seems like a really wild risk, whether before or after this research was published!