The title for this post came from Stephen Downes, as a comment on my Mastodon post last year about the first article in this series: ChatGPT’s search results for news are ‘unpredictable’ and frequently inaccurate.
"Building on our previous research, the Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot." AI Search Has A Citation Problem - We Compared Eight AI Search Engines. They’re All Bad at Citing News.
Fascinating stuff, and some great visualizations.
But Stephen nails the problem. The authors note, all the way at the bottom of their page: "While our research design may not reflect typical user behavior, it is intended to assess how generative search tools perform at a task that is easily accomplished via a traditional search engine" (emphasis mine). These chatbots are NOT search engines! They don't store "chunks of text" in any way, shape, or form, so it shouldn't be a surprise that they can't tell you where any string of text "originally came from." I'm not saying the authors ARE surprised, and this is important information for the public to have, so do spread it far and wide. But I hope you, dear reader, are not surprised by these results!
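To make the distinction concrete, here is a toy sketch (my own illustration, not how any real system is implemented; the URLs and snippets are made up). A traditional search engine builds an inverted index that maps text back to the document it came from, so provenance is preserved. A language model, by contrast, boils its training text down into aggregate statistics, and the source documents are simply not in there to cite.

```python
from collections import Counter

# Two made-up "news documents" (hypothetical URLs, for illustration only).
docs = {
    "reuters.com/a": "the senate passed the bill",
    "apnews.com/b": "markets rallied after the vote",
}

# 1. Traditional search: an inverted index records which source
#    each word appears in, so any match can be attributed.
index = {}
for url, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

print(index["senate"])  # {'reuters.com/a'} -- provenance preserved

# 2. Toy stand-in for a language model: only aggregate next-word
#    counts survive "training"; the documents and URLs are gone.
bigrams = Counter()
for text in docs.values():
    words = text.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

# The statistics can continue text ("the" -> "senate" or "bill"),
# but nothing here records WHERE any phrase originally came from.
print(bigrams[("the", "senate")])  # 1 -- a count, not a citation
```

The asymmetry is the whole point: you can ask the index "where did this string come from?" and get an answer, but the bigram counts have no slot for that question at all.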