Yesterday, I accidentally stumbled on a partial reference to a book entitled Should You Believe Wikipedia? The topic seems somewhat dated, but I felt interested precisely because I am so interested in the trustworthiness of Generative Artificial Intelligence, particularly Conversational AI, which today means mainly ChatGPT and Bard. I could have looked for some reviews through Google Scholar, or maybe checked the publisher's website, but I thought that asking ChatGPT for an abstract could be easier, so I did. I prompted for an abstract of the book, specifying the full title and subtitle, and I got it in seconds. It looked too conventional, the typical back-cover text, so I prompted for abstracts by chapter, and I got them: nine short abstracts for nine chapters. Then I felt curious about one of them, prompted for it, and got a longer abstract. There I stopped; on the whole, the abstracts didn't look mesmerizing, but they were good enough to make me want the book. Wonderful, I thought, because from now on this could be the best and easiest way to decide about buying particular books of which I had only some scant reference. Then I bought it but didn't even open it, as I was busy with other tasks (in fact, this was a typical online distraction). It would have to wait in the ever-growing queue of books to read –or maybe not.
Some hours later, I had an idea: why not ask Bard the same questions and check the differences, if any? In fact, I had begun to use Bard the day before, when it was released in the European Union, but only for a short while. So I did, using the very same prompts on the very same book: first, for the general abstract, which, on a fast, diagonal reading, looked neither better nor worse, just maybe a little shorter; then, by chapter, which I also got at high speed. Now I just needed the enlarged abstract for the individual chapter, whose number or title I didn't remember, so I moved back to ChatGPT to check it: number seven.
But, know what? There was no chapter seven in the index of the book (the abstracts by chapter) provided by Bard: just six chapters, while ChatGPT had abstracted nine. Much worse, even: it wasn't simply an incomplete or partially wrong result, since the chapter titles bore no resemblance to each other at all. My first intuition was that one of these GAIs had missed the point, or the book, and ChatGPT was, of course, my usual suspect. Fortunately, I had the book and could verify, and guess what: neither nine nor six chapters, and not a single coincidence in their titles. Both ChatGPT and Bard had made up (hallucinated, if you like) the abstract of the book, as well as the number, the titles, and the abstracts of every chapter.
At the same time, I saw that the book was first published in 2022, which I had not noticed before. So ChatGPT, trained on data only up to 2021, could not know about it; but the notable fact is that, notwithstanding its usual and boring reminder about this limitation, repeated again and again whenever it detects a request for something possibly more recent, a simple book title with no date was no deterrent (presumably it just had more than enough material on the trustworthiness of Wikipedia, which had been an obsession for so many, for years). Bard's fabrication is harder to explain, even more so when you note that the book is perfectly registered in Google Books as well as in Google Scholar and, of course, in plenty of libraries, bookshops, and other catalogs.