To Err Is AI: Bubble Burst?
- Independent Ink

- Oct 12

By Ajith Pillai in Chennai
Artificial Intelligence (AI) has been touted as the new miracle technology. We have been led to believe that it is the innovation that will radically change the way we live and work. AI, the pundits said, would disrupt work and render millions of professionals jobless. Much has been written about remarkable stories of how AI has assisted lawyers, doctors, writers, commercial artists, software programmers, filmmakers, musicians, students, and researchers.
But as of today, how far can we trust AI?
Last week, the Australian arm of the international consulting firm Deloitte learned the hard way that AI cannot be blindly relied upon. According to the Australian Broadcasting Corporation (ABC) News, the company had to refund part of the 440,000 Australian dollars it had been paid under a contract to produce a report for the Department of Employment and Workplace Relations.
ABC News, quoting an Associated Press (AP) report, noted that the report was “littered with AI-generated errors, including a fabricated quote from a federal court judgment and references to non-existent academic research papers”.
The AI errors came to light when a Sydney researcher spotted the mistakes and alerted the media. Days after the news went viral, newspapers worldwide, including The Guardian, carried the story. Meanwhile, Deloitte, in a press statement, confirmed that “some footnotes and references were incorrect”.
In the revised version of the report, quotes attributed to a federal court judge were removed. Similarly, references to non-existent reports attributed to legal and software engineering experts were deleted. The revised report also included a disclosure that a generative AI language system, Azure OpenAI, had been used.
When an AI system cooks up facts or fabricates quotes from experts, books or research papers, it is said to be hallucinating. The IBM newsletter Think (a weekly update on AI, cloud, security and sustainability industry news) defines AI hallucination as “a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are non-existent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
This brings us to the reliability of these systems and tools. Can we take what AI tells us at face value? We cannot be sure. We have no option but to verify the ‘facts’ and references generated by AI tools before incorporating them into a report.
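How might such verification work in practice? Purely as an illustration (none of this is drawn from the Deloitte episode), here is a minimal Python sketch of one such check: it asks doi.org whether each DOI an AI tool has cited is actually registered. The function name and the sample DOIs are assumptions made up for the example.

```python
# A rough sketch of one way to screen AI-supplied citations: ask doi.org
# whether each DOI is actually registered. A 404 strongly suggests the
# reference was hallucinated. The DOIs below are illustrative only.
import urllib.error
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Return True if doi.org recognises the DOI, False on a 404."""
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        method="HEAD",
        headers={"User-Agent": "citation-checker/0.1"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10):
            return True  # doi.org redirected to a publisher page
    except urllib.error.HTTPError as err:
        # 404 means the DOI was never registered; other errors (e.g. a
        # publisher blocking HEAD requests) still imply the DOI exists.
        return err.code != 404

if __name__ == "__main__":
    for doi in ("10.1038/171737a0", "10.9999/entirely.made.up"):
        verdict = "registered" if doi_is_registered(doi) else "NOT FOUND"
        print(f"{doi}: {verdict}")
```

A failed lookup does not prove fabrication on its own, and a successful one says nothing about whether the source actually supports the claim, so a script like this is a first filter, not a substitute for reading the cited work.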
The IBM newsletter cites examples of AI hallucinations. Here are three of them:
- Google’s Bard chatbot incorrectly claimed that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system.
- Microsoft’s chat AI, Sydney, admitted to falling in love with users and to spying on Bing employees.
- Meta pulled its Galactica LLM demo in 2022 after it gave users inaccurate information, sometimes rooted in prejudice.
Closer home, poet Meena Kandasamy recently discovered that a poem she never wrote had made its way into the English syllabus of the University of Mysore. Quoting Kandasamy, an article in The Wire notes that the poem ‘Caste Out’, purportedly penned by her, created several problems for the poet: people apparently wanted copies of it.
Meanwhile, video tutorials on YouTube enlightened students with a summary of ‘Caste Out’! Kandasamy, according to reports, had to spend much of her time on the prosaic task of asking YouTubers to take down videos featuring the imagined poem.
Not only that, the poet also reportedly discovered that she was being quoted in several research papers and blogs. The statements attributed to her had been conjured up by AI, complete with citations. Obviously, no one did the cross-checking of sources that genuine research demands.

A fortnight ago, this correspondent met a retired professor from JNU who said he was taken aback to see various statements attributed to him in research papers submitted to his erstwhile university. When he questioned some of the researchers, they told him that they had sought the help of AI and that the quotes “sounded as if they were his”.
Despite this unreliability and lack of accountability (it is the organisation using AI that must take the hit for its errors), billions of dollars are being invested in AI companies. A UN Trade and Development report predicts that the AI market will touch $4.8 trillion by 2033. Last year, a reported $252.3 billion was invested in the sector.
However, is the AI boom for real?

Last week, the IMF and the Bank of England warned that the AI bubble could burst sooner rather than later. They were echoing what investment banks like JP Morgan and Goldman Sachs have been saying in recent weeks: that AI may not yet usher in a technological revolution.
A global market crash is what they see on the horizon.
According to experts, the boom may be coming to an end, but AI is here to stay. These tools, however, will have to change considerably before they can actually execute tasks reliably.
Ajith Pillai is a seasoned journalist and author based in Chennai.



