Google Exec Warns of AI Chatbot 'Hallucinations.' What Does That Mean?
Prabhakar Raghavan cautions that generative AI such as ChatGPT can be convincing but incorrect.
A leading executive at Google told a German newspaper that generative AI in its current form, such as ChatGPT, can be unreliable, at times drifting into a dreamlike state in which it invents answers.
"This kind of artificial intelligence we're talking about right now can sometimes lead to something we call hallucination," Prabhakar Raghavan, senior vice president at Google and head of Google Search, told Welt am Sonntag.
"This then expresses itself in such a way that a machine provides a convincing but completely made-up answer," he said.
Indeed, many ChatGPT users, including Apple co-founder Steve Wozniak, have complained that the AI is frequently wrong.
Such hallucinations can arise from errors in how a model encodes and decodes between text and its internal representations.
Ted Chiang on the "hallucinations" of ChatGPT: "if a compression algorithm is designed to reconstruct text after 99% of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated..." https://t.co/7QP6zBgrd3
— Matt Bell (@mdbell79) February 9, 2023
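Chiang's point can be made concrete with a deliberately crude sketch (hypothetical Python, purely illustrative, and bearing no resemblance to a real language model's internals): if a "compressor" keeps only a small sample of the original words, any "decompressor" has no choice but to invent the rest, so its fluent-looking reconstruction is largely fabricated.

```python
# Toy illustration of Ted Chiang's compression analogy -- NOT how an
# actual language model works. Keep a fraction of the words, discard
# the rest, then "reconstruct" full-length text with guessed filler.
import random

random.seed(0)  # deterministic output for the illustration

def lossy_compress(text: str, keep_ratio: float = 0.2) -> list[str]:
    """Keep roughly keep_ratio of the words; discard everything else."""
    words = text.split()
    kept = max(1, int(len(words) * keep_ratio))
    indices = sorted(random.sample(range(len(words)), kept))
    return [words[i] for i in indices]

def lossy_decompress(kept_words: list[str], target_len: int) -> str:
    """'Reconstruct' full-length text from the surviving words.

    The discarded words are gone for good, so the gaps can only be
    filled with plausible-looking filler -- the fabricated portions
    Chiang describes.
    """
    filler = ["the", "of", "a", "to", "and", "is", "was", "that"]
    out = list(kept_words)
    while len(out) < target_len:
        out.insert(random.randrange(len(out) + 1), random.choice(filler))
    return " ".join(out)

original = ("a machine provides a convincing but "
            "completely made-up answer to the question it was asked")
reconstructed = lossy_decompress(lossy_compress(original),
                                 len(original.split()))
print(reconstructed)  # fluent-looking, but mostly invented
```

The output superficially resembles English, yet almost none of it comes from the source text, which is the gist of the analogy: a system that reconstructs from a lossy summary must fabricate.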
It was unclear whether Raghavan was referencing Google's own forays into generative AI.
Last week, the company announced that it is testing a chatbot called Bard. The service is built on LaMDA, Google's own large language model and a counterpart to the GPT model underlying OpenAI's ChatGPT.
The demonstration in Paris was widely considered a PR disaster: promotional material showed Bard answering a question about the James Webb Space Telescope incorrectly, investors were largely underwhelmed, and Alphabet's shares fell sharply.
Google developers have been under intense pressure since the launch of OpenAI's ChatGPT, which has taken the world by storm and threatens Google's core search business.
"We obviously feel the urgency, but we also feel the great responsibility," Raghavan told the newspaper. "We certainly don't want to mislead the public."