
Artificial intelligence is doing some pretty amazing things: writing articles, generating images, passing bar exams and even composing music. But however powerful it may be, it is not immune to quirks and problems. One of the most talked-about (and possibly most misunderstood) problems is something called AI hallucination.
Recently, Dario Amodei, CEO of Anthropic (the company behind Claude, a large language model like ChatGPT), sparked a conversation and raised eyebrows among experts by stating that AI models may hallucinate less than humans. In an interview, he said that while AI can certainly get things wrong, people do too. His more controversial claim, however, was that we do it more often.
Now, that is a fairly bold statement, and it has people in the AI world talking.
So what is an AI hallucination?
AI hallucinations occur when a model like ChatGPT confidently generates information that is simply incorrect. It might tell you about a historical event that never happened, cite a study that does not exist, or describe a product feature that is not real. What makes this especially tricky is that the response sounds totally credible: clear, authoritative and logical. But under the hood it is complete fiction, and it is almost impossible to tell the difference unless you have specialized knowledge.
Of course, the term “hallucination” is borrowed from psychology, where it describes seeing or hearing things that are not really there. In the world of AI, it refers to a machine essentially “imagining” facts that are not backed by its training data or any real-world information.
Why do these hallucinations happen?
There is no single cause, but a few reasons stand out from the rest.
First, hallucinations can occur more often when there are gaps or biases in the training data. AI models learn from vast amounts of text scraped from every corner of the internet, from books, articles and more. So if there is a gap in the data, or if the data is inaccurate or biased, the model ends up having to make things up to fill in the blanks, so to speak.
Second, AI models are sometimes simply guessing in order to complete a pattern. They are trained to predict the next word in a sentence based on the words that came before, and the completion they produce can sound right to the model without being factually accurate, as the sketch below shows.
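To make that concrete, here is a toy sketch of next-word prediction. The words and probabilities are invented for illustration (they do not come from any real model), but they show how a model that has absorbed a common misconception can produce a fluent, confident and wrong completion.

```python
import random

# Hypothetical probabilities for the next word after "The capital of
# Australia is" -- invented for illustration, not taken from a real model.
next_word_probs = {
    "Canberra": 0.55,   # the correct answer
    "Sydney": 0.40,     # a common misconception the model may have absorbed
    "Melbourne": 0.05,
}

def sample_next_word(probs):
    """Pick the next word in proportion to its probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The capital of Australia is", sample_next_word(next_word_probs))
# Roughly 4 times out of 10 this prints a fluent, confident, wrong answer.
```

The model is not lying; it is simply completing the pattern that looks most plausible given what it has seen.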
Third and finally, we must remember that as incredible as AI may seem, it has no real understanding of the world. It has no consciousness, no memory (although newer models are beginning to retain memory of past conversations) and no access to up-to-date databases unless these are specifically integrated. Essentially, it just guesses at what sounds right instead of evaluating and verifying facts.
Should we be worried?
Honestly, yes and no.
On the one hand, AI hallucinations can be fairly harmless. If a chatbot tells you that a fictional character was born in 1856, it is probably not the end of the world. The stakes get much higher, however, when AI is used in medicine, law, journalism or customer service.
Imagine an AI system giving a patient inaccurate medical advice or misquoting a legal precedent; that is obviously a serious problem. And since these hallucinated responses can sound very confident, they can be very persuasive even when they are wrong.
This is why AI developers, including those at Anthropic, OpenAI and others, are spending a lot of time and energy trying to reduce hallucinations. They are using techniques such as retrieval-augmented generation (RAG), reinforcement learning from human feedback (RLHF) and additional fact-checking layers. These methods help, but they do not solve the problem completely.
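As a rough illustration of the RAG idea (a sketch of the general technique, not any particular vendor's implementation), the example below uses a hypothetical retrieve() and generate(): the "model" is only allowed to answer from retrieved documents and must admit uncertainty when nothing relevant is found, which is exactly what makes hallucination less likely.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# retrieve() and generate() are hypothetical stand-ins, not a real API.

DOCUMENTS = {
    "returns-policy": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def retrieve(question: str) -> str:
    """Naive keyword match standing in for a real vector-search retriever."""
    keywords = [w.strip("?.,").lower() for w in question.split() if len(w) > 3]
    for text in DOCUMENTS.values():
        if any(word in text.lower() for word in keywords):
            return text
    return ""

def generate(question: str, context: str) -> str:
    """Stand-in for an LLM call that answers only from retrieved context."""
    if not context:
        return "I'm not sure -- I couldn't find that in our documents."
    return f"According to our records: {context}"

question = "How long do I have to return an item?"
print(generate(question, retrieve(question)))
```

Real systems use embedding search and an actual language model, but the design choice is the same: ground the answer in retrieved text, and say so when the text is missing.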
Why Amodei’s comment matters
When Dario Amodei says that AI “hallucinates less than humans,” he is pointing to something worth considering: humans are also full of bias, error and misinformation. We misremember things, fall for fake news and repeat incorrect information all the time.
So perhaps the goal is not to make AI perfect, but to make it better than us at recognizing when it might be wrong. Transparency, caution and critical thinking need to be baked into how we use these tools.
The bottom line
AI hallucinations are a reminder that, despite its brilliance, artificial intelligence remains a work in progress. As models become more sophisticated, the hope is that they will get better at knowing when not to speak, or at least when to say, “I’m not sure.” Then again, humans sometimes struggle to do that too (probably more often than we would like to admit).
Until then, it is up to us to ask questions, verify the facts and remember: just because something sounds smart does not mean it is true, even when it comes from a robot.