AI Keeps Hallucinating – But Hey, It Sounds Really Sure About It

How much can you trust AI? When you ask it a question, it spits answers back with such confidence that it’s hard not to take it at its word. And with each update, AI models are getting smarter. Unfortunately, reports of AI hallucinations also seem to be on the rise.

AI hallucinations are increasing

A recent investigation by The New York Times found that AI hallucinations are increasing, even as many AI models get smarter. Many are far better at solving math problems than they were before, and some are capable of performing deep research, but they still can’t get their facts straight.

The report cites an instance where an AI bot handling tech support for a company called Cursor told customers they were no longer allowed to use Cursor on multiple computers. Policy changes are normal, but in this case the bot was flat-out wrong. Cursor contacted customers to inform them that no such policy change existed.

In fact, OpenAI’s own tests have found that its newer models, which are supposed to be better, hallucinate at a higher rate than its older ones. For example, OpenAI’s o3 model hallucinated 33% of the time on the PersonQA benchmark, roughly twice the rate of the o1 model. And o4-mini was even worse, hallucinating at a rate of 48%.

So the question is: can hallucinations be fixed? According to Amr Awadallah, the chief executive of Vectara and a former Google executive, the answer is no. As Awadallah puts it, “Despite our best efforts, they will always hallucinate. That will never go away.”

What are hallucinations?

For those unfamiliar with the concept, an AI hallucination is an instance where an AI model makes up facts on the spot, seemingly out of nowhere. One theory is that certain AI models hallucinate more because of the amount of reasoning they have to do.

Simpler AI models rely on high-confidence predictions, meaning they take fewer risks. However, some AI models that incorporate reasoning capabilities must evaluate different possibilities. In some cases, they might even have to improvise, which can lead to them making things up.

This is why, if you do use AI for research, it’s a good idea to double-check its facts. That may seem to defeat the point of using AI in the first place, but if you’re relying on it for work or school, taking a little extra time to get things right will go a long way.
