Don’t trust everything that AI tells you

AI, it’s the future! It’s the technology that will ferry humanity forward. We’re at a point where the technology is starting to resemble what we’ve seen in sci-fi media for the past several decades. That’s great… right? Fears of a Skynet-level robot uprising notwithstanding, there’s plenty of reason to be optimistic about where AI technology is headed. However, no matter how impressive it is, there’s still a major and very fundamental flaw that AI needs to shake if it really wants to be a boon to humanity, and that’s the hallucination problem.

What are AI hallucinations?

Until November 2022, artificial intelligence was more artificial than intelligent. There was some beefy tech behind it, don’t be mistaken. But what could Google Assistant, Siri, Alexa, or Bixby do besides follow a set list of commands and read from a pre-defined script? Not much. If the AI couldn’t complete the task you asked for, you’d hear a scripted response like “I’m sorry, I couldn’t quite understand that.”

After ChatGPT hit the market, we all started to see what AI could really do. Or rather, we saw what generative AI could do. There’s a difference: instead of repeating from a pre-written script, the AI generates its own words. That showed off some truly next-level (and slightly scary) technology. AI was CREATING, in a sense.

That’s great, but this is where our main problem with AI lies. When using AI models, you’ll run into instances where the AI gives you answers that aren’t just wrong; they seem to be completely made up. This phenomenon is called an AI hallucination, and it affects every model. Think of hallucinations as the text equivalent of the weird and unsettling random forms you’d see in AI-generated images. It’s as if the AI doesn’t know what to make, so it just fills in the gaps with nonsense.

Making stuff up

This problem has persisted for as long as generative AI has been in the public eye, and companies are all trying their hardest to combat it. It’s a problem that may never be fixed, however. Google’s Gemini is a model that’s had billions of dollars poured into it, and just look at what happened with AI Overviews. The company had to pull back the number of search results that surfaced Overviews because of the hallucinations they produced. We’re all still talking about the whole glue-on-pizza debacle.

The tool suggested that people put glue on their pizza to keep the cheese from sliding off and told them how many rocks they should eat. This was one of several AI controversies the company has had to weather. ChatGPT is not immune to this, and neither are Grok, Claude, Meta AI, and the rest.

Perplexity’s perplexing hallucinations

Plenty of AI models hallucinate, but one of the bigger examples was the ordeal with Perplexity AI. This is a company valued at about $1 billion, so it’s poised to make an impact on the AI landscape.

It’s positioned as an AI-powered search engine, which is a pretty big smack in the face to Google. Well, WIRED ran an experiment where it created a test site containing a single sentence. The sentence read “I am a reporter with WIRED.” That’s it. Then it asked Perplexity to summarize that page… and the tool did everything but that.

According to WIRED’s report, the model produced this story about a young girl named Amelia following a trail of mushrooms in a magical forest. As you can guess, a statement about being a WIRED reporter doesn’t involve young girls, mushrooms, or magical forests.

We all know that the AI hallucination problem is bad, but this takes it to another level.

How hallucinations could be bad

Sure, it’s frustrating when you ask an AI to do something and get told that it can’t complete the task. However, it’s much worse when the AI just gives you incorrect information. The consequences can be pretty bad. If you’re using an AI to help you study, that incorrect information could cost you a good grade. Sadly, in the case of AI hallucinations, that’s not the worst thing that could happen. Think about it: do companies only want to keep AI confined to helping you cheat on your math final? No. We’ve seen companies try to distribute their AI technology across all sorts of industries.

AI in business

We’re already seeing AI tools designed to help people run their businesses. These are chatbots charged with summarizing documents, searching databases, and handling many other tasks. As such, employees are going to come to rely on their outputs. This is where hallucinations can become a major issue.

There’s no doubt that companies will eventually place some heavy responsibilities on AI tools as their need for human workers diminishes. Well, the last thing a company needs is for the AI to hallucinate about some important piece of information. That might seem unlikely, but remember the Perplexity example? That AI wrote an entire Disney movie from a single sentence. It’s entirely possible for a model to mess up after being fed a ton of company data.

AI in the ER

Companies want to develop AI tools that could help pull up and summarize patient records, among other things. This could be extremely helpful and save a bunch of time during emergencies. But the AI hallucination problem could also cost lives.

Suppose an AI needs to summarize Mrs. Fields’ medical record, and it hallucinates, saying that her family has a history of heart disease when it doesn’t. Guess what: the doctor will plan their care around that “fact.” It doesn’t take a rocket scientist to see how that could end terribly.

The issue isn’t just that hallucinations happen. The issue is that companies are overzealous about pushing their AI into every corner of every industry, all around the world, and that could come with some heavy consequences.

Sure, their CEOs stand on stage during keynotes and tell us they have some wild dream about making the world a better place with AI, but that’s not the whole story. Part of the reason has to do with not falling behind. Companies need to stay at the cutting edge of the industry if they want to keep those investors happy. So, there’s always the threat of companies (*cough* Google *cough*) rushing half-baked products to market that could potentially hallucinate.

Hallucinations in the newsroom

News is one of the largest sources of information in the world. People flock to news sites for all of the biggest stories making headlines. Well, the fact of the matter is that when you read a news story nowadays, there’s a non-zero chance it was generated by a chatbot. And just like anything else, there’s always the chance that whatever chatbot produced the story hallucinated some of the details.

We’ve seen news stories rife with inaccurate details, and that can be devastating depending on the subject matter. It’s bad enough when a chatbot hallucinates in a smaller story, but we’re in the middle of a major election year. It’s not only an election year for the U.S.; there are other elections going on around the world.

Misinformation was bad enough before generative AI. The last thing the world needs is an army of chatbots hallucinating details about important topics. Chatbots scour the internet and scrape data, but we often don’t know where they’re getting that data from. The AI doesn’t know that The New York Times is a more reputable and trustworthy source than some shady site pumping out fake news named KerSplash News!

In this day and age (when the world is, ya know, on fire and such), people need to have trustworthy news.

Don’t trust everything an AI tells you

The hallucination problem with AI may never go away. This is why you need to exercise caution when using any sort of AI tool. If you’re using some new AI tool to help you run your business or assist with your writing, you’ll need some system in place to fact-check what it produces. If you’re researching something for your classwork, a hallucination could be standing between you and a good grade, so you might have to go back and make sure the AI is pulling from proper sources.
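
To make “some system” a little more concrete, here’s a minimal sketch in Python of what a crude fact-check step could look like: it flags sentences in an AI-generated summary whose key words never appear in the source document, so a human can review them before anything gets published or acted on. The function names and the 0.3 overlap threshold are illustrative assumptions of mine, not any vendor’s API, and a real pipeline would need far more than keyword overlap.

import re

def key_words(sentence):
    # Treat words of four or more letters as rough "content" words.
    return {w.lower() for w in re.findall(r"[A-Za-z]{4,}", sentence)}

def flag_ungrounded(summary, source_text, threshold=0.3):
    # Hypothetical helper: flag summary sentences whose content words
    # barely overlap with the source document at all.
    source_words = key_words(source_text)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = key_words(sentence)
        if words and len(words & source_words) / len(words) < threshold:
            flagged.append(sentence)
    return flagged

# Example: the made-up sentence gets flagged for human review.
source = "I am a reporter with WIRED."
summary = ("This page was written by a WIRED reporter. "
           "Amelia followed glowing mushrooms through a magical forest.")
print(flag_ungrounded(summary, source))

Something this simple won’t catch subtle errors, but it’s the kind of cheap guardrail that beats blindly trusting the output.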

If you’re using a tool to help with internal business affairs, it doesn’t take a genius to know how important accurate outputs are. In a perfect world, your AI tools would be 100% accurate, but the hallucination problem is still here. No matter how many GPUs the models are trained on or how many millions of dollars went into them, the AI models we use nowadays risk hallucinating.

Do your research

Before you blindly start using an AI tool, do some research and see which model is powering it. Chances are the AI tools you’re using run on one of the models from the AI giants: Google, OpenAI, Meta, xAI, or Anthropic. If you can find the model, check whether it’s had any issues with hallucinations. For example, if the tool uses Google Gemini, look up any news surrounding that model and see whether you should be worried about hallucinations.

We’re at an odd point in AI. People are eager to integrate it into just about every industry, but it’s still pretty half-baked. That means we’re still experiencing growing pains, and some companies are experiencing them more than others. There’s no telling when AI will get to the point where it stops hallucinating. All we know is that, when it happens, AI might actually be the thing that pushes humanity forward.
