It’s the next frontier in technology; it’s the biggest human milestone since the internet. We’re talking about AI (artificial intelligence). AI has been with us for a long time, always a thousand steps behind what we’d see in the movies, but it took a massive leap forward with the introduction of generative AI. It’s incredible technology; we can’t disagree with that. But it’s not perfect. There are some big problems with AI, and the biggest ones either make it a threat or keep it from truly being the next great human achievement.
This should come as no surprise; what major technology is without its issues? The thing with AI, especially generative AI, is that there’s so much potential with it. There’s a ton of potential for good, but there’s also potential for bad. These issues highlight that potential for bad for those worried individuals among us.
Now, this piece applies to AI in general, but there’s going to be a focus on generative AI. This is the technology that, in a way, materialized lingering fears that we’ve had for years: the whole Skynet catastrophe. Why, just a couple of years ago, the thought of something like ChatGPT seemed like something that you’d see in a Star Trek show or an MCU movie. Now, it’s here; no Hollywood smoke and mirrors.
Hallucinations
This piece will highlight some of the ethical issues surrounding AI, but we’ll start off with a technical one. Most of the time when you’re using AI, the responses you get will be based on some sort of existing content. The LLM draws on patterns learned from its training data. If ChatGPT tells you how far the Earth is from the Sun, it had to learn that information from somewhere.
However, there will be times when the chatbot gives you an answer created out of thin air. Imagine if ChatGPT told you that the Earth is 100 miles from the Sun. Basically, it made up, or “hallucinated,” that answer. This is a major issue with the technology, and most people don’t realize how big it could be. Sure, a chatbot giving you a wrong answer on a school assignment is an inconvenience, but think about what people want to use AI for in the near future.
Companies want to use AI to scan through and summarize patient documents at hospitals, to name just one example. You can see how a hallucinated detail there could be a disaster. People plan on using AI in some pivotal business and healthcare applications, and in those settings, the information presented needs to be 100% accurate.
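This is why high-stakes deployments can’t take a model’s output at face value. As a minimal sketch of the idea, the snippet below checks a chatbot’s numeric claim against a trusted reference before accepting it. Everything here is hypothetical: `ask_model` is a stand-in stub for any chatbot API (hard-coded to return a hallucinated answer), and the 5% tolerance is an arbitrary illustrative threshold, not a real-world standard.

```python
import re

# Trusted reference: average Earth-Sun distance in miles.
EARTH_SUN_MILES = 93_000_000

def ask_model(question: str) -> str:
    """Stub standing in for a chatbot API call; imagine it hallucinates."""
    return "The Earth is about 100 miles from the Sun."

def extract_miles(answer: str):
    """Pull the first number (commas allowed) preceding 'miles' from the reply."""
    match = re.search(r"([\d,]+)\s*miles", answer)
    return int(match.group(1).replace(",", "")) if match else None

def within_tolerance(value: int, reference: int, rel_tol: float = 0.05) -> bool:
    """Accept the value only if it's within 5% of the trusted reference."""
    return abs(value - reference) <= rel_tol * reference

answer = ask_model("How far is the Earth from the Sun?")
miles = extract_miles(answer)
if miles is None or not within_tolerance(miles, EARTH_SUN_MILES):
    print("Rejected: likely hallucinated")  # prints this for the stubbed answer
else:
    print("Accepted")
```

The point isn’t the specific check; it’s the pattern of validating model output against an external source of truth whenever the answer actually matters.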
Phenomenon
Hallucinations aren’t as much an issue as they are a phenomenon. This problem isn’t only affecting ChatGPT; it’s affecting just about every major chatbot out there. Google’s Bard famously hallucinated a fact about the James Webb Space Telescope during its debut demo.
So, we’re not only talking about one company’s misstep; we’re talking about what seems like a naturally occurring behavior in generative AI. Hallucinations are one of the biggest problems with AI, and they’re going to be a thorn in AI’s side for some time.
The impact on the education industry
The moment we found out that chatbots could generate essays, every educator’s heart skipped a beat. Chatbots like Google’s Bard, ChatGPT, and the many others built on ChatGPT’s underlying models can whip up a college-level academic essay in a matter of seconds. Hours of research and writing are compressed into the time it takes to take a sip of coffee.
As you can imagine, a ton of students heavily utilized this technology to breeze through their courses. What makes things worse is that ChatGPT came out when students were chest-deep in their classes, so teachers couldn’t make any mid-semester changes to their programs. We can bet that hundreds of students were able to speed through their finals with the help of Professor ChatGPT.
This, of course, poses a major problem for the education industry. Not only are students able to cheat their way through some of their classes, but professors, school boards, and other entities are in disarray trying to figure out how to deal with this situation. Some professors tried to use AI content detectors, but generative AI has advanced to the point where they’re ineffective. The only way to tell if a student generated their essay is to either catch them in the act or break them during a harsh interrogation.
As generative AI gets smarter and more accessible, we can see much of the younger generation growing dependent on the technology. We’ll need to see how this affects the education industry in the coming years.
The impact on creators
Artists, composers, and writers have a right to be fearful of generative AI. When DALL-E first hit the shelves, artists shivered a bit. Sure, back in those early days, AI images were messy, uncanny nightmares that no one could take seriously (oh, those janky hands!). However, the technology has gotten better, and it’s now hard to distinguish between what’s AI-generated and what’s created by a human.
This goes for other forms of art as well. There’s a song out there imitating The Weeknd and Drake that was submitted for Grammy consideration; it’s AI-generated. There are a ton of riveting and informative books on the Amazon Kindle store; they’re AI-generated. There are videos with great voiceovers; as you can guess, they’re AI-generated voices.
As this technology gets better, talented and well-practiced individuals will have trouble finding work for their crafts. “Why pay an artist $200 when I can just generate it in DALL-E for free?” “Why pay a composer when I can just generate some tracks for my video game?”
This is the mentality that will siphon money and jobs out of creators’ hands. It won’t affect industry professionals just yet. Don’t worry, music scoring legend Danny Elfman, the next big Hollywood production won’t pass you up for AI. However, lower-level gigs that would be fulfilled by lower-level creators will be given to AI. There’s no telling what effect this will have on young and rising creators who are trying to make a career for themselves.
The lack of regulation
Make sure you have your 6-shooter and 10-gallon hat, ’cause it’s the wild west out there! Right now, there are no solid regulations for AI technology. Governmental bodies are still trying to figure out if scraping data to train LLMs counts as copyright infringement.
Can an artist sue if someone uses their likeness? Well, the AI was trained using publicly accessible content. Chatbots are trained using social media posts. Do the people who wrote those posts have any rights over how they’re used? They never registered copyrights, so the posts should be fair game… right? (In fact, copyright attaches automatically to original work, registered or not.) Can an AI-generated work be nominated for an award? Should AI-generated books be banned, even if the information they hold is accurate?
These are the types of questions that multiple governmental bodies are trying to answer. At the time of writing this article, the generative AI craze is about 10 months old. That might seem like a long time, but we’re talking about government organizations and lawmakers. It takes a ton of time for our elected officials to mull over all of the details and come up with some regulations; 10 months isn’t enough time.
Greedy mega-corporations
One of the biggest problems with AI has to do with the people developing it. There’s nothing stopping major companies from pushing their AI technology too far. Let’s not kid ourselves, when big tech companies talk about bringing AI innovations for the sake of helping the world, most of the time, it’s a mask hiding their true motives. This isn’t a race to make humanity better, it’s a push to make the investors happy.
AI is just another industry battlefield, and these companies are fighting to stay ahead of the curve and protect their profits. If Meta’s AI does better on the market, that’s tens of billions of dollars lost for Google, OpenAI, Microsoft, etc. If Google’s AI does better, then it means the same for the other companies. So, there’s nothing stopping companies from pushing AI so far in the name of protecting their profits that they end up hurting the end user.
Scammers, thieves, and other creeps
AI is a tool just like a hammer; it can help build amazing structures… but it can also kill. It all depends on the person wielding it. There are those who use AI for constructive reasons or just fun, but there are those who use it for their devious plans.
Right now, across the internet, there are a ton of websites created wholly or partly using AI. They’re content farms that spit out AI-generated articles in order to scoop up ad revenue. There are scams across the internet built on AI-generated ad copy and AI-generated images. Chatbots can generate code, and nothing’s stopping people from using that ability to churn out malware.
There’s a fleet of scumbags out there who are only interested in using AI for their own gain or spreading harm. They existed before the AI explosion, sure, but AI just basically gave them a huge power boost. They’re, unfortunately, making the internet a much more dangerous place to be.
The biggest problems with AI are holding it back
There’s still a long road ahead of us when it comes to AI safety. If AI is going to push us forward like companies claim, then we need to find solutions for these problems. However, some of these problems don’t depend on AI companies; some of them depend on the people using the technology. Only time will tell just how far AI can take us with these issues.
2023-09-27 15:04:46