You’ve most likely been fooled by AI-generated content

Artificial intelligence technology is going to be the main topic in tech for years to come. It’s significant both for its potential for good and its potential for disaster. All About Cookies surveyed 1,000 internet users in the US to get their thoughts on AI technology, and it appears that most people have been tricked by AI content at some point.
Between AI image, text, music, video, and voice generation, there’s a lot out there that can fool people into thinking they’re looking at actual human-created content. What makes things worse is that the technology keeps improving. Nowadays, generators such as DALL-E, Midjourney, and Stable Diffusion spit out images that can fool the vast majority of the population. And that’s before we consider the potential ramifications of Sora, OpenAI’s absolutely insane video generator, hitting the market.
So, surveys like these are important because they give us a glimpse into what people are actually thinking. CEOs and executives of large AI companies are all too eager to tell you about the benefits of AI technology. However, we don’t often hear the voices of everyday people. The results of the survey from All About Cookies do a better job of painting an accurate picture.
AI content has fooled most people, a survey reveals
The point of this survey was to pick Americans’ brains about generative AI technology and content. The survey asked questions such as how many people have been fooled by AI content, whether content creators should be legally responsible for what they produce, whether social media sites should be able to use people’s data, and so on.
Based on the findings, it appears that AI content has fooled 77% of the people surveyed. That’s very unsettling. AI-generated content exists in many forms on the internet, so it’s not uncommon for people to see it several times a day. Some pieces of media are obvious, like AI-generated images with telltale flaws. However, as the technology gets better, it’ll get much harder to differentiate between what’s real and what’s not.
Other results
The other results from the survey reveal more about the story. Of the people surveyed, 93% said that companies should be legally required to disclose when they use generative AI content. 82% said that celebrities and public figures should have protections against having their likenesses used for AI models. 72% believe that content creators should be legally responsible for AI content.
When it comes to the actual data and the use thereof, 97% of the surveyed users said that there should be protections against social media companies using their data. 45% believe that user data should be opt-in only, while 37% believe that user data should be used with exclusions or updated terms of service. Surprisingly, only 25% of people said that social media companies should not be able to use their data to train AI at all.
Next, when it comes to accountability, a whopping 72% of people said that those who post AI content should take legal responsibility for it. So, if the content someone creates exposes them to a lawsuit, they should face it, not the company that makes the AI tool. A minuscule 16% said that sites that post and create AI content should share legal responsibility.
The people have spoken
It should come as no surprise that many people do not like the idea of AI content. Along with the potential for good, there is a ton of potential for bad. Users are not comfortable with handing over their data to companies, and they’re not comfortable with technology that could spread misinformation. Hopefully, more surveys like this one will come about to show how people are responding to AI technology.