Google Gemini has been used to generate AI deepfake terrorism content

The mass adoption of AI-powered services has opened the door to countless possibilities in virtually every area of the tech industry and consumer market. From easily editing an image to summarizing long documents, artificial intelligence just makes things easier. However, malicious actors also have access to this technology. In a recent report, Google revealed that its AI tools have been used to generate deepfake terrorism content.

In Australia, big tech companies are required to periodically submit reports to the authorities on their efforts to minimize harm from the use of their products. The Australian eSafety Commission is in charge of receiving and analyzing these reports, and repeated violations of the law expose companies to fines or other sanctions.

Google discloses that Gemini has generated deepfake terrorism and child abuse material

Google’s latest security report covers the period from April 2023 to February 2024. According to the Australian agency, the Mountain View giant’s technology was responsible for generating AI deepfake terrorism content. Additionally, Google’s report mentions the use of Gemini to generate child abuse material.

“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” said eSafety Commissioner Julie Inman Grant.

Google says it received 258 reports of AI deepfake content related to terrorism or violent extremism. There were also 86 reports of bad actors generating child exploitation or abuse material with Gemini. Google is stricter about removing child exploitation material: the company uses a hash-matching system to detect such images and remove them as quickly as possible. However, Google does not apply the same technology to extremism-related content.
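
Google has not published the details of its hash-matching system, but the general technique is straightforward: compute a fingerprint of an image and compare it against a database of fingerprints of known prohibited material. Here is a minimal, hypothetical Python sketch of that idea. Note that production systems use perceptual hashes (PhotoDNA-style) that survive resizing and re-encoding; the SHA-256 digest used below only catches byte-identical copies and is purely illustrative.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known prohibited images.
# The single entry here is the SHA-256 of an empty byte string, used
# only so the demo below has something to match against.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

if __name__ == "__main__":
    # Check a (placeholder) upload before it is stored or served.
    sample = b""
    print(matches_known_material(sample))  # True: matches the demo entry
```

The design trade-off this sketch glosses over is robustness: a cryptographic hash changes completely if a single pixel changes, which is why real detection pipelines rely on perceptual hashing and fuzzy matching rather than exact digests.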

One of regulators’ main goals for artificial intelligence is for companies to build stricter safeguards against the creation of this type of material. The arrival of ChatGPT in 2022 raised the first concerns in this regard, and years later the issue persists, though perhaps to a lesser extent.

The Australian Commission has already sanctioned Telegram and X

The eSafety Commission praised Google for its transparency in disclosing the malicious uses some criminal actors are making of its AI tools, calling the report a “world-first insight.” Other firms have not received such favorable words from the agency: Telegram and X (formerly Twitter) were fined over shortcomings in their reports.
