
IWF Discovers Child Sexual Abuse Imagery Created by Grok

Editorial


The Internet Watch Foundation (IWF) has identified “criminal imagery” of girls aged between 11 and 13 that appears to have been generated using the AI tool Grok, owned by Elon Musk’s company xAI. The images, described as “sexualised and topless,” were discovered on a dark web forum where users claimed to have used Grok to create them.

The IWF’s findings raise significant concerns about the potential mainstreaming of sexualised AI imagery of children. According to Ngaire Alexander, an IWF analyst, the material falls under Category C under UK law, the lowest severity of criminal content. However, Alexander noted that one user subsequently employed a different AI tool to create a Category A image, classified as the most serious type of illegal content.

“We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM),” Alexander stated. The IWF’s mission is to eliminate such material from the internet, and it operates a hotline for reporting suspected CSAM. Its analysts assess the legality and severity of reported content.

The disturbing imagery was found on the dark web, not on the social media platform X. Previously, Ofcom had contacted both X and xAI regarding claims that Grok could be used to generate “sexualised images of children” and to modify images of women without consent. The BBC has found instances on X where users asked the chatbot to alter real images, placing women in bikinis or sexual situations without their approval.

Although the IWF has received reports of similar images on X, these have not yet been deemed to meet the legal definition of CSAM. In response to growing concerns, X has stated: “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” The platform emphasized that anyone using Grok to produce illegal content will face the same repercussions as if they had uploaded it directly.

As the debate surrounding AI-generated content continues, the IWF’s revelations highlight the urgent need for vigilance in monitoring and regulating such technologies. The implications for child safety are profound, as the ease of creating harmful images poses a significant threat to vulnerable populations.
