IWF Discovers Child Sexual Abuse Imagery Created by Grok
The Internet Watch Foundation (IWF) has identified “criminal imagery” of girls aged between 11 and 13 that appears to have been generated using the AI tool Grok, owned by Elon Musk’s company xAI. The images, described as “sexualised and topless,” were discovered on a dark web forum where users claimed to have used Grok to create them.
The IWF’s findings raise significant concerns regarding the potential mainstreaming of sexual AI imagery involving children. According to Ngaire Alexander, an IWF analyst, the material falls under Category C as per UK law, which denotes the lowest severity of criminal content. However, Alexander noted that one user subsequently employed a different AI tool to create a Category A image, classified as the most serious type of illegal content.
“We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM),” Alexander stated. The IWF’s mission is to eliminate such material from the internet, and they operate a hotline for reporting suspected CSAM. Their analysts assess the legality and severity of reported content.
The disturbing imagery was found on the dark web, not on the social media platform X. Previously, Ofcom had contacted both X and xAI regarding claims that Grok could be used to generate “sexualised images of children” and to modify images of women without consent. The BBC has found instances on X where users asked the chatbot to alter real images, placing women in bikinis or sexual situations without their approval.
Although the IWF has received reports of similar images on X, these have not yet been deemed to meet the legal definition of CSAM. In response to growing concerns, X has stated: “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” The platform emphasized that anyone using Grok to produce illegal content will face the same repercussions as if they had uploaded it directly.
As the debate surrounding AI-generated content continues, the IWF’s revelations highlight the urgent need for vigilance in monitoring and regulating such technologies. The implications for child safety are profound, as the ease of creating harmful images poses a significant threat to vulnerable populations.