This morning, CFA sent the letter below to key enforcement agencies in response to the latest news of illegal behavior by xAI:
We are writing to renew our urgent call for enforcement against xAI for using generative AI to create and distribute Child Sexual Abuse Material (CSAM) and other non-consensual intimate imagery (NCII) on its social media platform.
On August 15, a coalition of fifteen groups led by the Consumer Federation of America (CFA) called for an investigation into “Grok Imagine,” urging your offices to act to stem a dangerous and violative product that can “do nudity” and “create realistic videos of humans,” according to an xAI employee.
At the time of the complaint, the feature was part of a standalone image and video creation app that included some friction against using real photos. Late last week, it became clear that xAI has chosen to allow an X (formerly Twitter) account of its “Grok” bot to create NCII of a user on command in a public feed. This allows users to “undress” any individual whose photo is posted, without permission and regardless of the subject’s age. Instead of making immediate changes or showing remorse, xAI CEO Elon Musk has sought to make these features a joke. A barrage of similar images has followed, exploding across the platform and endangering people.
xAI is violating not only CSAM and NCII laws, through both the creation and distribution of this content, but also Unfair and Deceptive Trade Practice laws, privacy laws, and more. It is essential that regulators understand this is not an “AI gone rogue” or a “choice” by the chatbot. It is the result of foreseeable and purposeful choices made by individuals at the company about a product they built. Those individuals can and must be held accountable.
If this does not lead to regulatory action, what will? Regulators in India, Canada, the UK, and throughout Europe have already begun to act against this illegal and unacceptable behavior. We urge your offices to immediately and publicly open investigations into the company’s actions; doing so is essential for user safety and trust.

