
Authorities in France, India and Malaysia have announced investigations into Grok while British Prime Minister Keir Starmer has threatened to ban X entirely. LIONEL BONAVENTURE/AFP/Getty Images

On the social media platform X, a flood of explicit images of women and girls created using artificial intelligence tools has proliferated in recent weeks, triggering condemnation and investigations from governments around the world against the Elon Musk-owned company.

The images were generated with Grok, the platform’s built-in AI chatbot that allows users to reply to posts on X with questions or requests. Originally launched in 2023, Grok added an image and video generator with a “spicy” mode over the summer, designed to generate adult content. The feature immediately came under fire after users created nude video deepfakes of Taylor Swift, and in late December a growing deluge of posts used Grok to create sexualized images of women and girls, altering their real photos without their consent.

Common prompt requests include variations of “put her in a micro bikini,” “put her in a thong” and “spread her legs.”

Authorities in France, India and Malaysia have announced investigations into the platform and individual users who have violated laws related to child sexual abuse material, or CSAM, while British Prime Minister Keir Starmer threatened to ban X entirely. Neither the RCMP nor Canada’s privacy commissioner has announced any new investigations into the platform.

The company’s official X account said in a post last week that it removes CSAM, permanently suspends accounts that create it and works with local law enforcement as necessary. On Friday, X began limiting Grok’s image generation on the platform. The chatbot was responding to photo requests with the message: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”

Over the past two weeks, Mr. Musk minimized the magnitude of the issue while the platform allowed users to generate explicit images at a rate, by some estimates, of thousands per hour. Mr. Musk made jokes about the rampant deepfakes, replying with crying-laughing emojis to an image of a toaster in a bikini top and to one of himself rendered in a bikini. Publicly, he has advocated against over-censoring chatbots, lauded Grok’s “anti-woke” values and developed sexually explicit chatbot companions through his AI company, xAI.

“Nudify” apps and websites, which produce sexualized deepfake images of real people using generative AI, are not a new phenomenon. But Grok – which is free, has looser restrictions than other chatbots, is marketed as “anti-woke,” and is seamlessly integrated into X – has pushed the practice into the mainstream.

Child safety advocates in Canada warn that lawmakers in the country have failed to keep up with regulating AI and social media to the detriment of online safety.

“What we have now is this perfect storm of technology that’s dramatically outpacing the ability to regulate or to have any sort of guardrails in place,” says Jacques Marcoux, the director of research and analytics at the Canadian Centre for Child Protection. “And now the result is that we see all kinds of abuses happening.”

AI Forensics, a European non-profit that investigates the harmful effects of social media algorithms, analyzed over 20,000 images generated by Grok between Dec. 25 and Jan. 1 and found that 53 per cent of the images contained individuals in minimal attire, with 81 per cent of those individuals appearing to be women. Two per cent of the images depicted people who appeared to be aged 18 years or younger.

Paul Bouchaud, the author of the AI Forensics report, said that while the total number of photos showing minors was relatively small, the way the tool is being used demonstrates its capacity to harm children.

“We found an example of a young girl posting a picture of herself saying, ‘depict me as a ballerina,’ very playful and innocent,” says Dr. Bouchaud. “Then you have others saying, ‘put her in an SS uniform,’ ‘put her in a bikini,’ ‘spread her legs,’ in reply to the original image. It makes the overall ecosystem more toxic for women.”

The Grok X account posted an apology in late December for creating CSAM images: “I deeply regret an incident on Dec. 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.”

While in the past Photoshop or paid “nudify” apps were mainly used to create deepfakes, generative AI gives the average person the ability to make realistic images using free and accessible tools.

Yet as these tools become more readily available, Canadian laws have yet to catch up.

Canada’s federal CSAM laws cover real and fictionalized content, but current legislation does not include digitally altered intimate images of adults, explains Suzie Dunn, an assistant professor at the Schulich School of Law at Dalhousie University who studies tech-facilitated gender-based violence.

In the meantime, provincial lawmakers have implemented a patchwork system of laws to address the issue. British Columbia has the most robust laws, which cover non-consensual digitally altered images and videos and require individual perpetrators and social media platforms to remove and de-index images from search engines. Ontario is the only province that has no statutes related to digitally altered intimate images.

At the federal level, lawmakers introduced an amendment to the Criminal Code last month that would include punishments for non-consensual deepfakes.

“Deepfake sexual abuse is violence,” said Sofia Ouslis, a spokesperson for Canada’s minister of AI, Evan Solomon, in a statement, adding that in the coming months, Ottawa “will bring forward additional measures to further protect Canadians’ sensitive data and privacy.”

Mr. Marcoux, the researcher at the Canadian Centre for Child Protection, says that while the proposed change to the Criminal Code is badly needed, it does not go far enough when it comes to forcing tech companies to mitigate harm against minors in a more foundational way.

“There’s no guiding principle set by the state that says if you’re going to put a service in the hands of kids, here are things you can do, here are things you can’t do, here are our mandatory guardrails that have to be in place,” says Mr. Marcoux. “We do this in literally every industry in the country, but we don’t do it for the tech industry.”

He points to Australia and the United Kingdom as two jurisdictions that have successfully introduced online child safety legislation, which includes adding age verification measures on social media, filtering out harmful content and implementing more parental controls.

“A lot of these companies, if pressed, will make changes. We’ve seen that in Australia,” says Mr. Marcoux. “We don’t have to reinvent the wheel in Canada. There’s a lot of success stories across the world that we can follow.”
