Governments must go after social media platforms like X that allow users to create lewd pictures because these companies won't do it on their own. Dado Ruvic/Reuters
That people would use artificial intelligence tools to remove clothes from pictures of women and girls was entirely foreseeable. That’s why the companies behind some of these products installed safeguards to reduce the risk.
However, Elon Musk doesn’t like to restrict Grok, the AI tool on his social media company X. Users soon figured out that Grok would adapt a photo in response to prompts such as “put her in a string bikini” and “make her turn around and bend over.” The result, starting last month, was a flood of sexualized images on X.
This is disgusting and degrading behaviour by these users. And it demands a strong response from governments worldwide to hold accountable the company that made it possible.
Such images carry real harm. They harass and humiliate the people depicted, and they make social media an even more unwelcoming place, especially for women. Some Grok users have responded to women who criticized the crude images by asking the bot to put those critics in lewd scenarios.
This sort of behaviour can be attributed in part to the vile nature of some internet users. The online bulletin board 4chan became notorious for racism and misogyny. Grok users have even asked the bot to put the body of Renee Nicole Macklin Good, the mother of three shot by an ICE agent in Minneapolis, in a bikini.
But users seeking to shock and offend are only part of the problem. They are able to generate these images and, crucially, share them with the public because Grok allows it. And Grok does what the people who control its code want it to do. Although AI companies try to create the impression of sentience for their bots, the technology is a long way from that. X, and ultimately Mr. Musk, is responsible for Grok’s outputs.
Governments around the world are taking different approaches to the problem.
Indonesia and Malaysia are among the countries that have moved to ban Grok. The European Union is investigating X, as is the United Kingdom. In Canada, Evan Solomon, the Minister of Artificial Intelligence and Digital Innovation, says there are no plans to ban X. But the privacy commissioner is investigating whether the company is following privacy law.
There are concrete things Ottawa can do. One step for the federal government would be to amend Bill C-16, which targets people who distribute intimate images of others without their consent. The bill needs to be broadened to cover the sort of images being generated on X. And it must also target the platforms that carry them, a provision that had been included in Bill C-63, the online harms act that died after Justin Trudeau resigned.
Going after the platform for distribution is important because the consequence-free dissemination of the images is the core problem. While the misogyny behind wanting to generate these images is certainly abhorrent, the real harm is that they are being published on X.
In Canada, online communications companies are treated as intermediaries rather than publishers. That rule has its roots in a desire decades ago to foster online growth. Later, social media was seen as a social good also worth protecting. Currently, companies have to remove some types of material posted by their users, once it is flagged.
In the context of Grok, though, that leaves women in the position of having to discover the misuse of their own image, and puts the onus on them to request its removal. This is unacceptable.
Making social media companies responsible for content would curtail the free-wheeling nature of these platforms. But viewing them as a social good worth protecting is anachronistic in an era of AI that can digitally undress women.
X says it takes action to remove “high-priority” content, such as child pornography. On Wednesday, the company also announced tweaks to Grok’s code that it said would block users from generating lewd images from photos of real people, but only in jurisdictions where it is illegal to do so.
However, recall that Mr. Musk initially responded to the controversy over Grok-generated images by chortling about a doctored picture that put him in a bikini. The company also turned its image-generating feature into a perk available only to paid subscribers, making the ability to harass women a revenue driver.
These are not the actions of a company that will rein itself in voluntarily. Governments must force it to do so.