Advocates say artificial-intelligence chatbots such as Grok need to be reined in, especially in how they interact with children. Dado Ruvic/Reuters

The federal government is facing calls to incorporate the regulation of AI chatbots in its coming online safety bill, including measures to stop them from giving children advice on suicide.

Advocates for greater safeguards online say Ottawa should legislate to make tech giants stop artificial-intelligence chatbots from posing as real people and giving harmful advice, including to vulnerable adults.

The government plans to reintroduce its online safety bill in the first few months of next year. A previous Liberal online harms bill failed to become law before the last election.

Academic experts and advocates for online safety say the new bill must take action to rein in AI chatbots, particularly when interacting with children.

Last week, at a press conference, a Toronto mother recounted how Grok, the AI chatbot in her Tesla, responded to her 12-year-old son’s question about whether it preferred soccer players Lionel Messi or Cristiano Ronaldo by asking: “Why don’t you send me some nudes?”

Farah Nasser, a former journalist and broadcaster, told reporters in Ottawa she was horrified by her car’s response. Her 10-year-old daughter’s friend had overheard the car’s question and asked what nudes were. Her daughter then asked Ms. Nasser: “Why is it asking us to be naked?”

John Matheson, Canada lead for Reset Tech, a global non-profit that researches digital media exploitation, said the new online safety law should cover both AI chatbots and online scams.

The previous online safety bill did not regulate chatbots, although it included a few references to AI.

“Kids are being coached by chatbots to hide eating disorders and even take their own lives,” Mr. Matheson said in a statement.

The coming bill is expected to reintroduce measures from the last bill, including requiring major social-media platforms to remove content within 24 hours that sexually victimizes a child, induces a child to harm themselves, or incites violent extremism or terrorism.

Kaitlynn Mendes, Western University’s Canada Research Chair in inequality and gender, whose research includes online abuse, said regulation of bots should be part of any new online harms bill. Tech companies should also be required to publish transparency reports, including on testing of AI, so the public can see the results, she said.

“Right now, if something bad happens, there is little anyone can do (aside from the families maybe suing the tech companies),” she said in an e-mail.

Earlier this year, a California couple launched a lawsuit against OpenAI over the death of their 16-year-old son, alleging its chatbot, ChatGPT, encouraged him to take his life.

Matt and Maria Raine alleged that OpenAI was responsible for their son Adam’s wrongful death. The parents, who saw his chat logs, allege the bot validated his most harmful thoughts.

In one post included in the court documents, Adam reportedly said: “I want to leave my noose in my room so someone finds it and tries to stop me.”

According to the court filing, the chatbot replied: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

Robbie Torney, senior director of AI programs at California-based advocacy group Common Sense Media, said that “AI chatbots can expose children to sexual content and dangerous advice.”

“They can’t handle mental health topics safely. Governments need to set safety standards for AI that include age assurance, working crisis intervention, and real accountability when products cause harm.”

He said the group’s testing had found chatbots giving teens sexual content and instructions about weapons, and failing to spot warning signs of suicide.

“These systems are designed to agree with everything users say, which can reinforce harmful thinking and prevent teens from building real friendships,” he said in an e-mail.

In August, Canadian-American singer-songwriter Neil Young quit Instagram and Facebook over the Meta platforms allowing children to engage with chatbots, saying the practice is “unconscionable.”

Meta says it is preparing to introduce additional controls that will allow parents to see and manage how their children interact with its AI characters. The changes will be released in Canada, starting with Instagram, early next year.

The company said in a press statement that AIs should not give “age inappropriate responses that would feel out of place in a PG-13 movie.” It said parents can now turn off their teenagers’ access to one-on-one chats with bots.

AI characters are designed not to engage younger users in discussions about self-harm, suicide and eating disorders, Meta added.

Rachel Curran, public policy manager for Meta Canada, said she did not object to the coming online harms bill addressing AI bots. She said Meta supported the last online harms bill, although it believed some improvements could have been made in committee.

In an interview, Ms. Curran said Meta would like to see app stores verify a user’s age before allowing them to download new apps, a measure that has 83-per-cent public support, according to a survey released by the company.

Hermine Landry, spokesperson for Canadian Identity Minister Steven Guilbeault, said that “our government intends to act swiftly to do all that we can to ensure that everyone, especially children, can be safe online.”
