Britain's Prime Minister Rishi Sunak delivers a speech on AI at the Royal Society, Carlton House Terrace, London, on Oct. 26. Peter Nicholls/The Associated Press

At first glance, the photograph looked like a massive show of support for Palestinians just days after the war between Israel and Hamas broke out on Oct. 7.

The image began circulating on social media on Oct. 18 and showed a giant Palestinian flag in Spain's Civitas Metropolitan Stadium, home of the Atletico Madrid soccer team. The Arabic caption said: "Atletico Madrid fans support Palestine," and the post was seen more than 1.7 million times on X.

Atletico officials said the photograph was not legitimate and computer experts quickly noticed several defects that indicated it had been generated by artificial intelligence.

The fake picture was just the latest example of the risks posed by AI as machine learning becomes ever more powerful and large language models (LLMs) grow even more capable of mimicking human intelligence.

On Wednesday, representatives from government, industry and academia will gather at Britain’s Bletchley Park, the home of the Second World War code breakers, for the world’s first AI Safety Summit, where they will discuss the opportunities and risks of AI, and what steps can be taken to address the challenges.

“There is no question that AI can and will transform the world for the better,” British Prime Minister Rishi Sunak said last week. “But we cannot harness its benefits without also tackling the risks.”

Mr. Sunak is eager to position the U.K. as a global AI hub and use the summit to showcase the country's openness toward investment in the technology. He has already announced the establishment of an AI safety institute and he wants to create an expert monitoring group similar to the Intergovernmental Panel on Climate Change, or IPCC. The government is also investing billions in supercomputers and quantum computers, which are used to solve highly complex problems.

Few world leaders are expected to show up at the summit. Canada will be represented by Innovation, Science and Industry Minister François-Philippe Champagne. The most high-profile participant will be tech billionaire Elon Musk, who will hold a question-and-answer session with Mr. Sunak on X.

There is concern the focus of the meeting is too narrow and delegates will fail to address the immediate challenges of AI, such as disinformation, labour disruption and privacy issues. Mr. Sunak said in his speech that the summit will concentrate on the potential dangers of frontier AI, the most advanced systems that could theoretically be used to create chemical or biological weapons.

“Whilst the Prime Minister’s speech talks about risks from terrorism, chemical weapons and even the risk of human extinction, these emotive claims only serve to distract us from the AI risks that are happening right now,” said Mike Cook, a senior lecturer in computer science at King’s College London.

Michelle Donelan, Britain's Secretary of State for Science, Innovation and Technology, said the goal of the summit is to better define the risks and opportunities of frontier AI. "We see this as the beginning of a conversation," she said in a recent interview. "We don't believe that a one-size-fits-all policy that is mandated from the top is the right approach."

That’s in contrast to the approach taken by the European Union and the United States, which have begun work on more comprehensive regulations. On Monday, U.S. President Joe Biden signed a lengthy executive order that will require developers to share safety test results and calls for the creation of standardized tools to label AI-generated content.

AI expert Yoshua Bengio, a computer science professor at Université de Montréal, said he hopes the summit will address the need for governments to act quickly.

The AI systems “that are currently being trained are 100 times bigger than the ones we currently have access to and nobody knows how powerful they will be,” he said in an interview.

Dr. Bengio believes the world can’t wait for a global treaty on AI safety and governments must move unilaterally first and then work together on a global regulatory framework.

He was among two dozen leading AI researchers who recently signed a paper that included a number of proposals to manage the risks of AI. The group called on major tech companies to allocate at least one-third of their AI research and development budgets to ensure safety and the ethical use of the technology.

Martin Kon, the president and chief operating officer of Toronto-based AI specialists Cohere Inc., said he welcomed discussions about increased oversight. “We believe that guardrails are important,” Mr. Kon said in an interview. “Whether that’s policy, whether it’s legislation, whether it’s regulation.”

But he’s also worried the summit isn’t concentrating enough on current challenges. “There’s maybe a little bit of distraction from some of the talk of these existential risks,” he said. “I think some of it might be a bit of imagination from us all watching too many sci-fi movies.”
