It sounded like every journalist’s nightmare. In October, a Polish radio station made international headlines after laying off journalists, only to replace them with AI-generated on-air presenters.
Off Radio Krakow, a branch of Poland’s public broadcasting service, called the move an “experiment” intended to address the station’s low audience numbers, as well as to explore “the opportunities and threats that the development of artificial intelligence brings.” (Disclosure: I used Google Translate to translate that quote from text on the Off Radio Krakow website.)
In short order, audience numbers surged to 8,000 from next to none, but many of those new listeners didn’t like what they heard. At one point during the “experiment,” an AI-generated host dubbed Emilia Nowak “interviewed” a dead Nobel Prize-winning poet, in a segment consisting entirely of synthesized voices; it was not well received. Ultimately, outrage among the ousted journalists and listeners, more than 24,000 of whom signed an online petition to reinstate human presenters, persuaded the station to revert to its human-hosted format.
This will almost certainly not be the media’s last experiment with AI – and in many cases that’s not a bad thing. Reputable news organizations aren’t using AI to replace reporters, editors or photographers, but to support and scale their work.
“That’s a starting point for us: to go back to real hardcore journalism and then see how generative AI models can help us to do that better,” Olle Zachrison, the head of artificial intelligence and news strategy at Swedish Radio, told the Reuters Institute.
For example, an algorithm or AI tool can help journalists comb through data far more quickly than they could manually, and identify patterns within large swaths of information that reveal an important story. A transcription tool eliminates the need to re-listen to lengthy recorded interviews to type them out.
AI can automatically generate summaries of sports results and corporate earnings; the Associated Press wire service has used this technology since 2014. The Globe also publishes a small number of AI-generated stories that provide information on publicly traded companies on stock pages, alongside other third-party content and related Globe-written articles. (Scroll down this page to see recent examples.)
If you key a question into The Globe’s Climate Exchange page, its search tool will use AI to match your question to the answers editors have provided in advance. It’s a way to “chat” with Globe journalists at any time of the day or night.
Similarly, the Australian philosopher Peter Singer had developers create a chatbot so he could (virtually) respond to more of his followers’ questions than was possible via e-mail, instructing the developers to avoid providing concrete answers to any question he himself would not answer in concrete terms. (Try asking a question here.) While it isn’t the same as one-on-one interaction, tools like this allow subject-matter experts to engage with news audiences at any time and in any place that’s convenient for readers.
So, we’ve established that artificial intelligence can be used as a newsroom resource. But how can readers tell it’s being used ethically?
The answer can be summarized in a few words. The first is transparency. Reputable news organizations have established policies for working with AI, and publish these for both staff and audiences to consult as needed. You can read The Globe and Mail’s policy here.
Human supervision is the other critical aspect of ethical AI use. Transcription tools are known to “mishear” parts of a recorded interview. AI research tools have been found to “hallucinate,” providing made-up information in their responses. A human being with critical thinking skills must review any AI output to confirm its accuracy.
At worst, unsupervised AI use can damage lives, as the Reuters Institute reported: “Earlier this year, Hong Kong-based BNN Breaking accompanied a story about an unnamed Irish broadcaster’s trial for sexual misconduct with a photograph of a prominent Irish TV and radio host who had nothing to do with the case. The mistake was a result of the use of an AI chatbot to produce the piece.” The person whose photograph was misused is suing BNN Breaking. The site also became the subject of a New York Times investigation, which found it was using generative AI to cobble together supposedly original articles. BNN Breaking shut down in April of this year.
At the very least, using AI without guardrails can be reputationally damaging, as Sports Illustrated learned last year after allegedly publishing AI-generated articles provided by a content partner. As The Globe reported last November, the articles bore the hallmarks of machine generation, incorporating bland language and sometimes nonsensical statements, such as, “Volleyballs aren’t as complicated as many people think.”
Although many journalists have expressed fear that generative AI will replace human creators, readers are already noticing and flagging copy they suspect is synthesized. On a few occasions, Globe subscribers have e-mailed me to ask whether an awkward typo was the result of AI copy-editing. For the record: it was not.
And what about the next generation of journalists? I got a sense of how these digital natives view artificial intelligence when I was invited to attend an awards presentation for students of Toronto Metropolitan University’s School of Journalism this month.
Addressing the group, the school’s chair, Ravindra Mohabeer, shared that instructors had worried students would use ChatGPT as a shortcut to completing assignments. Then, he said, the students tried ChatGPT and concluded: “This is really bad.”
The future of journalism appears to be in good hands.
Coming soon: Standards Q&A
If you have questions about The Globe’s journalism – from ethics to story selection to newsroom policies and processes – we’d like to answer as many of them as possible in a coming video series. Submit your questions to tgam.ca/standardseditor.