
Signal president Meredith Whittaker. Illustration: Supplied
On Feb. 10, an 18-year-old opened fire at a high school in Tumbler Ridge, B.C., killing eight people before turning a gun on herself. In the weeks that followed, OpenAI admitted that the perpetrator had been discussing the attack with ChatGPT, and that the company had chosen not to alert authorities. In the aftermath of one of the deadliest shootings in our country’s history, many Canadians are asking: Why not?
It’s a reasonable question. But the idea that artificial intelligence companies should automatically report violent conversations to police is more complicated than it sounds.
Machines Like Us host Taylor Owen explores this tension with Meredith Whittaker, president of Signal, an encrypted messaging platform that doesn’t collect any data at all and is regarded as one of the most private messaging apps available.
This is an excerpt of that conversation, from the latest episode of Machines Like Us.
Taylor Owen: Last month there was a school shooting in Tumbler Ridge, B.C. It turned out that the shooter had been talking to ChatGPT about the shooting, but OpenAI chose not to inform law enforcement. The conversation in Canada has gone immediately to: There should be mandatory flagging of these kinds of conversations. What do you make of that?
Meredith Whittaker: We often get fixated on technical solutions, hoping that they will cure really gnarly social problems. But if you zoom out, you recognize that the problem often isn’t a lack of data. It’s a lack of resources to pursue those leads.
Owen: What are some of the risks of mandatory flagging?
Whittaker: The danger is simply that the definition of what is – and is not – permissible shifts. We have seen abuse of surveillance powers throughout history, but we have never seen surveillance powers that are as pervasive as what large tech companies have today.
We need to recognize that scanning private messages for criminal behaviour could begin to extend into scanning private messages for dissent. We know that in the United States, administrative subpoenas have been used to ask the tech companies to identify people who have exercised their First Amendment rights to criticize the government.
We need to understand that a world in which we can’t communicate freely is a world in which we can’t live beautiful, intimate, open lives.
Owen: Signal is encrypted. And OpenAI chats, for example, are not. Should we even think about something like ChatGPT as a private space?
Whittaker: I don’t think it’s reasonable to expect privacy. They’re not built for privacy. OpenAI would not be able to roll out advertisements if these were truly private. And I think it is concerning that the simulation of a sentient and patient interlocutor lulls us into treating this as an intimate conversation. But on the other end is not a loving and sentient interlocutor; it’s a large company that is participating in the core business model of the tech industry, which is collecting as much data as possible and monetizing that.
Owen: AI is evolving really quickly. We’ve gone from pretty basic chatbots to AI agents that are out doing things in the world. Does anything concern you about that new capability?
Whittaker: Yes. The two key features of what we’re calling AI agents are that they have as much access to your data as possible – and then they have the capacity to act independently. An agent that checks in every three seconds is not actually that useful. So, here’s an example: You have an agent running on your device and you say, “Okay, agent, I would like you to plan a birthday party for me and invite my four closest friends.” In order to do that, the agent would need access to your browser. It would need access to your credit card. It would need access to your calendar. It would need access to your contact list. And in this case, if you’re a dedicated Signal user, it would need access to your Signal messages.
Owen: That feels like a real vulnerability to Signal’s encryption.
Whittaker: That undermines our ability to protect privacy at the application level, because the barrier for hackers and hostile nation states is no longer Signal’s gold standard encryption. The vulnerability is some agentic system that is built in an incredibly insecure way.
Owen: I want to ask you about something else you’ve been very involved in, which is the movement to protect kids online. I’ve never seen a policy break through in the way that social media bans have. What is your reaction to those efforts?
Whittaker: I do think they are trying to address a real problem. What we see at the level of implementation, though, is not pretty. Age verification systems don’t work that well. And they are ultimately creating an identity layer that tracks people across their use of the internet. And I’m concerned that under the classic guise of protecting children, what we’re seeing is increased surveillance at a time where that is being weaponized in ways that we should be pretty alarmed about.
Owen: But how do you deal with the reality that they just have massive public support? Here, 90 per cent of Canadians support a social media ban.
Whittaker: I don’t want to say they’re not desired. There’s a problem that people identify and it’s that our kids seem enthralled. Our kids seem disconnected. No one feels good after being on social media. So, how do we deal with that?
To me, there’s a bigger pathology in the business model. It is very dangerous for a scant handful of companies to have such pervasive control over our information environment. Any lever that is actually going to make a real difference would have to address that.
Owen: It feels like we live in a world where we’re often opting into surveillance. A third of people own a smart speaker. One in five has a doorbell camera. I’ve heard you say we’ve made privacy into this technocratic concept that makes it feel like we don’t have a stake in it. When you see people opting into these invasive technologies, how do you convince them they do have a stake in that choice?
Whittaker: I think they do have a stake in this, but I don’t think it’s always a choice. When we frame this as an individual choice, it’s almost like the way the fossil fuel industry invented the concept of the carbon footprint, when actually the culprit is a systemic issue. Similarly, in privacy, when we have to give up a bunch of data to apply for a job … These are not really choices. These are conditions of participating in life.
Owen: You’ve said we should have a love-centric view of privacy – what does that mean?
Whittaker: Privacy as a conversation has been bleached and deracinated to the point where we’re talking about data flows, not the fundamental value of having a rich life. What I care about is the ability to have a full life. I care about human thriving. Privacy is something that protects our ability to think and relate to each other. And the argument is that privacy needs to be preserved before data is created, because it’s actually the act of defining you that violates your privacy.
AI tools assisted with condensing the original podcast transcript, which was then reviewed and edited by the Machines Like Us team.