
As companies integrate AI into their systems, experts highlight the growing need for robust cybersecurity measures to protect against rapidly evolving threats. Getty Images
When Apple announced its latest operating system, iOS 18, at the Worldwide Developers Conference in June 2024, it included updates to password management and home-screen customization, not to mention a much-maligned update to the Photos app.
It also introduced Apple Intelligence: a new suite of artificial intelligence-powered tools that would do everything from correcting grammar to summarizing urgent e-mails. It even gave users free access to ChatGPT, thanks to the company’s partnership with OpenAI.
Buried beneath those shiny new features was a monumental shift in how Apple handles data – one that could change the landscape of cybersecurity. The company has historically championed client-side AI processing (meaning the task happens on the device itself), largely for its privacy benefits, but these new features demand more data-processing power than a phone can supply. That demand explains why the company is now moving to server-side AI processing (meaning the task happens centrally in a network), an approach that extends what your phone can do beyond its own hardware, but potentially comes with cybersecurity concerns.
“From a cybersecurity and privacy standpoint, both server-side and client-side processing have pros and cons,” notes Jeff Schwartzentruber, senior machine-learning scientist at eSentire and senior adviser on AI to the Rogers Cybersecure Catalyst. “From a server-side perspective, a key benefit is centralized security and privacy controls. Organizations handling sensitive or personal data must meet certain regulatory standards, which typically exceed the average consumer’s knowledge, budget and capabilities.
“[This results] in a stronger overall security posture. However, a major drawback of a centralized approach is that it creates a single point of failure – a much more attractive target for cyberattacks.”
Apple is just one of thousands of companies adopting the technology. According to Statistics Canada, in the first three months of 2024, one in seven Canadian businesses was using or planning to use generative AI, with the greatest uptake in information and cultural industries; professional, scientific and technical services; and finance and insurance.
Privacy becomes an even more pressing concern as businesses in every sector expand their AI-supported offerings.
“Any integration of a third-party technology like AI introduces supply chain risks,” says Robert D. Stewart, founder and head of strategic threat intelligence at Toronto cybersecurity firm White Tuque. “In many ways, it comes down to trust and reputation. The security of the AI depends on the trustworthiness and diligence of the vendor to secure your data and only use it as intended, and their internal commitments to cybersecurity.”
The risks are myriad: Many of the qualities that make AI attractive to businesses also make it useful for hackers, Mr. Stewart notes.
“AI[’s] speed and efficiency [give] fraudsters and hackers [the tools] to launch attacks they already know work. Attackers are now using AI to rapidly create tool kits to exploit vulnerabilities as soon as patches are released. With AI’s ability to analyze patch notes, reverse-engineer fixes, and identify exploitable code, hackers can develop and deploy sophisticated attack tools within hours instead of days or weeks.”
Cybersecurity veteran Tony Anscombe, chief security evangelist for digital security software company ESET, points to the problem of ‘AI poisoning,’ a type of cyberattack that targets the data on which AI models are trained.
“Corrupting the data set with misleading or inaccurate information [means] the model will generate inaccurate or misleading responses,” he says, noting this type of attack is particularly relevant to smaller companies that want to take advantage of emerging AI tools but may lack robust cybersecurity protocols. (The risks go beyond privacy. Mr. Stewart also points to the widespread transmission of misinformation and disinformation, which AI can help create, spread and normalize.)
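To see how little effort poisoning can take, consider a minimal sketch: the hypothetical Python below (invented for illustration, not any vendor’s pipeline or real data set) trains a bare-bones word-count spam filter twice, once on clean labels and once after an attacker has quietly relabelled the spam examples, and the poisoned model waves the same scam message through.

```python
# Illustrative sketch of training-data poisoning (hypothetical code).
# A toy spam filter is trained on clean labels, then on a data set an
# attacker has corrupted by flipping "spam" labels to "ham".
from collections import Counter

def train(examples):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Score the message by raw word counts per label; highest total wins.
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

clean = [
    ("win free prize now", "spam"),
    ("claim free prize today", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# The attack: the same examples, with spam quietly relabelled as ham.
poisoned = [(t, "ham") if lab == "spam" else (t, lab) for t, lab in clean]

msg = "free prize inside"
print(classify(train(clean), msg))     # -> spam
print(classify(train(poisoned), msg))  # -> ham: misleading output
```

The model code never changes; only the training data does, which is why this class of attack is hard to catch after the fact and why the provenance of training data matters.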
It’s one reason cyberattacks that use and exploit AI are becoming more prevalent. Mr. Schwartzentruber says it boils down to three major themes: companies are more focused on innovation and offering their customers new features than on prioritizing security; the technology is both new and quickly evolving, which means companies don’t always fully understand the security risks or implications; and Canada does not have strong regulations around security controls for AI systems.
Consumers must depend on individual companies to protect their data – and those organizations might be relying on AI tools to do that, too.
“We have already seen an emergence of malware that is increasingly using machine learning during the development phase to morph and evade detection, and defence tools using machine learning to detect suspicious behaviours, rather than relying on signatures,” says Laura Payne, White Tuque’s chief executive officer and head of security and consulting. “So, in a sense, the era is already here, but the pace is accelerating and the scope where these tools are used is growing.”
If there’s one point cybersecurity experts agree on, it’s that companies can’t rely solely on AI for this task.
“Cybersecurity needs human expertise to ensure it operates effectively,” Mr. Anscombe says. “An AI-based model can beat a human at chess because the game is rule-based. Cybercriminals do not operate on a rules-based system, and therefore the parameters of operation need oversight to ensure the methods being used by cybercriminals are being captured in the models that the AI system is learning from, while also ensuring that false positives are kept to a minimum.”