opinion

Left to right: Priscilla Chan, Meta CEO Mark Zuckerberg, Lauren Sanchez, businessman Jeff Bezos, Sundar Pichai, and businessman Elon Musk, among other dignitaries, attend Donald Trump's inauguration as the next President of the United States in the rotunda of the United States Capitol in Washington, D.C. on Jan. 20.SHAWN THEW/Reuters

David Weitzner is the author of Thinking Like a Human: The Power of Your Mind in the Age of AI. He’s an associate professor of management at York University, and writes the Managing with Meaning blog for Psychology Today.

Keen observers at Donald Trump’s inauguration noticed that the best seats in the house went to the tech oligarchs. Seated in the front, ahead of even the President’s cabinet picks, were Alphabet CEO Sundar Pichai, Meta’s Mark Zuckerberg, X’s Elon Musk, Amazon’s Jeff Bezos and Apple’s Tim Cook. Mr. Trump may have been taking a page out of the UFC’s television playbook (the league’s CEO, Dana White, was also present), ensuring cameras capture celebrities in the crowd as a signal to viewers that what they are witnessing is a classier event than the bloody street fight it first appears to be.

I call this group “algorithmic supremacists.” They want to use algorithmic thinking to change and control everything, from elections to income disbursement to bodily autonomy. And their access to the seats of power should make you nervous.

Optimists at the time alluded to the possibility of these tech titans ushering in a new era of technocracy, but it hasn’t taken long to dispel that notion. Mr. Musk’s short tenure heading DOGE has shown that he is no “technocrat,” a label for those governing with specialized scientific or engineering expertise. Even the recent global tariff announcements were built on a simplistic back-of-the-envelope calculation, not the deeper mathematical analysis of tariff rates and non-tariff barriers that a technocrat would undertake to shape such a disruptive economic policy.

So they may not be technocrats, but that doesn’t mean their intentions are nefarious, right? Some may see hyperbole in the “algorithmic supremacist” label. After all, an algorithm is nothing more than a step-by-step process for reasoning through a problem using if/then logic. Although in the modern age the moniker has been popularized by computer scientists, algorithms are not used only in technology and programming. Humans have long used this tool as a mode of mental processing.

We all use algorithmic thinking constantly in our daily lives. When approaching an intersection with a traffic light, for example, we employ a very simple algorithmic calculation to determine what to do next: if the light is green, we proceed; if the light is red, we stop. More complicated algorithms are required for more complex tasks. Consider the algorithm for crafting McDonald’s french fries, which includes acquiring 19 ingredients and subjecting them to a process of blanching, partial frying, flash freezing and then refrying.
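The traffic-light example can be written out as a few lines of Python, a minimal sketch of if/then reasoning (the function name and the handling of an amber light are illustrative assumptions, not part of any real system):

```python
def next_action(light: str) -> str:
    # A minimal if/then algorithm: map the light's colour to a driving action.
    if light == "green":
        return "proceed"
    elif light == "red":
        return "stop"
    else:
        # Amber or unrecognized signal: slow down and prepare to stop.
        return "slow"

print(next_action("green"))  # proceed
```

The point is only that the logic is a fixed sequence of conditions and actions; the same structure scales up, with many more steps, to the french-fry recipe.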

The step-by-step breakdown is the defining characteristic of an algorithm. These sequences create the order our brains need to facilitate the mental processing of information too complicated to synthesize instantaneously. Clearly, there is nothing inherently problematic in preferring algorithmic/rule-based decision making. Then what’s the issue?

The problem is that algorithmic supremacists hold some very specific, somewhat peculiar, positions. They have some non-mainstream ideas on how intelligence is to be defined, what sorts of intelligence humanity should privilege, and who should wield power in support of allowing intelligence to flourish. For example, many algorithmic supremacists, like Google’s Ray Kurzweil and Larry Page, as well as Jeff Bezos and Elon Musk, embrace transhumanism, the belief that humanity’s future is to evolve beyond our current biological limitations and ultimately fuse into a new state of digital existence.

According to one of the movement’s most prominent philosophers, Nick Bostrom, transhumanists view humanity as a “work-in-progress, a half-baked beginning that we can learn to remold in desirable ways.” Mr. Musk has said that his goal with Neuralink is “to achieve a symbiosis with artificial intelligence,” enabling a future of humans “merging with AI.” To transhumanists, what is best for humanity is working to access a future outside our bodies, minds, and existing notions of the self, operating on a platform controlled by Big Tech.

Many algorithmic supremacists, including the now imprisoned founder of FTX, Sam Bankman-Fried, also endorse effective altruism (EA), a related ideology that is equally anti-human. In the movement’s own words: “Everyone wants to do good, but many ways of doing good are ineffective.” The polished public-facing mouthpieces for EA carefully choose words that seem reasonable enough. But a closer look reveals cause for worry.

Effective altruists celebrate moral-value number crunching as an almost sacred activity – the more complex the algorithm, the higher its worth. They seek to maximize the good by supporting only causes that can be calculated as offering the largest return on investment, which is what they mean by the seemingly innocuous observation about effectiveness. Only an algorithmic supremacist would be so confident in their ability to accurately predict and capture the myriad future variables that would be necessary for such a calculation.
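The calculus being described can be caricatured in a few lines of Python. This is a toy sketch of the ranking logic, not anyone’s actual model: the cause names, costs and “impact” figures are invented for illustration, and the real-world difficulty lies precisely in the variables such a table pretends to capture:

```python
# Toy version of the "largest return on investment" ranking the
# effective-altruism calculus relies on: estimated impact per dollar.
causes = {
    "cause_a": {"cost": 1000, "estimated_impact": 50},  # 0.05 impact per dollar
    "cause_b": {"cost": 200, "estimated_impact": 5},    # 0.025 impact per dollar
}

# Pick the cause with the highest estimated impact per dollar donated.
best = max(causes, key=lambda c: causes[c]["estimated_impact"] / causes[c]["cost"])
print(best)  # cause_a
```

The arithmetic is trivial; the hubris is in believing the “estimated_impact” numbers can be known in advance.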

Another way algorithmic supremacists align nicely with the current U.S. administration is in their shared vision for a future where elections have become a relic of the past. While campaigning last year Mr. Trump promised his supporters that if they vote for him now, they won’t have to do so again in four years as “it will be fixed.” While there are several ways to achieve this “fix,” algorithmic supremacists have proposed a technological one that may be cleaner than classic authoritarianism.

WorldQuant’s CEO Igor Tulchinsky believes we are in an Age of Prediction, where future tech will even be able to accurately predict how we will vote, raising questions about the need for elections. He recognizes the repressive nature of absolute prediction, given that, by definition, this type of algorithm will already know with certainty all the choices we’d make in any given scenario. But, as Mr. Tulchinsky poetically observes, “the fire of prediction cannot be snuffed out now, nor should it be. The risk of being challenged and charred by our algorithms, while dangerous, is worth the illuminated future.”

A quick search on YouTube turns up a video of the World Economic Forum’s former chairman Klaus Schwab similarly musing about the potential of predictive algorithms. He, like Mr. Tulchinsky, wonders what life will look like when AI’s predictive capacity becomes so unassailably perfect that we will no longer need to hold elections. The AI will “know” whom each of us would vote for and, thus, know with certainty who would ultimately win. Candidates would simply ascend to, or, more likely, retain, power on the AI’s say-so. The “fix” would be in.

OK. So algorithmic supremacists have a pretty bleak vision for the future. But how much damage can they do over the next four years? Well, to my mind, their understanding of the present is equally disturbing. Algorithmic supremacists show a disdain for the inefficient ways humans read, write, draw, compose, and create, wanting us to outsource those activities to machines that can spit out mathematical variations of what uncredited human artists produced with heart and soul.

OpenAI’s CEO Sam Altman is certain that our algorithm-based operating system is not materially different from his company’s AI. He posted on X that “Language models just being programmed to try to predict the next word is true, but it’s not the dunk some people think it is. Animals, including us, are just programmed to try to survive and reproduce, and yet amazingly complex and beautiful stuff comes from it.”

Mr. Altman is offering an incredibly reductionist view of humanity. To algorithmic supremacists, art, language and society somehow emerged from the simple algorithmic programming of “survive and reproduce.” But “survive and reproduce” does not explain the things feeling humans are most proud of. Sure, we have a biologically programmed urge to reproduce, but that doesn’t explain the choices of parents who not only stick around to see their children grow but also make sacrifices to raise their kids to be better than they are in every notable way. “Survive and reproduce” doesn’t explain all the communal structures built to make people feel less alone.

Algorithmic supremacists really don’t seem to care that much for their fellow human beings. The age of AI has thus far shown itself to be characterized by a general trend of downward mobility, and the Canadian government has released a report warning that upward social mobility may never recover. That’s why the most noted policy papers emerging from this camp argue that politicians need to institute a universal basic income (UBI). As their tech takes away jobs from creative humans, they want governments to step in and become the primary providers of income.

Their stance on UBI is rooted completely in self-interest. Tech leaders envision a world where their machines will be doing the work people used to do. All profitable, value-creating labour will be theirs, and theirs alone, to exploit. Consequently, they ask governments to ensure the displaced regular folks do not starve as we evolve into digital beings that will no longer need to eat.

Here’s the thing, though. We don’t need to think like algorithmic supremacists. When an algorithm tells us to deny care, deny hope, deny giving a person who stands before us a chance to be heard … we can still think like a human.
