
One of the biggest problems organizations face in implementing artificial intelligence initiatives is that top executives and middle managers aren’t on the same page.

“Until executive leaders understand this divide and take ownership of closing it, no level of investment or ambition at the top is likely to produce the transformation they’re promising,” Jeremy Korst, founder of Mindspan Labs, an AI-transformation consultancy, and Stefano Puntoni and Prasanna Tambe, professors at the Wharton School, write in Harvard Business Review.

That conclusion follows three years of tracking AI adoption, in which it was evident results have not materialized at a scale commensurate with investment. Top executives and middle managers are starkly divided on whether the return on investment is worthwhile and how their pace of change compares to that of other organizations. Middle managers, while still excited and optimistic about AI prospects, are 64 per cent more likely than their senior colleagues to describe themselves as “cautious” about progress.

“Senior leaders tend to use AI for high-level synthesis, strategic drafting and decision support, tasks where the technology performs well, so the current capabilities tend to benefit their work. Middle managers, on the other hand, are tasked to deploy the technology in the messier territory of day-to-day operations: Workflows built over years, teams with uneven technical comfort, output that has to be consistently right, not just fast. When the tool works, both groups understand and reap the benefits. When it fails, typically only one of them has to cope with the aftermath,” they write.

That’s the middle managers, who already feel overworked and under-resourced. The two groups also have different time horizons. Executives are rewarded for vision, looking out and imagining what’s possible. Middle managers are rewarded for execution – making things work today. They confront AI’s current limitations: Hallucinations, the friction of trying to integrate it with other technology and people, and the day-to-day workflow disruption.

“Whatever executives aspire to, it is middle managers who will actually make it real – or not,” the researchers warn.

They stress the solution isn’t more technology investment or bolder vision. Instead, executives must turn their attention inward to the managers carrying the burden of this transformation and equip them with sufficient support, clarity and bandwidth.

Find out if managers and teams understand and embrace your AI vision. Determine if there are pockets within your organization that are more ready or more resistant to these changes. Co-create the future playbook for bringing AI on board; don’t hand it down.

“Bring managers into the roadmap discussions before decisions are made, not after. The goal is to ensure all levels of leadership are aligned and enlisted on a common path – not just aware or engaged,” they say.

It’s also vital to reduce the current load before adding to it. Measure readiness and ensure sufficient capacity to accommodate the changes you desire. That may require investment in rethinking workflows and reskilling people before piling on more technology.

You may also want to consider in your diagnosis some suggested questions from Ghassan Karian, chairman of the Ipsos Karian and Box consultancy, who believes too many companies are blindly following the AI crowd, moving too quickly and with too little thought. “Running before walking is how many end up with expensive tools solving trivial problems, or powerful tools bolted onto broken processes,” he writes on his Substack.

So ask:

  • What is the actual business problem you’re trying to fix?: Usually, the question executives ask is “How can we use AI?” Instead, consider, “Where are we slow, wasteful, error-prone or stuck?” AI amplifies what you have and must be congruent with it. He notes that good starting points are boring ones: Where do decisions bottleneck? Where do people spend time on low-value work? Where do customers experience friction?
  • What work do we want humans to stop doing?: Most AI strategies talk about adding more while very few specify subtraction. “If AI is supposed to help, what exactly should it replace? Drafting? Summarizing? Triage? Forecasting? Quality checks? If the answer is ‘everything, eventually,’ you’re not doing strategy – you’re doing science fiction,” he says.
  • Do we trust the data this will be trained on?: He notes that AI doesn’t hallucinate out of nowhere. It reflects the quality of what you feed it. So ask uncomfortable questions about what your data really represents – how it was created, filtered and rewarded.
  • Who carries responsibility when AI goes wrong?: If, for example, an AI system rejects a job candidate or flags a customer as risky, who owns the outcome? If no human is accountable, he says you’re not innovating but are “responsibility laundering.”

He also urges you to think through what success for your efforts looks like in human terms rather than technical metrics about speed, accuracy and cost effectiveness. Will people make better decisions? Will customers experience less friction? Will teams spend more time on judgement, not administration? Do errors decrease in ways that matter?

AI implementation is hard. You need to get your top executives and middle managers aligned, focused on the realities of your organization.

Cannonballs

  • Most business challenges are like sciatica, observes entrepreneur Seth Godin. With sciatica, sometimes the pain is felt in the thighs or ankles, not the back. Similarly, you might believe the problem you face at work is your customer’s attitude or how busy a location is, but it’s probably a different problem, something more systemic and well-concealed. Find the system, he says, and you are halfway to fixing it.
  • New research from the advocacy group Lean In finds women receive less recognition for AI use at work. Among those who have used AI on the job, men are 27 per cent more likely to have been praised for doing so. Men are also 23 per cent more likely than women to be encouraged by their managers to use AI.
  • “Talk either leads to action or substitutes for action,” says Ottawa thought leader Shane Parrish.

Harvey Schachter is a Kingston-based writer specializing in management issues. He, along with Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.
