Beware of AI slop. It’s causing frustration among your employees and needless friction in getting things done effectively.
The folks at media and insights company Charter, who have been keeping a close eye on artificial intelligence in their publications, first highlighted the phenomenon in May.
“We’ve noticed a growing trend in professional circles of people sending ideas or documents for review with notes like ‘here are some ideas from ChatGPT’ or ‘here’s a draft I wrote with the help of ChatGPT.’ What follows is often unedited – or lightly edited – text that’s overly generic or missing important context,” editor Kevin Delaney wrote. “That leaves it up to the recipient to improve the output, effectively making them ChatGPT’s editor, a role they didn’t sign up for.”
Now, a survey is out that attempts to put some numbers to the threat in the workplace. Of 1,150 U.S.-based full-time employees across industries, 40 per cent report having received workslop in the last month. Employees who have encountered workslop estimate that an average of 15.4 per cent of the content they receive at work qualifies. The phenomenon occurs mostly between peers but can also work its way up or down the hierarchy between bosses and subordinates.
“Workslop occurs across industries, but we found that professional services and technology are disproportionately impacted,” a team from BetterUp Labs and the Stanford Social Media Lab writes in Harvard Business Review.
The term “AI slop” started to appear in Google searches in the middle of last year. But the BetterUp/Stanford team – Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano and Jeffrey T. Hancock – focused on the workplace and so prefer the term workslop, which they define as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
They tell us to think of it as a cognitive tax. Rather than a human outsourcing mental work to a machine – which is what we assume happens when AI is used at work – the reverse occurs: the machine offloads cognitive work onto a human being.
“When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues,” they write.
Employees in the survey reported spending an average of one hour and 56 minutes dealing with each instance of workslop. When asked how it feels to receive workslop, 53 per cent report being annoyed, 38 per cent confused and 22 per cent offended.
“The most alarming cost may be interpersonal. Low effort, unhelpful AI-generated work is having a significant impact on collaboration at work. Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable and reliable than they did before receiving the output,” they note.
The laziness involved starts at the top, where the team suggests organizational leaders who advocate for AI everywhere all the time have modelled a lack of discernment in how to apply the technology. In essence, they are passing the buck to employees to figure out where and how to use AI.
Those employees divide into two groups, or mindsets. One group are the “pilots,” who are more likely to use AI to enhance their own creativity, applying it purposefully to achieve their goals. The other group are the “passengers,” who are much more likely than pilots to use AI to avoid doing work.
The team argues organizational leaders need to frame AI as a collaborative tool, not a shortcut. Leaders need to be clearer about the outcomes and specific usages they are seeking as collaborative dynamics change. They also need to uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.
The dangers potentially escalate when artificial intelligence agents are in place, the software autonomously performing tasks, using its version of perception, reasoning, planning and memory. Reviewing the first year of agentic development recently, a group of McKinsey & Co. consultants wrote that “one of the most common pitfalls teams encounter when deploying AI agents is agentic systems that seem impressive in demos but frustrate users who are actually responsible for the work.” Lareina Yee, Michael Chui, Roger Roberts and Stephen Xu noted that users quickly lose trust in the agents and adoption levels are poor. Any efficiency gains achieved through automation can easily be offset by a loss in trust or a decline in quality.
They advise companies to invest as heavily in agent development as they do in employee development. They cite a business leader who told them: “Onboarding agents is more like hiring a new employee versus deploying software.” AI agents should be given clear job descriptions and continual feedback so they become steadily more effective.
“When codifying practices, it’s important to focus on what separates top performers from the rest. For sales reps, this might include how they drive the conversation, handle objections and match the customer’s style,” they write. “Crucially, experts should stay involved to test agents’ performance over time; there can be no ‘launch and leave’ in this arena.”
The magic of AI may be less magical than we thought.
Cannonballs
- To those who believe hope is not a strategy to follow, leadership development consultant Julie Winkle Giulioni counters with these statistics from a survey that found employees with the highest levels of hope are: 74 per cent less likely to suffer from burnout or anxiety, 75 per cent less likely to suffer from depression, 33 per cent less likely to endorse quiet quitting and 49 per cent less likely to consider quitting. A hope-based strategy, she adds, is about helping employees see possibilities, navigate obstacles and believe in their own capacity to succeed.
- Heather Perry, chief executive officer of California’s Klatch Coffee, says her hiring strategy is to find people who want to own something – and let them. She seeks store managers with pent-up energy who have ideas they are dying to try that they can add to the basic framework the chain provides of menus, pricing and culture.
- Entrepreneur Seth Godin dismisses Winston Churchill’s famed words about never giving in. The advice suited its moment, but in most cases, he argues, “it means we spend an enormous amount of time in senseless battles with senseless folks who are also following this advice.” Beyond issues of honour and good sense, he urges you to give in often.
Harvey Schachter is a Kingston-based writer specializing in management issues. He, along with Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.