Workslop Has Always Existed—AI Just Makes It More Visible
The real question isn't whether AI creates bad work. It's whether people have the skills to produce good work with it.
October 23, 2025

AI doesn't create workslop. It reveals who was already producing it and accelerates both brilliance and mediocrity.
The headlines say AI is creating a “workslop” crisis and flooding the world with low-quality work. We may well be drowning in subpar output, but here's what that narrative misses: sloppy work has always been around. Bad PowerPoints, weak spreadsheets, and hollow documents existed long before ChatGPT. AI is just helping people produce it faster. The difference now is that AI also produces brilliant work faster for those who know how to use it.
AI acts as an amplifier. It magnifies what's already there: vision and skill, or the lack of them. A creative concept that lived in someone's head for decades can now be realized in minutes with a tool like Sora, OpenAI's text-to-video generator. Meanwhile, others can use those same tools to flood inboxes with thoughtless content that adds no value. It's the same story across every area where AI is being deployed.
This creates a new divide in the workplace. But it’s not between people who use AI and people who don’t—it’s between people who use AI thoughtfully and people who just press send on whatever it produces. Bridging this gap starts by developing the skills to use AI well.
Vision Makes AI Valuable
So what does using AI well actually look like? The pattern is becoming clear. When someone has a clear purpose and understands how to coach AI, the results can be remarkable. When someone lacks direction and treats AI like a magic button, the output is predictably hollow.
Here's a telling example: Sora limits videos to 8 seconds. That constraint forces people to have a clear point. You can't hide behind volume when you only have 8 seconds to work with. The limitation actually produces better results than unlimited length because it requires thought before execution.
The difference isn't the technology—it’s whether someone has something worth saying. Quality comes from having a clear purpose, not from avoiding AI. A brilliant colleague produces insightful work. Someone who only speaks in clichés produces noise. That's true whether they're using AI or not.
How to Spot the Difference Between Volume and Value
Slop doesn't bring value—it brings volume. Value can be 3 words or 3,000 words, but they're words that matter.
Think about the “common tells” people cite as giveaways that content is AI-generated: em-dashes, repetitive sentence patterns, or stock words like “landscape.” But those are surface-level signals. The real indicator is whether you stay engaged or your mind drifts. Good content, whether AI-generated or human-written, pulls you in. Weak content feels like noise, no matter who or what created it.
You know quality when you're emotionally involved in what you're reading. When your attention wanders, that's the signal. It's the same experience you have with people: some colleagues are genuinely insightful, while others never say anything interesting even though they're technically competent.
The New Critical Skill: Judgment
AI is deceptively easy to use but really hard to master, and this gap explains most of the frustration around outputs.
Most people treat AI like a search engine. They type a query, take whatever it gives back, and get disappointed with the results.
Getting valuable output requires understanding how to coach the technology—much like coaching a person.
Here's a concrete example: if you ask ChatGPT for an answer and then ask for sources separately, you'll often get fabricated citations. But if you ask for the answer and sources together upfront, you're far more likely to get accurate, verifiable work. The difference? You told it from the start that sources matter, so it planned accordingly. That's not obvious to most users.
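To make that concrete, here is a minimal sketch using OpenAI's Python client. The model name, question, and prompt wording are illustrative assumptions, not a prescribed recipe; the point is simply that the stronger request declares upfront that sources matter.

```python
# A minimal sketch of the "ask for sources upfront" pattern using OpenAI's
# Python client. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What are the main drivers of employee burnout?"

# Weaker pattern: sources are requested as an afterthought in a second
# turn, which is where fabricated citations tend to appear.
weak = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": QUESTION}],
)
# ...followed later by a separate "now give me sources for that."

# Stronger pattern: the request itself says sources matter, so the model
# plans the answer around claims it can actually cite.
strong = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": QUESTION
        + " Cite a verifiable source for each claim, and write"
        + " 'no source found' for any claim you cannot support.",
    }],
)
print(strong.choices[0].message.content)
```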
This skill gap is already showing. Some people get remarkable results, while others blame their tools. And the gap shows up in unexpected ways. I recently used Sora to generate a video of myself riding a motorcycle and shared it with my family. My mother called me in a panic, asking what I was doing riding a motorcycle without a helmet. Meanwhile, my five-year-old daughter immediately knew the video was AI-generated. Detecting AI, and more importantly, detecting quality, is becoming a new form of literacy.
The AI Fluency Divide Is Already Happening
Entry-level workers often have more AI fluency than their managers. That's a reversal of the typical skill hierarchy, and it creates tension. Microsoft's research shows that 78% of AI users now bring their own tools to work, but many lack guidance on using them effectively.
An unusual dynamic is emerging: People are using AI to judge AI output. For instance, companies screen AI-generated job applications with AI. Workers use AI to summarize floods of content, looking for signals in the noise.
We're in an arms race where both the problem and the solution involve the same technology.
The failures happen when individuals use AI carelessly—not checking sources, not editing, just hitting send. That's not an AI problem. That's a judgment problem.
What Organizations Should Do About “Slop,” AI-Generated or Not
First, treat AI skills as learnable, not innate.
The people who excel with AI didn't start that way. They developed intuition through practice. Organizations that provide structured opportunities to build that intuition through training, experimentation time, and sharing best practices will develop stronger teams. The key is patience: give people permission to experiment, make mistakes, and learn together.
Second, build quality checks into workflows.
Enterprise AI implementations work differently from consumer tools: they include built-in evaluations (evals) before anything goes to production. Individual users need similar guardrails, such as peer-review processes, source-verification requirements, and clear standards for what "good enough" looks like. The responsibility sits with the team, with peers reviewing each other's AI outputs the same way they'd review any critical deliverable.
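Teams can approximate those guardrails with something as lightweight as a pre-send check in code. The sketch below is hypothetical: the specific checks, filler phrases, and "Reviewed-by" sign-off convention are assumptions for illustration, not an established eval standard.

```python
# A hypothetical pre-send quality gate for AI-drafted documents. The
# specific checks, phrases, and sign-off convention are illustrative
# assumptions, not a production eval suite.
import re

def quality_gate(draft: str) -> list[str]:
    """Return a list of problems; an empty list means the draft can ship."""
    problems = []
    # Source verification: require at least one URL or bracketed citation.
    if not re.search(r"https?://\S+|\[\d+\]", draft):
        problems.append("no sources cited")
    # Volume-over-value check: flag heavy use of stock filler phrases.
    fillers = ("in today's landscape", "it's important to note", "delve into")
    if sum(draft.lower().count(f) for f in fillers) >= 2:
        problems.append("reads like filler; edit before sending")
    # Peer review: nothing ships without a named human reviewer.
    if "Reviewed-by:" not in draft:
        problems.append("missing peer-review sign-off")
    return problems

draft = (
    "AI amplifies skill, not just speed. Source: https://example.com\n"
    "Reviewed-by: J. Doe"
)
print(quality_gate(draft) or "clear to send")
```

The gate doesn't judge quality on its own; it forces the same pause an enterprise eval forces, so that a human looks before anything goes out.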
Third, focus on upskilling, not just deploying tools.
Providing access to AI without teaching people how to use it well is like handing someone car keys without teaching them to drive. The tools are powerful, but judgment still matters. Organizations need to invest in developing that judgment by teaching people how to prompt effectively, when to iterate, and how to verify output.
The opportunity here is significant: People with vision and critical thinking now have tools to execute ideas they couldn't before. A marketer who never learned video editing can create compelling visual content. An analyst who struggles with writing can communicate insights clearly. The constraint was never intelligence—it was technical execution. AI removes that constraint for people who know how to use it.