Image generated using GPT-5 on September 23rd, 2025.
Had a realization during yesterday’s code review that made me laugh, then think, then worry a bit. I wasn’t actually reviewing code anymore; I was tasting it. Checking for that distinctive LLM aftertaste, the multiply-nested containers that scream “Copilot was here,” the confident-but-wrong comments that feel machine-generated. I’d become what I’m calling an AI Slop Connoisseur.

The term “AI slop” has exploded across tech circles this year, and for good reason. It’s the perfect descriptor for the high-volume, low-effort generative content flooding our feeds, codebases, and knowledge systems. But here’s what’s fascinating: we’re not just drowning in slop; we’re developing sophisticated palates for detecting, grading, and rehabilitating it.

The Slop Taxonomy We’re All Learning

Every platform has its signature flavor of slop now. On social media, it’s those surreal “cat soap operas” and “space baby” videos engineered to hack recommendation algorithms (The Guardian). In search results, it’s the confidently wrong AI summaries: remember when Google’s AI Overview suggested putting glue on pizza? (Forbes) The publishing world is swimming in AI-generated books, to the point that Amazon KDP now requires authors to disclose them (Amazon). Heck, even this article is AI slop; it’s just chunky slop, with some nuggets of my actual insight mixed in.

But the enterprise slop hits differently. It’s more insidious because it looks productive.

The Vibe-Spec PRD Problem

You know the type: that beautifully formatted product requirements document that reads like butter but has zero citations, no user interviews, and assumptions that dissolve under scrutiny. It’s what I call a “vibe-spec”: all narrative, no grounding. These documents behave exactly like technical debt, except the debt isn’t in code; it’s knowledge debt, and it compounds over time.

The parallels to documented failures are striking. Remember the 2023 court sanctions for lawyers who submitted fake ChatGPT citations? (Reuters) Or the Chicago Sun-Times’ AI-generated reading list of books that don’t exist? (The Guardian) These aren’t edge cases; they’re patterns.

Code That Smells Like ChatGPT

The research is backing up what we’re all feeling. Stanford researchers found that developers working with an AI assistant wrote less secure code than those working without one, unless they explicitly constrained the model (arXiv). Veracode’s 2025 audit found security flaws in nearly half of the AI-generated code it examined (TechRadar). And there are telltale code smells, the multiply-nested containers, duplicated logic, and missing validation that experienced developers can spot from across the room (arXiv).

What’s wild is how this changes our job descriptions. We’ve gone from being developers to being code sommeliers, swirling pull requests in our mental wine glasses, checking for notes of “was this tested?” and hints of “did anyone actually think about edge cases?”
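To make the tasting notes concrete, here’s a hypothetical snippet in exactly that shape. It isn’t from any real PR, and the function name and data layout are invented for illustration; the smells are the point:

    # A hypothetical example of common LLM code smells (not from a real PR).
    def summarize_orders(orders):
        # Smell 1: multiply-nested containers with no schema.
        # Assumed shape: {region: {customer: [{"amount": ..., "status": ...}]}}
        totals = {}
        for region, customers in orders.items():
            for customer, items in customers.items():
                for item in items:
                    # Smell 2: missing validation. A missing key or a string
                    # amount blows up three loops deep, at runtime.
                    if item["status"] == "paid":
                        totals.setdefault(region, {}).setdefault(customer, 0)
                        totals[region][customer] += item["amount"]
                    # Smell 3: duplicated logic. The refund branch is a
                    # copy-paste of the paid branch with the sign flipped.
                    if item["status"] == "refunded":
                        totals.setdefault(region, {}).setdefault(customer, 0)
                        totals[region][customer] -= item["amount"]
        return totals

Nothing in there is wrong enough to fail CI, which is exactly the problem.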

The New Knowledge Work: Slop Curation

Here’s the uncomfortable truth: AI slop is tech debt with a marketing department. It promises velocity but delivers maintenance burden. Every vibe-spec PRD needs grounding later. Every smell in the codebase needs refactoring. Every uncited claim needs verification.

The role shift is real. Engineers aren’t just writing code; they’re editing bot-authored diffs. Product managers aren’t just defining requirements; they’re reverse-engineering assumptions from AI drafts. Data analysts aren’t just running queries; they’re validating whether that impressive-looking chart actually maps to reality.

And the kicker? Sometimes we’re curating slop from past-us. That feature branch you had Claude write last month? Now you’re the one untangling its logic, adding the validation it missed, fixing the security patterns it ignored. We’re becoming critics of our own automation.
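What does that curation pass actually produce? Here’s the same hypothetical function from the sketch above after a human edit: validation instead of bare dict lookups, one code path instead of two copy-pasted branches, and failures that name the offending record:

    # The same hypothetical function after a human curation pass.
    # Signed multipliers replace the two copy-pasted status branches.
    STATUS_SIGN = {"paid": 1, "refunded": -1}

    def summarize_orders(orders):
        totals = {}
        for region, customers in orders.items():
            for customer, items in customers.items():
                for item in items:
                    status = item.get("status")
                    if status not in STATUS_SIGN:
                        continue  # statuses that don't affect totals
                    amount = item.get("amount")
                    if not isinstance(amount, (int, float)):
                        raise ValueError(
                            f"non-numeric amount for {customer} in {region}: {amount!r}"
                        )
                    key = (region, customer)
                    totals[key] = totals.get(key, 0) + STATUS_SIGN[status] * amount
        return totals

Roughly the same line count. The bot wrote it fast; the human pass is what made it true.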

The Connoisseurship Culture Emerging

Something interesting is happening in response. Teams are developing implicit norms around “good stock”—artifacts with traceable claims, testable behavior, secure defaults. Even without formal policies, we’re collectively learning to taste the difference. I’ve started seeing developers share “slop patterns” in Slack like wine tasting notes:
  • “This PR has strong Copilot bouquet with hints of untested edge cases”
  • “Getting definite ChatGPT vibes from this documentation—smooth read, zero citations”
  • “Classic Claude architecture here—works perfectly for the happy path, falls apart at scale”

Two Slop Vignettes From Last Week

The Sprint Review Special. A PM presents a crisp narrative about our new feature. Beautiful slides, confident bullets, trendy references to “agentic workflows” and “compound AI systems.” The engineers start asking questions: Where’s the user research? What are the acceptance criteria? Can we see the data behind that growth projection? Turns out the entire deck was ChatGPT-generated from a two-line prompt. We spent the next three sprints doing the actual discovery work that should have happened first. Classic knowledge debt compounding.

The Midnight Vulnerability. Security alert at 2 AM. A pattern common in LLM-generated code had introduced a vulnerability in our auth flow. The branch had passed tests and looked clean in review. But three weeks later, we’re doing emergency patches and explaining to leadership how “AI-accelerated development” led to a potential data breach. The cleanup took longer than writing it properly would have. Security debt, with interest.
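I won’t reproduce the actual diff, but here’s a stand-in with the same flavor. It’s one pattern that genuinely recurs in generated auth code and sails straight through happy-path tests: decoding a JWT with signature verification switched off. (PyJWT syntax; the handler names and key handling are invented for the sketch.)

    import jwt  # PyJWT; dependency assumed for this sketch

    SECRET_KEY = "load-this-from-a-secrets-manager"  # hypothetical placeholder

    # The smell: verification disabled "to make the tests pass."
    # Every test that feeds in a well-formed token still goes green.
    def current_user_unsafe(token):
        claims = jwt.decode(token, options={"verify_signature": False})
        return claims["sub"]

    # The fix: verify the signature and pin the accepted algorithm.
    # A forged token now raises jwt.InvalidSignatureError instead of
    # quietly authenticating whoever minted it.
    def current_user(token):
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return claims["sub"]

The unsafe version is one autocomplete away from the safe one, and nothing in a green test suite tells you which one you shipped.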

The Reality of Our AI-Assisted Future

Look, I’m not anti-AI. These tools are incredible when used thoughtfully. But we need to be honest about what’s happening: we’re all becoming professional slop detectors, spending increasing cycles on quality control for machine-generated artifacts. The real skill isn’t using AI tools; it’s knowing when their output is Actually Good™ versus when it’s just well-formatted technical debt. It’s developing that palate, that instinct for smelling the slop before it ships.

As one engineer put it in our retro last week: “I feel like a restaurant critic who only eats at AI-generated restaurants. Everything tastes vaguely the same, nothing has soul, and I spend most of my time sending dishes back to the kitchen.”

What This Means For Teams

The enterprises that will thrive aren’t the ones generating the most content; they’re the ones with the best slop detection and rehabilitation systems. The competitive advantage isn’t in producing more; it’s in knowing what’s worth keeping.

We’re entering an era where curation skills matter more than creation skills. Where the ability to taste-test AI output for quality, security, and maintainability becomes a core competency. Where “AI Slop Connoisseur” might actually belong on your LinkedIn.

The slop isn’t going away. If anything, it’s accelerating; one ad-tech analysis found a 7x year-over-year jump in AI slop domains (Digiday). But maybe that’s okay. Maybe becoming connoisseurs is exactly the adaptation we need. After all, sommeliers exist because wine got complicated enough to need experts. Perhaps AI Slop Connoisseurs are just the natural evolution of knowledge work in the age of generated everything.

Just remember: that sweet-tasting vibe-spec might pair well with this sprint’s velocity metrics, but the aftertaste of tech debt lingers for quarters.

References

  • How engagement-farm videos exploit recommendation systems with surreal AI content. (The Guardian)
  • Google’s AI Overview suggesting glue on pizza—when automated answers launder misinformation. (Forbes)
  • Amazon’s disclosure requirements for AI-generated books on Kindle Direct Publishing. (Amazon)
  • Court sanctions for lawyers submitting fake ChatGPT-generated case citations. (Reuters)
  • Chicago Sun-Times’ AI-generated reading list featuring non-existent books. (The Guardian)
  • Stanford study on correlation between AI coding assistants and insecure code patterns. (arXiv)
  • Veracode audit finding security flaws in nearly half of AI-generated code. (TechRadar)
  • Research on characteristic code smells in LLM-generated code. (arXiv)
  • Ad-tech analysis showing 7x year-over-year increase in AI slop domains. (Digiday)