The Technical Debt Machine: How to Manage the Risk of AI Slop
AI will do whatever you ask it to do. That's both its strength and its weakness.
Ask it to build a feature, and it will. Ask it to write a database query, and it will. Ask it to design a user flow, and it will. It won't question whether the feature solves the right problem. It won't warn you that the query will create performance issues at scale. It won't point out that the user flow introduces friction that will kill conversion.
AI is the perfect yes-man. And yes-men are how organizations accumulate debt.
In the pre-AI era, this debt accumulated slowly. Junior employees made mistakes, but they made them slowly enough that senior people could catch them. Code reviews caught architectural problems. Design critiques flagged usability issues. Product reviews questioned whether features actually solved customer problems.
AI has removed those brakes. You can now accumulate a year's worth of technical and product debt in a single afternoon.
The root cause is context. Experienced professionals carry years of pattern recognition plus deep knowledge of your specific product, including what's been tried, what failed and why certain decisions were made.
AI has the sum of human knowledge, but it knows nothing about your project. It doesn't know about the weird edge case your support team sees every Tuesday, the architectural decision you made eighteen months ago, or the user segment you're actually solving for.
This gap produces what we call "slop": output that looks finished and sounds plausible but contains debt that will slow down future work. The code works, but violates your patterns. The design looks polished, but contradicts your system. The feature functions, but solves a problem your users don't have.
The solution isn't to wait for better AI (which will come). It's to give AI what it's missing: principles.
The Principles Paradox
Every team agrees that principles matter. Few teams actually maintain them.
You've seen the artifacts. The engineering team's architecture decision records that haven't been updated in eighteen months. The design system documentation that describes components you deprecated two releases ago. The product principles deck that was presented once at an offsite and never referenced again.
These documents exist because someone, at some point, believed they were important. They fell into disrepair because, in practice, the team could function without them. The principles lived in people's heads. They were transmitted through code review comments, design critiques, and hallway conversations. The documentation was a nice-to-have, not a necessity.
This worked. Not perfectly, but well enough. The senior engineer who'd been there four years knew why you didn't use that ORM pattern. The designer who'd run the usability studies knew why modals were reserved for destructive actions. The PM who'd talked to a hundred customers knew which problems were worth solving.
The principles were real. They just weren't written down.
Why Principles Are Now Mandatory
AI doesn't absorb context. It doesn't attend standups. It doesn't overhear the conversation about why the last redesign failed. It doesn't remember the architectural decision you explained three sessions ago.
Every AI session starts from zero. The context window - the working memory where AI holds information about your project - resets with each conversation. Whatever principles aren't explicitly provided don't exist.
Why can’t AIs learn?
They have a forgetting problem, just like humans: when new knowledge comes in, they can arbitrarily overwrite old information. This is an area of active research - memory features are being added to models - but these remain limited in scope for now.
This changes the economics of documentation entirely.
When humans were doing the work, undocumented principles created friction: slower onboarding, occasional mistakes, knowledge silos. Annoying, but manageable.
When AI is doing the work, undocumented principles create chaos. Every gap becomes a guess. Every missing constraint becomes an opportunity for debt. The AI will confidently build something that violates your patterns because it doesn't know your patterns exist.
The teams that thrive with AI aren't the ones with the best prompts. They're the ones who've done the unglamorous work of writing down what they believe - and keeping it current.
Architectural Principles
Good architecture has always meant modularity: small, self-contained units with clear boundaries and explicit dependencies. This mattered for maintainability and testing. Now it matters for AI.
When a module is self-contained, AI can understand it completely within its context window. When code sprawls across dozens of files with implicit dependencies, AI guesses at connections it can't see. Modularity isn't just good engineering anymore; it's a context management strategy.
But modularity and event-driven patterns are only one part of designing scalable systems. Every piece of code also needs to follow clean code principles. Luckily, most AI coding tools now support rules files, such as AGENTS.md, that are loaded into every conversation.
The instinct is to fill these files with documentation. Resist it.
Rules files should contain constraints, not comprehensive guides. AI doesn't need to understand your entire architecture. It needs to know what not to do and what patterns to follow. Think of it as guardrails, not a map.
Effective architectural principles are brief and specific.
"Use the repository pattern for all database access."
"Never import from the /legacy folder."
"All API endpoints require authentication middleware."
"Prefer composition over inheritance."
These aren't explanations - they're rules.
Anti-patterns often matter more than patterns. AI will find reasonable-looking solutions that happen to violate your conventions. Telling it what to avoid prevents the most common mistakes.
"Don't use raw SQL queries outside the data layer."
"Never store credentials in code."
"Avoid adding new dependencies without checking package.json first."
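To make rules like these concrete, here is a minimal TypeScript sketch - all names hypothetical - of what following the repository rule looks like in practice: callers depend on an interface, and only the repository layer touches storage.

```typescript
// Hypothetical sketch of "use the repository pattern for all database access".

interface User {
  id: number;
  email: string;
}

// The repository is the only layer allowed to touch storage.
// Callers depend on this interface, never on SQL or an ORM directly.
interface UserRepository {
  findById(id: number): User | undefined;
  save(user: User): void;
}

// An in-memory implementation; a real one would wrap your database client.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<number, User>();

  findById(id: number): User | undefined {
    return this.users.get(id);
  }

  save(user: User): void {
    this.users.set(user.id, user);
  }
}

// Application code receives the repository and never writes raw SQL,
// which keeps the "no raw SQL outside the data layer" rule enforceable.
function registerUser(repo: UserRepository, id: number, email: string): User {
  const user = { id, email };
  repo.save(user);
  return user;
}

const repo = new InMemoryUserRepository();
registerUser(repo, 1, "ada@example.com");
```

A rule like this is cheap to state and cheap for the AI to follow; the alternative - explaining your whole data layer - would blow the context budget.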
Link to detail rather than embedding it. If AI needs deeper context for a specific task, point it to the relevant documentation.
"For authentication patterns, see /docs/auth-patterns.md."
This keeps the rules file focused while making detailed guidance available on demand.
Key Tip: Keep the rules file short.
Research suggests that as instruction count increases, instruction-following quality decreases uniformly. A file with hundreds of guidelines will be partially ignored. A file with twenty essential constraints will be followed.
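Putting these guidelines together - constraints, anti-patterns, links to detail, and brevity - a rules file might look like the following sketch. All paths and rules are hypothetical; adapt them to your own codebase.

```markdown
# AGENTS.md (hypothetical example)

## Rules
- Use the repository pattern for all database access.
- Never import from the /legacy folder.
- All API endpoints require authentication middleware.
- Prefer composition over inheritance.

## Anti-patterns
- Don't use raw SQL queries outside the data layer.
- Never store credentials in code.
- Avoid adding new dependencies without checking package.json first.

## Deeper context
- Authentication patterns: see /docs/auth-patterns.md
```

The whole file fits in a screen. That is the point.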
Product Principles
AI can build what you describe. It cannot infer what you haven't said.
This makes explicit product principles non-negotiable. The strategy in your head, the user needs that emerged from research, the priorities that everyone "just knows" - all of it is invisible to the AI, which leads it to optimise for the wrong things.
A product principles document should answer the questions AI can't figure out from code alone. What is this product? Who is it for? What does it value? What trade-offs does it make deliberately?
"We optimise for simplicity over power-user features."
"We never interrupt the core workflow with upsells."
"Speed is a feature - every interaction should feel instant."
These statements guide hundreds of micro-decisions. They're the difference between AI building something that technically works and AI building something that fits your product.
Decision logs capture context that would otherwise be lost. When you make significant product choices, write down what you decided and why.
"We chose not to support real-time collaboration because our research showed users primarily work alone."
Without this, AI might enthusiastically build collaborative features that contradict your strategy.
User research needs to be compressed into portable artifacts. Research typically lives in sprawling documents that AI will never see. Distill it into one-page personas, jobs-to-be-done statements, and lists of known pain points. These travel with your prompts. They're the difference between AI building what users need and AI building what sounds plausible.
Design Principles
A design system has always been valuable. Now it's mandatory.
Without documented design principles, every AI-generated interface is a fresh invention. It might be good. It might even be better than what you have. But it won't be consistent with what you've already built. Multiply this across dozens of AI-assisted sessions and you get a product that feels like it was designed by a committee that never met.
Written design principles guide AI toward your patterns even when building novel interfaces.
"We use progressive disclosure - show simple options first, reveal complexity on demand."
"Error states explain what went wrong and what to do next."
"We prefer inline validation over form-level validation."
Component documentation needs usage guidelines, not just specifications. AI knows what a modal is. It doesn't know when your product uses modals versus slide-overs versus inline expansion.
"Use Modal for confirmations requiring immediate decisions. Use SlideOver for forms that might take time. Use Toast for non-blocking feedback."
Without this, AI picks whatever seems reasonable, inconsistently.
Document your patterns for common states. How does your product handle empty states? Loading states? Error recovery? Onboarding? If these patterns aren't written down, AI reinvents them each time, differently.
The Unexpected Payoff
Teams that document principles for AI discover something surprising: the documentation works even better for humans.
New engineers onboard faster because the principles that were previously trapped in senior heads are now accessible. Disagreements resolve more quickly because there's a reference point beyond "I think" and "I feel." Code reviews become more focused because reviewers can point to documented patterns rather than explaining from scratch.
The modularity that helps AI also helps your team. Self-contained modules are easier to understand, test, and maintain for humans too. Bounded product documents are easier to keep current than sprawling specs. Component-based design systems were always better; now there's no excuse not to build them.
The work feels tedious in the moment. Writing down principles you already know seems redundant. Maintaining documentation feels like overhead. But the forcing function of AI produces artifacts that serve the entire team. It demands the rigour we always should have had, and it punishes the shortcuts we'd been getting away with.
AI will keep improving. Context windows will grow. But the discipline of explicit principles won't become obsolete. It will remain the difference between teams that ship coherent products and teams that accumulate debt.
Write down your principles. For the AI, yes. But also for yourselves.
This is the fourth article in a series exploring how AI is reshaping career progression. The next article examines how to manage the shift from a focus on outputs to a focus on outcomes. If you want to go deeper, check out our free ebook: Managing your Career in the Age of AI.