What you're inheriting
Most of what teams have heard about AI in operations is about acceleration. Build faster. Ship sooner. Replace the manual process. What they haven't been told is what happens next — when the build is finished, the model is in place, and someone has to keep it running.
That someone is rarely the person who built it. And increasingly, the person who built it couldn't fully explain it either.
We're seeing a pattern in the operations we walk into. Systems that work. Systems that ship. Systems nobody in the room actually understands. Built quickly. Built quietly. Built by a model.
This isn't a piece about AI. It's a piece about what operational decay looks like when the original author wasn't a human.
AI doesn't introduce new categories of operational decay. It accelerates the existing ones.
The same failure modes we've described in other notes — the system nobody fully understands, the integration that became infrastructure, the workaround that became the workflow — all still apply. The mechanism is the same. The shape is the same.
The speed changes, and the legibility. A team that took a year to build a fragile system used to leave behind a year's worth of context. Code someone wrote. Decisions someone made. Choices someone could be asked about. A team that takes a week to build the same system, with a model writing most of it, leaves behind a week's worth of context. The system isn't lighter. The history is.
That's the inheritance problem, and it's the part of the AI conversation almost nobody is having.
Failure mode one: the code nobody wrote
The first place this shows up is in the codebase.
A team uses an AI assistant to build a tool. It works. It ships. The original developer is competent — competent enough to know what to ask for, and to test what comes back. Not always competent enough to read every line.
A year later, the developer has moved on. The tool is still running. Something breaks. The new team opens the code, and it doesn't read like bad code. It reads like code without an author. There are no comments explaining why something was done one way and not another. No patterns the team recognises. No person to ask.
The deeper problem is what the model didn't know to do. A human engineer writing operational code applies a security posture without being asked. Secrets go in environment variables. Inputs are validated. Permissions are scoped down. Not from the brief — from having seen what happens when those defaults are missing. The model writes what the brief asked for. The brief rarely includes what experience adds for free. So the API key ends up hardcoded. The endpoint logs the token. The IAM policy grants everything, because granting everything works. The reasoning the original developer would have brought to the work was never brought, because there wasn't an original developer.
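To make that gap concrete, here is a sketch, with every name, key, and endpoint hypothetical. The first version is the kind of thing the brief produces. The second is the kind of thing experience produces without being asked.

```python
import os
import requests

# What the brief produces: it works, and every default is wrong.
API_KEY = "sk-live-9f2c-example"  # hardcoded secret, now sitting in version control

def fetch_invoice(invoice_id):
    resp = requests.get(
        f"https://api.example.com/invoices/{invoice_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    print(resp.request.headers)  # logs the token along with everything else
    return resp.json()           # no status check, no input validation


# What experience adds for free, without the brief asking for it.
def fetch_invoice_scoped(invoice_id: str) -> dict:
    if not invoice_id.isalnum():                 # validate the input
        raise ValueError("invalid invoice id")
    api_key = os.environ["BILLING_API_KEY"]      # secret from the environment, not the source
    resp = requests.get(
        f"https://api.example.com/invoices/{invoice_id}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()                      # fail loudly; log nothing sensitive
    return resp.json()
```

Neither version is hard to write. The difference is that the second one was never asked for, so it was never written.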
The team is now responsible for a system that was never written down. It was generated.
This isn't a bug in the model. It's a feature of how the work was done. The assumption that documentation and training constitute ownership has always been wrong. The model makes that easier to see, because the original author can't be interviewed and the original reasoning was never written down.
Failure mode two: the integration that is non-deterministic
The second place this shows up is in the seams.
Operations rarely run on one system. They run on the connections between systems. Historically, those connections were APIs: explicit contracts, versioned, documented, observable. When something changed, the contract changed visibly.
A lot of recent AI work has involved replacing those contracts with model calls. A pipeline asks a language model to extract a value from a document. An agent decides which tool to use. A classifier routes a message. These are integrations. They're also non-deterministic.
The dependency doesn't appear on the architecture diagram. The behaviour can shift between model versions, often without anyone noticing until something downstream breaks in a way that doesn't reproduce. Nobody owns the contract because there isn't one. There's a prompt, a model, and an expectation.
It's a familiar shape of problem. A single integration quietly becomes load-bearing, and the rest of the system depends on it without anyone meaning for that to happen. The difference is that this integration is non-deterministic, and the documentation of what it does is approximate at best.
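Sketched out, with a stand-in llm_complete() for whichever model client the team actually uses, the whole integration is often little more than this:

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for whichever hosted model client the pipeline actually calls."""
    raise NotImplementedError  # in the real system, a network call to a model

def extract_invoice_total(document_text: str) -> float:
    # The entire "integration": a prompt, a model, and an expectation.
    prompt = f"Extract the total amount due from this invoice:\n\n{document_text}"
    answer = llm_complete(prompt)   # non-deterministic; behaviour can shift between model versions
    return float(answer)            # the expectation: the reply will parse as a number
```

The float() call is the expectation. The day the answer comes back as a sentence instead of a number, the failure shows up downstream, and it doesn't reproduce.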
Failure mode three: the audit trail that is a probability
The third place this shows up is in the decisions.
Operational systems make choices. Which alert to escalate. Which transaction to flag. Which message to surface. When a human or a deterministic rule made the choice, the reasoning could be reconstructed. There was a record. There was a logic to point to.
When a model makes the choice, the record is the input and the output. The reasoning isn't stored anywhere, because the reasoning isn't a thing that happened. It's an inference about a thing that happened. You can ask the model why, but you're asking after the fact, and the answer is another model output.
For most operations, this doesn't matter on day one. It matters on day three hundred, when something has gone wrong and the question isn't what did the system do but why did it do that, and would it do it again. The audit trail that used to be a log is now a probability.
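What can be recorded is the evidence around the choice: what the model was shown, what it returned, and which model and prompt were in play at the time. A minimal sketch of that record, with illustrative names:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Everything that can actually be stored about a model-made choice."""
    timestamp: str
    model_version: str    # which model the pipeline was pinned to at the time
    prompt_version: str   # which prompt template was in use
    input_payload: str    # what the model was shown
    output: str           # what the model returned
    # There is no 'reasoning' field. That isn't a thing that was stored.

def record_decision(model_version, prompt_version, input_payload, output, log_file):
    rec = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        prompt_version=prompt_version,
        input_payload=input_payload,
        output=output,
    )
    log_file.write(json.dumps(asdict(rec)) + "\n")  # append-only JSONL audit log
```

It isn't the reasoning. It's the closest thing to a log the system can offer.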
The same problem, three times
The three failure modes look different. They're the same problem.
Each one is an operational system built without the structural work that operations require, with the model making that omission easier, faster, and less visible.
It isn't new in kind. We've seen these patterns in human-built systems for years. What's changed is that the conditions that produce them — speed, low cost, partial understanding — now scale. A small team can ship more, faster, with fewer people who understand what they shipped. The orphan system used to be a five-year-old codebase. It's now a five-week-old one.
The fix isn't AI-specific
The diagnostic discipline that handles human-built operational decay handles this.
The questions are the same. Who maintains this when the people who built it are gone. What does this system actually depend on. What happens when something goes wrong and we have to explain it. Where is the contract, where is the logic, where is the record.
The answers, in AI-built systems, are often worse. The work has been compressed, but the work operations needs hasn't. The gap between what was built and what can be owned is wider, because the build is faster and the ownership groundwork is the same.
The teams doing this well aren't the ones who avoided AI. They're the ones who used AI to build, then did the structural work AI didn't do for them. They wrote down what the model couldn't. They put deterministic boundaries around the non-deterministic parts. They turned the prompt into a contract.
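Turning the prompt into a contract can be as plain as a deterministic check at the seam: an explicit schema that every model output has to pass before anything downstream is allowed to depend on it. A minimal sketch, with illustrative field names:

```python
import json

# The contract: the fields downstream code is allowed to depend on, and their types.
REQUIRED_FIELDS = {
    "invoice_id": str,
    "total_due": (int, float),
    "currency": str,
}

def enforce_contract(raw_model_output: str) -> dict:
    """Fail loudly at the boundary instead of mysteriously downstream."""
    data = json.loads(raw_model_output)  # must at least be valid JSON
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"model output missing field: {field!r}")
        if not isinstance(data[field], expected):
            raise TypeError(f"field {field!r} has unexpected type {type(data[field]).__name__}")
    return data
```

The model stays non-deterministic. The boundary doesn't.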
That's the work. It isn't glamorous. It isn't what the AI demos look like. It's what the inheritance looks like, three years on, when a system was built to be owned, not just shipped.
The question we ask
We ask the same question of every system we walk into. Would the people who depend on this be able to keep it running, without us, in three years.
When the system was built by a model, that question gets harder to answer. Not because models build badly. Because the conditions that produce models building badly are now the default conditions for building anything.
If you're inheriting a system the model wrote, the work in front of you is the work in front of any operations team inheriting a system someone else built. The difference is that the someone else is harder to find.
The model wasn't the system. It was the author of a system that now belongs to you.
