A Change We Deliberately Kept Manual

This article is part of the Automation Decision Patterns series.

Introduction

The change looked ordinary at first.

It was the kind of operational adjustment that shows up regularly in long-running environments: a configuration shift that had to be applied across several systems, coordinated with multiple teams, and executed during a narrow maintenance window.

On paper, it was a perfect automation candidate.

It was repeatable. It had clear steps. It was already documented. Someone even said the quiet part out loud:

“We should just automate this.”

There was pressure to do exactly that. Not from leadership, not from deadlines—but from habit. By this point in the series, you already know the reflex: if something feels procedural, we reach for automation.

But this story isn’t about a missed opportunity.

It’s about a deliberate choice to stop—and keep the change manual.


Context

This was a production-critical system with a long history.

Not greenfield. Not cloud-native. A layered environment with legacy components, modern services, and years of operational scar tissue. It supported real users and real business processes. Changes here carried weight.

The adjustment itself mattered because it affected how traffic moved through the system. Done correctly, it would improve stability under load. Done incorrectly, it could degrade performance in subtle ways that wouldn’t show up immediately.

Automation was attractive for obvious reasons:

  • The change had to be applied consistently.
  • It involved touching multiple nodes.
  • There was a real risk of human error.
  • Everyone wanted to reduce the operational burden.

It checked all the usual boxes.

And if we had framed the decision narrowly—Is this automatable?—the answer would have been yes.

But that wasn’t the question we ended up asking.


The pause

The pause didn’t come from technical difficulty.

It came from uncertainty.

As we walked through the plan, a few questions surfaced that didn’t have satisfying answers:

  • What would partial failure look like?
  • If something went wrong halfway through, how cleanly could we roll back?
  • Who would own the system state during execution—the automation, or the engineer on call?
  • Would we notice subtle regressions immediately, or hours later?
  • How confident were we that all environments behaved the same way?

None of these were blockers by themselves.

But together, they changed the shape of the decision.

We realized that while the steps were well understood, the system response wasn’t. There were edge cases we had only ever encountered manually. There were historical quirks that lived in people’s heads rather than documentation.

Most importantly, we recognized that automation would compress the timeline of the change, and with it widen the blast radius.

A manual change unfolds at human speed. You make an adjustment, observe, then proceed. Automation removes those natural pauses. It’s efficient—but efficiency also means mistakes propagate faster.
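That adjust-observe-proceed rhythm can also be made explicit in tooling, which is one way to regain the pauses automation removes. A minimal, hypothetical sketch (every name here is illustrative, not drawn from the actual change):

```python
# Hypothetical sketch: a stepwise runner that preserves the manual
# "adjust, observe, proceed" rhythm. Each step pairs an apply action
# with a verify check; if verification fails, the steps already
# applied are rolled back in reverse order.

def run_stepwise(steps):
    """steps: list of (apply, verify, rollback) callables.

    Returns True if every step applied and verified cleanly,
    False if any verification failed and a rollback was performed.
    """
    applied_rollbacks = []
    for apply, verify, rollback in steps:
        apply()
        applied_rollbacks.append(rollback)
        # The deliberate pause: observe before proceeding.
        if not verify():
            for rb in reversed(applied_rollbacks):
                rb()
            return False
    return True
```

The point of the sketch is not the mechanics but the shape: verification is a first-class step between changes, so a failure stops propagation instead of racing ahead of it.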

Ownership came up too.

If this were automated, who would be actively watching it? Who would have their hands on the controls if behavior deviated from expectations? We didn’t want a situation where everyone assumed “the pipeline” was in charge.

That was the conceptual core of the discussion.

Not can we automate this?

But what do we lose if we do?


The decision

We chose to keep the change manual.

Not indefinitely. Not dogmatically. Just for this iteration.

That choice came with trade-offs we acknowledged openly:

  • It meant coordinating people instead of triggering a process.
  • It meant accepting a slower rollout.
  • It meant relying on experienced operators rather than repeatable logic.

But it also avoided risks we weren’t ready to accept:

  • We didn’t have high confidence in reversibility.
  • We didn’t fully understand the system’s behavior under every intermediate state.
  • We weren’t prepared to encode years of operational nuance into something deterministic.

This wasn’t fear.

It was clarity.

Automation would have given us consistency, but at the cost of immediacy and situational awareness. For this change, we valued being present while it happened.

So we planned it carefully, scheduled the window, and executed it manually—with engineers actively observing each phase.


Outcome

The change went smoothly.

But that’s not the interesting part.

What mattered was what the manual approach made possible.

Because we were directly involved, we noticed small behaviors we wouldn’t have captured in logs alone. Timing differences. Load patterns. Minor discrepancies between environments. None of them were incidents—but together, they deepened our understanding of the system.

That insight shaped the next set of decisions.

We adjusted follow-up work based on what we observed. We refined our mental models. We documented edge cases that had never been written down. We clarified ownership boundaries.

And when automation came up again later—because it always does—we approached it with better information.

Keeping this change manual preserved learning.

It also preserved flexibility. We could pause mid-flight. We could adapt in real time. We could respond to what the system was actually doing, not what we assumed it would do.

That wouldn’t have been as easy if everything had been abstracted behind a run button.


Reflection

Throughout this series, we’ve talked about automation as a decision—not an obligation.

This story fits that pattern.

We didn’t reject automation. We deferred it.

We recognized that the system hadn’t yet earned abstraction. There was still operational knowledge being discovered, and encoding it too early would have frozen incomplete understanding into process.

Importantly, this wasn’t a permanent stance.

As systems stabilize, as behaviors become predictable, as ownership becomes clearer—decisions change. What stays manual today may become automated tomorrow.

Maturity isn’t about automating everything.

It’s about knowing when not to.


Conclusion

This article closes the Automation Decision Patterns series with a quiet truth:

Some of the most important automation decisions are decisions not to automate.

Not because automation is risky.

But because understanding is still forming.

Restraint, in this context, isn’t conservatism. It’s respect for complexity. It’s choosing presence over speed. It’s allowing space for systems—and teams—to reveal how they really behave.

Automation earns its place through insight.

And sometimes, the most professional thing you can do is keep your hands on the change a little longer.