Sometimes you should fix what ain’t broke

DevOps has been with us long enough now that there’s a lot of legacy DevOps tooling running in every place I have eyes on. Setting aside the items that don’t actually work – and the parts that pose a serious security threat because they harbor vulnerabilities that were patched long ago – there’s a lot of “it works,” pipelines left in place, and even a bit of “we’ve always done it this way.”

Considering that marketers still often tout DevOps as “the inevitable new trend,” I find this interesting (the fact that DevOps is now the de facto standard in business is a topic for another day). By “it works” I mean things that run quietly, relieving you of the need – or the desire – to think about them while you focus on issues that genuinely demand your attention.

But this kind of thing is exactly what created the need for DevOps in the first place. DevOps arrived like a wrecking ball and forced you to look at the unnecessary steps that had piled up around elements of the organization, culture, and infrastructure – often defended by those very “this is how we do it” and “it ain’t broke, so let’s look at this other thing instead” attitudes.

At Ingrained Tech, we have scripts that run our cloud backups. They are part of a fully automated process; we designed them so that our system knows what is backed up and when, and so that they are highly adaptable. The system works like a champ every night unless we tell it not to.
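The pattern described above – an automated nightly run that records what was backed up and when, and that can be told to stand down – can be sketched roughly as follows. This is a minimal illustration, not our actual scripts: the flag-file paths, the manifest format, and the injected `upload` adapter are all hypothetical names invented for the example.

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def run_backup(paths, upload,
               skip_flag=Path("/var/run/backup_skip"),          # hypothetical "don't run tonight" flag
               manifest_path=Path("/var/backups/manifest.json")):  # hypothetical record of what/when
    """Back up each path unless the skip flag exists; record what was backed up and when."""
    if skip_flag.exists():
        return None  # an operator told tonight's run to stand down
    manifest = []
    for p in paths:
        upload(p)  # thin adapter around the storage provider's API
        manifest.append({
            "path": str(p),
            "backed_up_at": datetime.now(timezone.utc).isoformat(),
        })
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest
```

Passing `upload` in as a parameter is one way to keep such scripts adaptable: swapping storage providers, or moving to a newer API version, means swapping one adapter rather than rewriting the whole routine.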

Our cloud storage provider has come a long way since we first used their API and wrote these scripts. In fact, customers like us are the reason the vendor maintains backward compatibility with those old APIs. Which means it’s time for us to review our scripts, move to the latest APIs, and see which of the many new features we want to take advantage of. “Good enough” is just that. The system is not broken – we get viable backups (which we needed not so long ago, so we know they are good), but we don’t get the most out of them; we had to jump through a few hoops to restore the data the way we wanted.
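The backward compatibility mentioned above usually works through explicit API version pinning: old clients keep asking for the version they were written against, and the vendor keeps honoring it. A sketch of the idea, with an entirely hypothetical endpoint, header name, and version strings (real providers pin versions in the URL path or in a header of their own):

```python
import urllib.request

# Hypothetical endpoint, header, and version strings for illustration only.
BASE_URL = "https://storage.example.com"
PINNED_VERSION = "2015-07-01"   # the version the original scripts were written against
LATEST_VERSION = "2024-06-01"   # the version the vendor documents today

def build_request(path, api_version=PINNED_VERSION):
    """Build a request pinned to an explicit API version.

    Because the pin is explicit, the old scripts keep working untouched.
    A review means consciously changing the pin and re-testing, rather
    than silently riding whatever the server's default happens to be.
    """
    req = urllib.request.Request(BASE_URL + path)
    req.add_header("x-api-version", api_version)
    return req
```

Reviewing “good enough” scripts then becomes a deliberate act: bump the pinned version, run the test restores, and only then adopt the new features.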

Our simple backup routines pale in comparison to similar “good enough” arrangements I’ve seen at large companies – arrangements that create increasingly severe slowdowns, sometimes recreating the exact environment that made the huge gains of early DevOps implementations inevitable.

Take a critical look at your infrastructure and toolchains. Look for low-hanging fruit and clear out some of the baggage that has accumulated. Schedule process-refresh days, where one or two team members look at everything in the environment with a critical eye and see what they come up with.

And keep rocking it. Speed bumps or not, you create the applications and infrastructure that keep the whole business running. Remember that, and keep smiling as you work through recurring issues.

About Thomas Brown
