Why do many operations teams prefer to deploy applications manually, even where automated deployment tools exist?
I’ve been part of operations teams that deploy manually, but I always felt this was wrong, and I struggled to get the time and resources to implement automated deployment. Yet I’ve also encountered teams that actively insist on manual deployment, even when the development team has provided automated deployment tools for them.
The usual explanation is that manually deploying an application is the only way to know exactly what changes are being made. On the face of it, belief in the reliability and auditability of humans over scripts is silly.
But this comes back to my previous point that DevOps is a confidence game. Ops won’t easily trust a black-box deployment tool handed to them by developers until they’ve had plenty of experience with it.
Even if the deployment tool is written in a script that the Ops folks could theoretically read and understand if they took the time, it’s a non-trivial thing. Sysadmins don’t have loads of time to trawl through some weird developer-oriented scripting language (“what the hell is Ant?”) and test the hell out of a deployment script that they may only use every few months or so.
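Contrast that with a deployment script written in plain shell, the language sysadmins already live in, which an ops person can audit in a few minutes. A minimal sketch of the symlink-switch pattern many such scripts use (the function name and directory layout here are hypothetical, not from any particular tool):

```shell
#!/bin/sh
set -eu

# deploy: hypothetical symlink-switch deploy. Unpacks a release tarball into
# a versioned directory under <app_root>/releases, then repoints the
# <app_root>/current symlink at it so the switch is a single atomic rename.
deploy() {
  app_root="$1"
  tarball="$2"
  version="$(basename "$tarball" .tar.gz)"
  release_dir="$app_root/releases/$version"

  mkdir -p "$release_dir"
  tar -xzf "$tarball" -C "$release_dir"

  # Build the new symlink beside the old one, then rename over it (GNU mv -T),
  # so "current" never points at a half-extracted tree.
  ln -s "$release_dir" "$app_root/current.new"
  mv -T "$app_root/current.new" "$app_root/current"

  echo "deployed $version"
}
```

The point isn’t that this particular script is right for any given team; it’s that a tool this transparent is one ops can read, question, and eventually trust.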
The cases where I have seen deployment tools actually adopted by ops are the ones where ops helped create the tool. Ideally, ops should pair with developers to design, write, and test it. If that isn’t practical, the deployment tool should be developed using agile principles, with the ops team as the product owner. Given that ops people are technical, this needs to go deeper than a normal product-owner relationship: they should be involved in the technical design and even code review.
If the ops team is intimately involved in the specification, design, and implementation of the deployment tool, then they will have the confidence that they understand the tool thoroughly enough to use it in environments where failure may mean getting out of bed at 3 AM to fix it.