You've used Chef or Puppet to automate your infrastructure: you can provision your virtual environment from scratch and deploy all your applications in minutes. It's magical. You've achieved Configuration Management Nirvana. What you've built is repeatable, saves time, increases efficiency and removes human error.
What you've built is awesome, but it's not bulletproof.
When you write your Chef recipes or Puppet manifests you are coding, and this code is guaranteed to have defects throughout its lifetime.
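And because it is code, it can be exercised like code. As a minimal sketch (the helper and its inputs are hypothetical, standing in for a Chef template or Puppet ERB template rather than taken from any real codebase), even a trivial config renderer is worth a unit check, because a one-character slip in it ships identically to every node:

```python
def render_ntp_conf(servers):
    """Render an ntp.conf fragment for the given server list.

    Hypothetical helper standing in for a Chef/Puppet template --
    the kind of code where a typo like "sever" for "server" would
    be deployed to the entire fleet at once.
    """
    return "\n".join("server {0} iburst".format(s) for s in servers)

# A unit check catches the classic slip before it reaches any node.
fragment = render_ntp_conf(["0.pool.ntp.org", "1.pool.ntp.org"])
assert "server 0.pool.ntp.org iburst" in fragment
```

The point is not this particular helper; it's that configuration code is testable with the same tools as any other code.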
If you're automating the deployment of a small system, you can get away with manual testing; but as the system grows you'll begin to cut corners and only test what you can fit in before the next release.
You've automated your deployment, not your testing. Errors in your automation scripts have the same impact as errors made during manual deploys. Worse still, they will be repeated again and again.
It’s time to take a closer look at how you test your configurations. Repeat after me, “Automated does not mean tested.”
The ironic thing is that you’ve done the hard work required to implement automated testing already. You’ve gathered the requirements for your system. You’ve documented them in such detail that you have the confidence to automate your deploys based on them. Fantastic! So let’s take a step back.
In information technology we love to talk about maturity models. If I were to map one out for the path from a completely manually managed infrastructure to a fully automated one, I'd include three distinct levels.
At the first level, your deployments are done manually and all testing is manual. You may have some monitoring in place, but it is not analysing the configuration of your systems.
At the second level, you are still deploying manually, but you have implemented automated testing for your configurations. You capture your configuration information and verify it continually and automatically, particularly before and after change. You still suffer the efficiency losses of manual deploys, but your automated tests give you confidence that any errors introduced will be picked up.
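In practice this can start very small. As a sketch (the file format and expected settings are illustrative, not tied to any particular tool), a verifier that compares the settings a file actually contains against the settings you expect can run before and after every change:

```python
def verify_settings(config_text, expected):
    """Compare 'key value' lines in a config file against expected values.

    Returns a list of human-readable failures; an empty list means the
    configuration matches. Illustrative only -- real tools add remote
    execution, scheduling and reporting on top of checks like this.
    """
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            actual[parts[0]] = parts[1]
    failures = []
    for key, want in expected.items():
        got = actual.get(key)
        if got != want:
            failures.append("{0}: expected {1!r}, got {2!r}".format(key, want, got))
    return failures

# Example against an sshd_config-style file (contents are illustrative).
sample = "# managed file\nPermitRootLogin no\nPasswordAuthentication no\n"
assert verify_settings(sample, {"PermitRootLogin": "no"}) == []
```

Run before a change, it confirms your baseline; run after, it confirms the change did only what you intended.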
At the third and final level, you take the information you have been using to test your configuration and use it to automate your deployments. All the efficiency and quality gains you'd expect from automation are now available to you. Better still, because you continue to maintain the automated tests of your configuration, you remain insured against errors: those introduced as part of automation as well as those introduced by manual tinkering.
From what I've seen, people have been skipping the second level. They've been so keen for the gains on offer from automated deployment that they've missed entirely the need for automated testing. They've also taken a harder road.
Any developer worth their salt these days can tell you the benefits of Test-Driven Development, of writing a test for a desired outcome before you code the functionality that will provide it. At a high level, those benefits are twofold.
Firstly, you are immediately focussed on the outcome you require.
Secondly, the test you put in place will live forever, providing you with a readily available insurance policy against bugs introduced through later change.
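The same discipline translates directly to infrastructure code. As a hedged sketch (the function and its return shape are hypothetical, standing in for a Chef user resource or Puppet user type), the test states the desired outcome first, and the provisioning code exists only to satisfy it:

```python
# Written first: the desired outcome as an executable statement.
# It lives on as an insurance policy against later change.
def test_web_user_is_locked_down():
    user = provision_web_user()
    assert user["name"] == "www-data"
    assert user["shell"] == "/usr/sbin/nologin"  # no interactive login

# Written second: just enough code to make the test pass.
def provision_web_user():
    """Hypothetical stand-in for a Chef/Puppet user resource."""
    return {"name": "www-data", "shell": "/usr/sbin/nologin"}

test_web_user_is_locked_down()
```

If a later change quietly re-enables a login shell for that user, this test fails immediately instead of the mistake surfacing in production.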
By skipping this step, a large majority of the proponents of Infrastructure as Code are missing these two key benefits. They lose focus on the desired configuration state, and they put no protection in place against future bugs.
Tests may not be fun, but they add tremendous value and represent an investment in quality. So remember, automation does not equal quality. Bulletproof your deployments with automated tests. Give UpGuard a shot (it's free to start) or contact us for a guided demo. Our charming experts would be more than happy to show you how easy it can be.