Quantifying the Value of Automation

One of the arguments you commonly hear against developer testing is that it’s a waste of time because that time isn’t spent building features. It’s a short-sighted view that fails to consider things like the cost of resolving a production defect or the complexity of building large systems over time.

There’s sometimes a similar misconception about using automation in the development process: the assertion that the time required to build and manage an automated workflow outweighs the benefit it provides. And in many cases, this is true. If you’re running a WordPress blog or you’re a solopreneur with a hobby online business, then automated software delivery should probably be the furthest thing from your mind (you should, however, consider other types of automation). But for larger teams with more complex systems or processes, this story begins to change.

To help illustrate this point, let’s look at just one part of a typical application lifecycle: the build.

Build Automation

Consider the typical web app. You zip up some code, whip up some quick release notes, and hand the whole thing off to operations. Or maybe you’re lucky and just FTP some files directly to production and make a couple of quick database updates using your IDE. Right?

Sounds easy on the surface. But when you break it down into its individual steps, it quickly becomes clear that getting a build out to production is a fairly complex undertaking:

  1. Pull candidate code from version control. Make sure it builds.
  2. Update the README and CHANGELOG documents, if your team subscribes to such nonsense. (No? And why not?)
  3. Run unit tests, static code analysis, code coverage analysis, cyclomatic complexity checks, documentation generators, and so on. Compile the results into some meaningful format for distribution to the team.
  4. If any unit tests failed, attempt to identify the responsible parties and send a nasty email. Shame said parties by CC’ing everyone in the office.
  5. Deploy the application to a production-like test environment and run integration and automated functional tests. Once again, compile the results into some meaningful format for distribution to the team.
  6. If any of these tests failed, send another nasty email. This is probably to the same developer whose unit tests were failing, and this person hates your guts by now.
  7. Package the build, including any required shell scripts (don’t forget to test these also), database scripts, and other artifacts.
  8. Publish the package to the official package store, e.g. Artifactory, Nexus, NuGet, a network share, etc.
  9. Write detailed deployment instructions and a rollback plan. Hopefully you’ve got a good template in place for this, because this shit takes forever.
  10. Notify the team that the build is ready to go, including links to the deployment bundle and release documentation. Wait for the next maintenance window, probably a Friday night or early Saturday morning.
  11. Make final deployment preparations, e.g. manually update any secrets or other configuration data that aren’t committed to version control, back up the current production files and database, etc.
  12. Deploy the build. Depending on how hosting is set up, this may or may not take the production application offline.
  13. Smoke test and run automated functional tests if you have them. If there are any issues, proceed with the rollback plan.
  14. If necessary, notify the testing team to begin manual regression testing of the production application.
  15. If everything’s a go, let the team know that the new release is live and send out any related documents (e.g. changelogs, readmes, etc.). Tag the release and make final VCS updates. Have a beer on the way home if bars are still open.

Fifteen steps, and that’s just the happy path. When things go wrong (and make no mistake, they eventually will), it gets considerably more complicated and the list gets much, much longer. If you’ve ever been caught up in a botched deployment that took all weekend to fix, you know what I mean.
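
Every step in that list that doesn’t require human judgment can be scripted and chained, so a failure at any point stops the line and notifies someone. As a rough illustration, here’s a minimal sketch of a pipeline runner covering the first few steps; the branch name, make targets, archive path, and notification hook are placeholders, not any real project’s configuration:

    import subprocess
    import sys

    # Each step pairs a label with the command that implements it.
    # The branch, build targets, and package path below are placeholders.
    STEPS = [
        ("pull candidate code", ["git", "pull", "--ff-only", "origin", "main"]),
        ("compile", ["make", "build"]),
        ("unit tests", ["make", "test"]),
        ("package", ["tar", "czf", "release.tar.gz", "dist"]),
    ]

    def notify(message):
        # Placeholder: wire this up to email, chat, or whatever the team reads.
        print(f"[notify] {message}")

    def main():
        for label, command in STEPS:
            print(f"--- {label}: {' '.join(command)}")
            result = subprocess.run(command)
            if result.returncode != 0:
                notify(f"Build failed at '{label}' (exit code {result.returncode}).")
                return result.returncode
        notify("Build succeeded; package is ready to publish.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

In practice you’d hand this job to a CI server rather than a hand-rolled script, but the shape is the same: an ordered list of steps, fail fast, tell somebody.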

In any event, let’s roll these tasks up into a handful of buckets and estimate how long it takes to get from source control to production:

Compile and package build, VCS management: 3 hours
Testing and testing-related activities: 3 hours
Documentation, emailing, other communications: 2 hours
Deployment, smoke testing: 4 hours (2 hours x 2 team members)
Manual regression testing: 8 hours (4 hours x 2 team members)

This manual deployment lifecycle is fairly typical for a non-trivial web application, and all told takes around twenty hours to complete. That’s roughly two and a half man days, assuming that the people responsible for the work aren’t interrupted with meetings, conference calls, email, chatty coworkers, cat videos, and so on.
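
If you want to sanity-check that roll-up, it’s simple addition (the buckets and hours are the estimates above):

    # Effort buckets from the estimate above, in hours of people time.
    effort_hours = {
        "compile, package, VCS management": 3,
        "testing and testing-related activities": 3,
        "documentation, emailing, other communications": 2,
        "deployment and smoke testing": 4,   # 2 hours x 2 team members
        "manual regression testing": 8,      # 4 hours x 2 team members
    }

    total = sum(effort_hours.values())
    print(total)        # 20 hours
    print(total / 8)    # 2.5 eight-hour days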

The Real Cost

So what’s the cost? If we estimate conservatively and don’t account for context switching costs, diagnostic time for failed deployments, time for people who are peripherally involved, and opportunity costs for work that team members aren’t doing because they’re entrenched in build stuff, it breaks down somewhere along these lines:

Let’s assume your senior developer/release engineer and senior operations engineer both have a salary of $100,000 per year (actual cost ~$130,000). This equates to roughly $62.50 per hour for each. You’ve got a couple of rock-solid test engineers at $65,000 (actual cost ~$84,500), which works out to roughly $40.63 per hour.

According to the estimates above, this means that the total cost of deploying the build is somewhere in the neighborhood of (12 hours * $62.50) + (8 hours * $40.63) = $1,075.00.
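
Spelled out, the rate and per-deployment math looks like this (2,080 is a standard work-hours-per-year figure; the loaded costs are the salary assumptions above):

    # Loaded hourly rates from the salary figures above (2,080 work hours/year).
    senior_rate = 130000 / 2080   # ~62.50 (dev/release and ops engineers)
    tester_rate = 84500 / 2080    # ~40.63 (test engineers)

    # Hours per deployment, split by who does the work (see the buckets above).
    senior_hours = 3 + 3 + 2 + 4  # build, testing, docs/comms, deployment
    tester_hours = 8              # manual regression testing

    cost_per_deployment = senior_hours * senior_rate + tester_hours * tester_rate
    print(round(cost_per_deployment, 2))   # ~1075.00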

If you’re running two-week sprints and deploying to production once a month, the annual people cost of those deployments is roughly (1,075.00 * 12) = $12,900. Not a fortune, but I bet your lowest-paid developer would jump all over a $13K annual pay increase.

Things get more interesting, however, when you have other production-like environments involved. A client services firm running on the same production build cycle, for example, will probably also have internal QA and external UAT environments. If QA builds go out at the end of every sprint (call it 24 a year) and UAT builds more or less mirror the production schedule, costs will be roughly four times higher: (1,075.00 * 12) + (1,075.00 * 12) + (1,075.00 * 24) = $51,600.00.

In an enterprise, where the useful life of an application can easily reach ten years, using this same model means the total cost of just deploying code starts at around $500,000.00.
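
The projections are just that per-deployment figure multiplied out against the deployment schedules described above:

    cost_per_deployment = 1075.00

    # Production only: one release per month.
    production_per_year = cost_per_deployment * 12                    # ~12,900

    # Production + UAT (mirrors production) + QA (every two-week sprint, ~24/year).
    all_environments_per_year = cost_per_deployment * (12 + 12 + 24)  # ~51,600

    # Over a ten-year application lifetime.
    ten_year_total = all_environments_per_year * 10                   # ~516,000
    print(production_per_year, all_environments_per_year, ten_year_total)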

And this is for one app.

The Real Value

There’s other, more intangible value in build automation too: increased productivity, higher-quality code, decreased downtime, enhanced credibility and customer satisfaction, and so on. But the potential long-term financial payoff should be a major motivator if you’re managing multiple enterprise applications or applications for multiple clients.

It’s worth noting that if you’re running a small shop with a handful of employees, your direct financial upside is going to be considerably smaller, probably a wash after you factor in the costs associated with running and managing the build infrastructure. But the opportunity costs will be hugely magnified. At a startup with one developer, for example, every hour spent on infrastructure or managing builds is an hour not spent working on the product. For a business with a complex software product and a short runway, the consequences here could be grave.

Finally, if you’re working on your boss or the management team, trying to sell them on DevOps, demonstrating concrete value may just be your ticket. Anchor the investment you’ll make against the ongoing hard and intangible costs associated with manual processes (builds, provisioning and managing servers, testing, downtime, etc.) and low velocity (market agility, customer satisfaction, competitive advantages, etc.), and you should have a slam-dunk case.

Good luck!
