One of the arguments you commonly hear against developer testing is that it’s a waste of time because that time is spent not building features. It’s a short-sighted view that fails to consider things like the cost of resolving a production defect or the complexity of building large systems over time.
There’s sometimes a similar misconception about using automation in the development process: the assertion that the time required to build and manage an automated workflow outweighs the benefit it provides. And in many cases, this is true. If you’re running a WordPress blog or you’re a solopreneur with a hobby online business, then automated software delivery should probably be the furthest thing from your mind (you should, however, consider other types of automation). But for larger teams with more complex systems or processes, this story begins to change.
To help illustrate this point, let’s look at just one part of a typical application lifecycle: the build.
A decade ago, development teams were falling all over themselves trying to implement Scrum. Iterative software development wasn’t a new concept, but Jeff Sutherland and Ken Schwaber had formalized a framework whose main tenet was the rapid development of small-batch features with a concrete feedback loop.
As news spread about the early success of some high-velocity Scrum teams, stakeholders on troubled development teams began to see Scrum as a silver bullet. These folks misinterpreted Scrum’s acceptance of change and lean approach to requirements as an open door for scope creep and a total pass on documentation. Many of those projects failed, forty-five-minute daily standups notwithstanding.
Fast-forward to today, and software delivery is in a similar place. Fierce competition is pushing the pace of release cycles. Widespread adoption of the cloud has changed both the shape of software and how infrastructure is managed. An ecosystem of supporting tools continues to evolve in rather spectacular fashion. Stakeholders are taking note.