Wednesday, December 24, 2008

Developing Applications More Efficiently

Your boss walks up to you and says, "We need to be more efficient." This seems reasonable; efficiency lowers costs, and assuming you don't sacrifice quality, who in their right mind would argue that it is a bad thing? But now think about it a little. How can you improve efficiency without being able to measure it? And how do you go about measuring efficiency when it comes to building software? That is what I want to discuss today.

The opening sentence of Tom DeMarco's Controlling Software Projects says, "You can't control what you can't measure." Robert L. Glass counters this in Facts and Fallacies of Software Engineering, arguing that it is a fallacy. Both are well-respected practitioners and authors in the field. So who is right?

Digging deeper into metrics, you find the word "subjective" used a lot. DeMarco, in his book The Deadline, even says, "don't sweat the units - while you're waiting to achieve objective metrifications, use subjective units". And then there is the cost of gathering these metrics. Robert Glass notes in his book that studies at NASA-Goddard found the cost of gathering and processing metrics to be 7 to 9 percent of total project cost. Even though there are those who say that metrics provide a valuable benefit, from the outside it feels like they are losing efficiency in order to measure a subjective value.

I am not dismissing the value of metrics, but I am arguing that they might not always be the best approach. For small teams working on small projects, it may be difficult or impossible to define a "function point", one of the popular metrics used for analysis. Perhaps there is an alternative. What if you measured efficiency by the perceived value of changes in procedure? It is still subjective, but it does not carry the overhead of other methods.

I think it may help to understand my thinking if I provide a concrete example. Let's say that most of the work you do is small browser-based applications, and each of them is constructed in a similar manner. They typically all use Hibernate and Spring, use the same security framework, always use Velocity for generating emails, use JNDI, use external properties for configuration, and share a dozen other similarities. Now let's say that you do this often enough to know that it takes about 45 minutes to set up a new project, and that you will probably build 20 or so applications like this in the next year. So we can reasonably estimate 15 hours per year spent on setup, and that number is much higher if you also need to teach others how to do the setup. Based on this, if you spend a couple of hours building a prototype project, or perhaps a couple of them, you have become more efficient.
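
To make that concrete, here is a minimal sketch of the kind of startup wiring such a prototype project could capture once, instead of being rewritten for every new application. The "mailService" bean, the config.properties file, and the applicationContext.xml layout are hypothetical placeholders; ClassPathXmlApplicationContext and java.util.Properties are standard Spring and JDK classes.

import java.io.FileInputStream;
import java.util.Properties;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Sketch of the shared startup code a prototype project would provide.
public class ProjectBootstrap {

    public static void main(String[] args) throws Exception {
        // Load external properties for per-environment configuration.
        Properties config = new Properties();
        config.load(new FileInputStream("config.properties")); // hypothetical file

        // Spring context that wires Hibernate, the security framework,
        // Velocity-based mailing, and the other shared pieces.
        ApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // "mailService" is a hypothetical bean defined in the shared template.
        Object mailService = context.getBean("mailService");
        System.out.println("Loaded " + config.size()
                + " settings, mail service: " + mailService);
    }
}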

Obviously a savings of 15 hours per year isn't spectacular, but that is one small change. What if you could make a small change like this every week? Fifty-two changes at roughly 15 hours each, and all of a sudden we are talking about nearly 800 hours per year, or roughly five months of developer time. Again, we are talking about a small team, so this is considerable. And your small team didn't get any smaller, so the effect compounds: the team can now complete more projects.
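
As a back-of-the-envelope check on that arithmetic, here is a small sketch that works out the numbers. The 45-minute setup, 20 projects per year, one improvement per week, and a roughly 160-hour developer-month are the assumptions from the example above, not measured data.

// Rough arithmetic behind the "nearly 800 hours" figure; the inputs are
// assumptions carried over from the example, not measured values.
public class SavingsEstimate {

    public static void main(String[] args) {
        double hoursSavedPerImprovement = 45 * 20 / 60.0; // 45-minute setup, 20 projects: ~15 hours/year
        int improvementsPerYear = 52;                     // one small change every week
        double totalHoursSaved = hoursSavedPerImprovement * improvementsPerYear;

        double hoursPerDeveloperMonth = 160;              // roughly four 40-hour weeks
        System.out.printf("~%.0f hours per year, or ~%.1f developer-months%n",
                totalHoursSaved, totalHoursSaved / hoursPerDeveloperMonth);
        // Prints approximately: ~780 hours per year, or ~4.9 developer-months
    }
}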

Again, I am not dismissing the gathering of metrics; I am just looking at the problem from a different perspective. Instead of measuring output, I am measuring the savings due to added efficiencies. If I can introduce measurable efficiencies without degrading product quality, I don't need to know what my output is in order to know that things are more efficient.

Author's Note: Thought is an ongoing process. I can't conclude with any certainty that my theories are accurate. Accuracy can only be measured by real-world use. If you have reached different conclusions, either in theory or in practice, I would like to hear them.
