Most software dies young

My old ACCU friend Derek Jones has been beavering away at his Evidence Based Software Engineering book for a few years now. Derek takes an almost uniquely hard-nosed, evidence-driven view of software engineering. He works with data. This can make the book hard going in places – and I admit I’ve only scratched the surface. Fortunately Derek also blogs, so I pick up many a good lead there.

One of Derek’s most thought-provoking findings is: most software has a very short lifespan.

At first this finding worried me: so much of what I’ve been preaching about software living for a long time is potentially rubbish. But then I remembered: what I actually say, when I have time, when I’m using all the words, is “Successful software lives” – or survives, even is permanent. (Yes, it’s “temporary” at some level, but so are we; as Keynes said, “In the long run we are all dead”.)

My argument is: software which is successful lives for a long time. Unsuccessful software dies.

Successful software is software which is used, software which delivers benefit, software which fills a genuine need and continues filling that need; and, most importantly, software which delivers more benefit than it costs to keep alive. If it is used it will change, which means people will work on it.

So actually, Derek’s observation and mine are almost the same thing. Derek’s finding is almost a corollary to my thesis: Most software isn’t successful and therefore dies. Software which isn’t used or doesn’t generate enough benefit is abandoned, modifications cease and it dies.

Actually, I think we can break Derek’s observation into two parts, a micro and a macro argument.

At the micro level are lines of code and functions. I read Derek’s analysis as saying: at the function level code changes a lot at certain times. An awful lot of that change happens at the start of the code’s life when it is first written, refactored, tested, fixed, refactored, and so on. Related parts of the wider system are in flux at the same time – being written and changed – and any given function will be impacted by those changes.

While many lines and functions come and go during the early life of software, eventually some code reaches a stable state. One might almost say Darwinian selection is at work here. There is a parallel with our own lives: during our first five years we change a lot; things slow down when we start school, but our lives still change a lot until about the age of 21; after 30 things slow down again. As we get older life becomes more stable.

Assuming software survives and reaches a stable state it can “rest” until such time as something changes and that part of the system needs rethinking. This is Kevlin Henney’s “Stable Intermediate Forms” pattern again (also in ACCU Overload).

At a macro level Derek’s observation applies to entire systems: some are written, used a few times and thrown away – think of a data migration tool. Derek’s data has little to say about whether software lifetimes correspond to expected lifetimes; that would be an interesting avenue to pursue but not today.

There is a question of cause and effect here: does software die young because we set it up to die young, or because it is not fit enough to survive? Undoubtedly both cases happen, but let me suggest that a lot of software dies early because it is created under the project model, and once the project ends there is no way for the software to grow and adapt. Thus it stops changing, its usefulness declines and it is abandoned.

The other question to ponder is: what are the implications of Derek’s finding?

The first implication I see is simply: the software you are working on today probably won’t live very long. Sure, you may want it to live forever, but statistically it is unlikely.

Which leads to the question: what practices help software live longer?

Or should we acknowledge that software doesn’t live long and dispense with practices intended to help it live a long time?

Following our engineering handbook one should create a sound architecture, document the architecture, comment the code, reduce coupling, increase cohesion, and follow other good engineering practices. After all, we don’t want the software to fall down.

But does software die because it fails technically? Does software stop being used because programmers can no longer understand the code? I don’t think so. The prevalence of the “big ball of mud” suggests poor-quality software is common – and lives on regardless.

When I was still coding I worked on lots of really crummy software that didn’t deserve to live, but it did because people found it useful. If software died because it wasn’t written for old age then one wouldn’t hear programmers complaining about “technical debt” (or technical liabilities, as I prefer).

Let me suggest: software dies because people no longer use it.

Thus, it doesn’t matter how many comments or architecture documents one writes: if software is useful it will survive, and people will demand changes irrespective of how well designed the code is. Sure, it might be more expensive to maintain because that thinking wasn’t put in, but…

For every system that survives to old age many more systems die young. Some of those systems are designed and documented “properly”.

I see adverse selection at work: systems which are built “properly” take longer and cost more, but in the early years of life those additional costs are a hindrance. Maybe engineering “properly” makes the system more likely to die early. Conversely, systems which forego those extra costs stand a better chance of demonstrating their usefulness early and breaking even in terms of cost-benefit.

Something like this happened with Multics and Unix. Multics was an ambitious effort to deliver a novel OS but failed commercially. Unix was less ambitious and was successful in ways nobody ever expected. (The CPL, BCPL, C story is similar.)

In fact, this all starts to sound a lot like Dick Gabriel’s Worse is Better argument. Perhaps there is a pattern here.

Finally, what about tests – is it worth investing in automated tests?

Arguably, writing tests so software will be easier to work on in future is waste, because the chances are your software will not live. However, at the unit test level, and even at the acceptance test level, that is not the primary aim of such tests. At this level tests are written so programmers create the correct result faster. Once someone is proficient, writing test-first unit tests is faster than debug-later coding.

To be clear: the primary driver for writing automated unit tests in a test-first fashion is not a long-term gain in testing speed; it is delivering working code faster in the short term.
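To make the rhythm concrete, here is a minimal sketch (the is_leap_year example is mine, purely for illustration – nothing here comes from Derek’s data): the tests are written first, run once to watch them fail, and only then is just enough code written to make them pass.

```python
import unittest

# Test-first: these tests are written before is_leap_year exists.
# Running them first and watching them fail confirms the tests themselves work.
class LeapYearTest(unittest.TestCase):
    def test_ordinary_year_is_not_leap(self):
        self.assertFalse(is_leap_year(2023))

    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_fourth_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Only now is the production code written – just enough to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```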

However, writing regression tests probably doesn’t make sense, because the software is unlikely to be around long enough for them to pay back. Fortunately, if you write solid unit and acceptance tests, these double as regression tests.

