Monday, 25 November 2013

Three Surprising Causes of Technical Debt

I’ve been thinking a lot about technical debt over recent years. The codebase I work on has grown steadily to around 1 million lines of code (that’s counting whitespace and comments – there’s ‘only’ about 400K actual statements), and so the challenge of maintaining control over technical debt is one I think about on a daily basis.

What I’ve also noticed is an ever-increasing number of blogs, books, conference sessions and user group talks tackling this subject. It seems lots of us are facing the same issue. But why are so many of us getting ourselves into such difficulties? What’s causing all this technical debt? I’ve got some ideas, which you might find a little surprising…

1. Computers are too Powerful

In my first programming job I wrote a C application to run on a handheld barcode scanner. My development machine was a green-screen 286 laptop, which I dubbed “the helicopter”, due to the racket its fan made. The application consisted of about 20 text-based screens, some “database” management, and some serial port communications code. If I made a change that required a full rebuild, I’d go for a walk, because it would take my development machine a couple of hours. That application had 9,423 lines of code (4,465 statements). I couldn’t use that machine to build a one million line system even if I wanted to. The build would take three years.

Compare that with the development machine I use today. It can do a clean build of my million line codebase in under 30 minutes including running the unit tests. An incremental build takes a couple of minutes. I can have the client solution (90 sub-projects) and the server solution (50 sub-projects) both running under the Visual Studio debugger, with CPU and RAM to spare. All one million lines of code are in a single source code repository, and our version control system barely breaks a sweat.

What’s more, the end users’ computers have more than enough power. Our client and server can run quite happily on low-end hardware. We could probably add another million lines of code before we ran into serious memory usage issues.

But why is this a problem? Hasn’t Moore’s law been absolutely brilliant for the software development world? Well yes and no. It allows us to do things that were simply beyond the realms of possibility only a decade ago. We can create bigger and more powerful systems than ever before. And that’s exactly what we’re doing.

But imagine for a moment if Visual Studio only allowed a maximum of 10 projects in a solution. Imagine your version control system could cope with a maximum of 10,000 lines of code. Imagine your compiler supported a maximum of 100 input files. What would you do?

What you’d do is modularise. You’d be forced to use vertical layering as well as horizontal layering. You’d seriously consider a micro-service architecture. In short, you wouldn’t be able to create a monolith. You’d have to create lightweight, loosely coupled modules instead.

The bigger a codebase is, the more technical debt it contains. And the power of modern computers makes it far too easy to create gigantic monolithic codebases.

I’m not suggesting that we need less powerful computers. We just need the self-discipline to know when to stop adding features to a single component, because the computer isn’t going to stop us until we’ve gone way too far.

2. Frameworks are too Powerful

For me, it started with VB6, and .NET took it to a whole new level. It was called “RAD” at the time – “rapid application development”. Things that would take weeks with the previous generation of technologies could be done in days. In just a few lines of code you could express what used to take hundreds of lines.

Now in theory, this is good for technical debt. Fewer lines of code means more maintainable code, right? Well yes and no.

The rise of all-encompassing frameworks like the .NET FCL, and of the myriad open source libraries that augment it, means that our code is much denser than before. In 20 lines of code using modern languages and frameworks, we can do far more than we could achieve in 20 lines of 1990 code. This means that a modern 10KLOC codebase will have many more features than a 10KLOC codebase from 20 years ago.
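
To make that density concrete, here’s a contrived C# sketch (the file name and record layout are invented for illustration). A handful of LINQ lines do what would once have needed explicit loops, temporary arrays and a hand-written sort:

    using System;
    using System.IO;
    using System.Linq;

    class DensityDemo
    {
        static void Main()
        {
            // Read a hypothetical CSV of "name,score" rows and print the
            // ten highest scorers. In 1990-era C this would mean manual
            // file handling, parsing, sorting and memory management.
            var topTen = File.ReadAllLines("scores.csv")
                             .Select(line => line.Split(','))
                             .Select(f => new { Name = f[0], Score = int.Parse(f[1]) })
                             .OrderByDescending(x => x.Score)
                             .Take(10);

            foreach (var entry in topTen)
                Console.WriteLine("{0}: {1}", entry.Name, entry.Score);
        }
    }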

So not only has the growth in computing power meant we can write more lines of code than was previously possible; the growth in framework capabilities has meant we can achieve more, line for line, than ever before.

How is this a problem? If a feature that would take 1,000 lines of code to implement without a powerful framework can be implemented in 100 with one, doesn’t that make the code more maintainable? The trouble is that we can’t properly reason about code that uses a framework if we know nothing about that framework.

So if I need to debug a database issue and the code uses Entity Framework Code First, I’ll need to learn about Entity Framework to understand what is going on. And in a one million line codebase, there will be quite a lot of frameworks, libraries and technologies that I find myself needing to understand.
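
For readers who haven’t met it, the Code First “magic” looks something like this (the model and classes here are hypothetical, cut down for illustration):

    using System.Data.Entity;   // Entity Framework Code First
    using System.Linq;

    // EF infers a Customers table, its columns and its key from this class.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var db = new ShopContext())
            {
                // This LINQ query is silently translated into SQL and run
                // against a database whose schema EF generated. If it
                // misbehaves, knowing where to put the breakpoint means
                // knowing how EF works.
                var names = db.Customers
                              .Where(c => c.Name.StartsWith("A"))
                              .Select(c => c.Name)
                              .ToList();
            }
        }
    }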

It’s not uncommon to find several competing frameworks co-existing within the same monolithic codebase: there’s Entity Framework, but there’s also NHibernate, Linq2Sql, some raw ADO.NET and maybe some Simple.Data too. There’s XML serialization, binary serialization and JSON serialization. There’s a WCF service, a .NET Remoting service and a Web API service. There might be some custom frameworks that were created specifically for this codebase, but their documentation is poor or non-existent.

The bigger the codebase, the more frameworks and libraries you will need to learn in order to make sense of what’s going on. And unless you understand these frameworks, it will seem like things are happening by magic. You know the system’s doing it, but you have no idea where to put the breakpoint to debug it. And you have no idea which line of code you need to modify to fix it.

I’m certainly not suggesting that we should stop using frameworks or open source libraries. I’m not even suggesting that we need to standardise on one single framework and enforce it right through the codebase. What I am suggesting, once again, is that if our large systems were composed of much smaller pieces, rather than being monoliths, the number of frameworks you’d need to learn in order to understand any one piece would be greatly reduced. And if you didn’t like the choice of framework, you’d have the option of rewriting that component with a better one.
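
As a sketch of what “replaceable” might look like (all the names here are hypothetical), the idea is to hide each framework choice behind a small interface owned by the consuming module:

    using System.Collections.Generic;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // A narrow, framework-free boundary; callers depend only on this.
    public interface IOrderStore
    {
        Order Load(int id);
        void Save(Order order);
    }

    // One small module implements the boundary - in memory here, but it
    // could equally be Entity Framework or raw ADO.NET. If the choice
    // turns out badly, the module can be rewritten against the same
    // interface without touching any of its callers.
    public class InMemoryOrderStore : IOrderStore
    {
        private readonly Dictionary<int, Order> _orders =
            new Dictionary<int, Order>();

        public Order Load(int id) { return _orders[id]; }
        public void Save(Order order) { _orders[order.Id] = order; }
    }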

3. Programming is too Easy

My final reason why we are finding it so easy to create mountains of technical debt will perhaps be the most controversial of the three. It was inspired by Uncle Bob Martin’s recent article “Hordes of Novices”, in which he laments the fact that the software industry is flooded with novice developers writing bad code. We’d get more done, he argues, with fewer programmers writing better code. And I agree with him. He questions whether we really need more code to be written at all…

Or do we want less code? Less code that does more. Much less code, written much better, doing much, much more?

But why is the software industry letting novices loose on its code? Because programming is too easy. Give a junior developer a set of clear requirements, and eventually they’ll produce some code that (sort of) does what was requested. They might take longer than a senior developer, but they will finish.

And so, from management’s perspective, adding more programmers appears to work. We sneak in a few extra features that the senior developers didn’t have time for. The junior developers work more slowly, but they’re paid less, so it all balances out in the end.

But we know that the difference between novice and senior programmers’ code goes well beyond how long it took to write. For starters, the novice programmer will create more lines of code, often by a factor of 10 or more, so maintainability will be an issue. The novice will also typically leave more bugs in their code. For sure, the testers will have found the obvious ones. But I’m talking about time-bomb bugs, accidents waiting to happen, like storing a date-time in local time rather than UTC. When the clocks go back and the system falls over, it will be the senior developers sorting out the mess.
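
That local-time bug is worth spelling out, because it compiles, passes testing, and only detonates twice a year. A minimal sketch:

    using System;

    class TimestampDemo
    {
        static void Main()
        {
            // The time-bomb: DateTime.Now is local time. When the clocks
            // go back, the same hour of local time occurs twice, so the
            // difference between two local timestamps taken either side
            // of the change is wrong - it can even come out negative.
            DateTime startedLocal = DateTime.Now;

            // The boring, correct version: store UTC, and convert to
            // local time only at the moment of display.
            DateTime startedUtc = DateTime.UtcNow;

            Console.WriteLine("Local elapsed (unsafe): {0}", DateTime.Now - startedLocal);
            Console.WriteLine("UTC elapsed (safe):     {0}", DateTime.UtcNow - startedUtc);
            Console.WriteLine("For display: {0}", startedUtc.ToLocalTime());
        }
    }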

And that’s not to mention areas like performance and error handling, where novices typically do a poor job, or simply the way they choose to implement the feature. Very often a novice will make changes in all the wrong places: cutting and pasting code instead of isolating the change, or injecting special knowledge into parts of the system that are supposed to be generic. Their unmaintainable code sprawls right across the system, making it extremely difficult to undo the mess.

Of all my three reasons for technical debt, this was the one I least wanted to write about, because it seems mean-spirited and egocentric: “I’m the senior developer and my code is flawless – it’s you novices who are messing things up and causing the technical debt problem.” But the blame game doesn’t get us anywhere. They may be novices, but usually they are conscientious and professional. They’re doing their best, and learning as they go.

In any case, what would the solution be? Hire no novices? Then where would the next generation of master craftsmen come from? Once again, I think the answer comes in the form of composing a large system out of much smaller, loosely-coupled and easily replaceable components. If you must be quick to market, by all means let a junior developer build a small (1,000 lines of code) module. But if it turns out to be buggy or unmaintainable, let a senior developer delete the whole thing and code it again from scratch. The novices who progress to the next level are the ones whose components don’t need to be thrown away.

The bigger the codebase, the more code it will contain that was written in a sub-optimal way, often by novices. But in a big codebase, getting rid of bad code is extremely difficult. So if you must let novices write code, isolate their work in small, replaceable modules, and don’t be afraid to throw those modules away at the first sign of trouble.

In fact, this applies equally to senior developers. If everyone is creating small replaceable components, technical debt can be tackled much more effectively.

Conclusion

The reason so many of us are deep in technical debt is that it’s become far too easy to rack up a huge amount of debt before you realise you’re in trouble. Our computers are so powerful that they don’t warn us when we’re trying to do too much. New frameworks allow us to write succinct and powerful code, but each one adds yet another item to the list of things you must understand in order to work on the codebase. And you can hire a dozen novice programmers and, two months later, add a dozen shiny new features to your codebase. But the mess left behind will probably never be fully cleaned up.

What’s the solution? Well, I’ve said it three times already, so it won’t hurt to say it again: the more I think about technical debt, the more convinced I become that the answer is to compose large systems out of small, loosely-coupled and replaceable parts. Refactoring is one way to pay down technical debt, but it is often too slow, and barely covers the interest payments. Much better to be able to throw the code in the bin and write it again the right way. But with a monolith, that’s simply not possible.

Disagree? Tell me why in the comments…

1 comment:

Josh Panzarella said...

As I was reading, two issues stood out: the problem with novice developers, and the problem of unknown frameworks. That suggested what might seem a trite solution: have the novice developers research and document (to the best of their abilities) the frameworks, so that the senior developers can get much quicker insight into the tools they are working with. The novices get to research and learn best practices, working individually or with senior developers, and the senior developers are left free to complete their tasks.