Monday 25 November 2013

Three Surprising Causes of Technical Debt

I’ve been thinking a lot about technical debt over recent years. The codebase I work on has grown steadily to around 1 million lines of code (that’s counting whitespace and comments – there’s ‘only’ about 400K actual statements), and so the challenge of maintaining control over technical debt is one I think about on a daily basis.

What I’ve also noticed is an ever-increasing number of blogs, books, conference sessions and user group talks tackling this subject. It seems lots of us are facing the same issue. But why are so many of us getting ourselves into such difficulties? What’s causing all this technical debt? I’ve got some ideas, which you might find a little surprising…

1. Computers are too Powerful

In my first programming job I wrote a C application to run on a handheld barcode scanner. My development machine was a green-screen 286 laptop, which I dubbed “the helicopter”, due to the racket its fan made. The application consisted of about 20 text-based screens, some “database” management, and some serial port communications code. If I made a change that required a full rebuild, I’d go for a walk, because it would take my development machine a couple of hours. That application had 9,423 lines of code (4,465 statements). I couldn’t use that machine to build a one million line system even if I wanted to. The build would take three years.

Compare that with the development machine I use today. It can do a clean build of my million line codebase in under 30 minutes including running the unit tests. An incremental build takes a couple of minutes. I can have the client solution (90 sub-projects) and the server solution (50 sub-projects) both running under the Visual Studio debugger, with CPU and RAM to spare. All one million lines of code are in a single source code repository, and our version control system barely breaks a sweat.

What’s more, the end users’ computers have more than enough power. Our client and server can run quite happily on low-end hardware. We could probably add another million lines of code before we ran into serious memory usage issues.

But why is this a problem? Hasn’t Moore’s law been absolutely brilliant for the software development world? Well yes and no. It allows us to do things that were simply beyond the realms of possibility only a decade ago. We can create bigger and more powerful systems than ever before. And that’s exactly what we’re doing.

But imagine for a moment if Visual Studio only allowed a maximum of 10 projects in a solution. Imagine your version control system could cope with a maximum of 10,000 lines of code. Imagine your compiler supported a maximum of 100 input files. What would you do?

What you’d do is modularise. You’d be forced to use vertical layering as well as horizontal layering. You’d seriously consider a micro-service architecture. In short, you wouldn’t be able to create a monolith. You’d have to create lightweight, loosely coupled modules instead.

The bigger a codebase is, the more technical debt it contains. And the power of modern computers makes it far too easy to create gigantic monolithic codebases.

I’m not arguing that we need less powerful computers. We just need the self-discipline to know when to stop adding features to a single component. Because the computer isn’t going to stop us until we’ve gone way too far.

2. Frameworks are too Powerful

For me, it started with VB6, and .NET took it to a whole new level. It was called “RAD” at the time – “rapid application development”. Things that would take weeks with the previous generation of technologies could be done in days. In just a few lines of code you could express what used to take hundreds of lines.

Now in theory, this is good for technical debt. Fewer lines of code means more maintainable code, right? Well yes and no.

The rise of all-encompassing frameworks like the .NET FCL, and all the myriad open source libraries that augment it, means that our code is much denser than before. In 20 lines of code using modern languages and frameworks, we can do far more than we could achieve using 20 lines of 1990 code. This means that a modern 10KLOC codebase will have many more features than a 10KLOC codebase from 20 years ago.

So not only has the growth in computing power meant we can write more lines of code than was previously possible, but the growth in framework capabilities has meant we can achieve more line for line than ever before.

How is this a problem? If a feature that would take 1,000 lines of code to implement without a powerful framework can be implemented in 100 using a framework, doesn’t that make the code more maintainable? The trouble is that we can’t properly reason about code that uses a framework if we know nothing about that framework.

So if I need to debug a database issue and the code uses Entity Framework Code First, I’ll need to learn about Entity Framework to understand what is going on. And in a one million line codebase, there will be quite a lot of frameworks, libraries and technologies that I find myself needing to understand.

It’s not uncommon to find several competing frameworks co-existing within the same monolithic codebase, so there’s Entity Framework, but there’s also NHibernate, and Linq2Sql, and some raw ADO.NET, and maybe some Simple.Data too. There’s XML serialization, binary serialization and JSON serialization. There’s a WCF service, a .NET Remoting service, and a WebAPI service. There might be some custom frameworks that were created specifically for this codebase, but their documentation is poor or non-existent.

The bigger the codebase, the more frameworks and libraries you will need to learn in order to make sense of what’s going on. And unless you understand these frameworks, it will seem like things are happening by magic. You know the system’s doing it, but you have no idea where to put the breakpoint to debug it. And you have no idea which line of code you need to modify to fix it.

I’m certainly not arguing that we need to stop using frameworks or open source libraries. I’m not even suggesting that we need to standardise on one single framework and enforce it right through the codebase. What I am suggesting, once again, is that if our large systems were composed out of much smaller pieces, rather than being monoliths, the number of frameworks you’d need to learn in order to understand the code would be greatly reduced. And if you didn’t like their choice of framework, you’d have the option to rewrite that component with a better one.

3. Programming is too Easy

My final reason why we are finding it so easy to create mountains of technical debt will perhaps be the most controversial of the three. It was inspired by Uncle Bob Martin’s recent article “Hordes of Novices”. In his article he laments the fact that the software industry is flooded with novice developers writing bad code. We’d get more done, he argues, with fewer programmers writing better code. And I agree with him. He questions whether we do in fact really need more code to be written…

Or do we want less code? Less code that does more. Much less code, written much better, doing much, much more?

But why is the software industry letting novices loose on their code? Because programming is too easy. Give a junior developer a set of clear requirements, and eventually they’ll produce some code that (sort of) does what was requested of them. They might take longer than the senior developer, but they will finish.

And so from management’s perspective, adding more programmers appears to work. We sneak in a few extra features that the senior developers didn’t have time for. The junior developers work slower, but they’re paid less, so it all balances out in the end.

But what we know is that there is more difference between the novice and senior programmers’ code than just how long it took to write. For starters, the novice programmer will create more lines of code, often by a factor of 10 or more. So maintainability will be an issue. Also, the novice programmer will typically have more bugs in their code. For sure, the testers will have found the obvious ones. But I’m talking about time-bomb bugs, accidents waiting to happen, like storing the date-time in local time rather than UTC. When the clocks go back and the system falls over, it will be the senior developers sorting out the mess.

And that’s not to mention areas like performance and error handling, which novices typically do a poor job at. Or simply the way they choose to implement the feature. Very often a novice will make code changes in all the wrong places, with cut-and-paste coding instead of isolating the change, or injecting special knowledge into parts of the system that are supposed to be generic. Their unmaintainable code sprawls right across the system, making it extremely difficult to undo the mess.

Of my three reasons for technical debt, this was the one I least wanted to write about, because it seems mean-spirited and ego-centric. “I’m the senior developer and my code is flawless – it’s you novices who are messing things up and causing the technical debt problem”. But the blame game doesn’t get us anywhere. They may be novices, but usually they are conscientious and professional. They’re doing their best, and learning as they go.

In any case, what would the solution be? Hire no novices? Where will the next generation of master craftsmen come from? Once again I think the answer comes in the form of composing a large system out of much smaller, loosely-coupled and easily replaceable components. If you must be quick to market, by all means let a junior developer build a small (1,000-line) module. But if it turns out to be buggy or unmaintainable, then let the senior developer delete the whole thing and code it again from scratch. The novices who progress to the next level are the ones whose components don’t need to be thrown away.

The bigger the codebase, the more code it will contain that was written in a sub-optimal way, often by novices. But in a big codebase, getting rid of bad code is extremely difficult. So if you must let novices write code, isolate their work in small replaceable modules, and don’t be afraid to throw them away at the first sign of trouble.

In fact, this applies equally to senior developers. If everyone is creating small replaceable components, technical debt can be tackled much more effectively.

Conclusion

The reason so many of us are deep in technical debt is that it’s become far too easy to rack up a huge amount of debt before you realise you’re in trouble. Our computers are so powerful they don’t warn us that we’re trying to do too much. New frameworks allow us to write succinct and powerful code, but each one adds yet one more item to the list of things you must understand in order to work on the codebase. And you can hire a dozen novice programmers and two months later add a dozen shiny new features to your codebase. But the mess that is left behind will probably never be fully cleaned up.

What’s the solution? Well I’ve said it three times already, so it won’t hurt to say it again. The more I think about technical debt, the more convinced I become that the solution is to compose large systems out of small, loosely-coupled and replaceable parts. Refactoring is one way to eliminate technical debt, but it is often too slow, and barely covers the interest payments. Much better to be able to throw code into the bin and write it again the right way. But with a monolith, that’s simply not possible.

Disagree? Tell me why in the comments…

Saturday 16 November 2013

How to Convert a Mercurial Repository to Git on Windows

There are various guides on the internet to converting a Mercurial repository into a Git one, but I found that they tended to assume you had certain things installed that might not be there on a Windows PC. So here’s how I did it, with TortoiseHg installed for Mercurial, and using the version of git that comes with GitHub for Windows. (Both hg and git need to be in your path to run the commands shown here.)

Step 1 – Clone hg-git

hg-git is a Mercurial extension. The Bitbucket repository below seems to be the official one. Clone it locally:

hg clone https://bitbucket.org/durin42/hg-git

Step 2 - Add hg-git as an extension

You now need to add hg-git as a Mercurial extension. This can be done either by editing the mercurial.ini file that TortoiseHg puts in your user folder, or by enabling it for just this one repository: edit (or create) the hgrc file in the .hg folder and add the following configuration:

[extensions]
hggit = c:/users/mark/code/temp/hg-git/hggit

Step 3 – Create a bare git repo to convert into

You now need a git repository to convert into. If you already have one created on GitHub or BitBucket, then this step is unnecessary. But if you want to push to a local git repository, then it needs to be a “bare” repo, or you’ll get this error: “abort: git remote error: refs/heads/master failed to update”. Here’s how you make a bare git repository:

git init --bare git_repo

Step 4 – Push to Git from Mercurial

Navigate to the Mercurial repository you wish to convert. The hg-git documentation says you need to run the following one-time configuration:

hg bookmarks hg

And having done that, you can now push to your git repository, with the following simple command:

hg push path\to\git_repo

You should see that all your commits have been pushed to Git, and you can navigate into the bare repository folder and run git log to make sure it worked.

Step 5 – Get a non-bare git repository

If you followed step 3 and converted into a bare repository, you might want to convert it to a regular git repository. Do that by cloning it again:

git clone git_bare_repo git_regular_repo

Thursday 7 November 2013

Thoughts on using String Object Dictionary for DTOs in C#

When you have a large enterprise system, you end up with a very large number of data transfer objects / business entities that need to be persisted into databases and serialized over various network interfaces. In C#, these DTOs will usually be represented as strongly typed classes. And it’s not uncommon to have an inheritance hierarchy of DTOs, with many different “types” of a particular entity needing to be supported.

For the most part, using strongly typed DTOs in C# just works, but as a system grows over time, making changes to these objects or introducing new ones can become very painful. Each change will result in database schema updates, and if cross-version serialization and deserialization is required (where a DTO serialized in one version of your application needs to be deserialized in another), changes can also break the ability to load legacy data.

Here’s a rather contrived example, for a system that needs to let the user configure “Storage Devices”. Several different types of storage device need to be supported, and each one has its own unique properties. Here are some classes that might be created in C# for such a system:

class StorageDevice { 
    public int Id { get; set; } 
    public string Name { get; set; }
}

class NetworkShare : StorageDevice {
    public string Path { get; set; }
    public string LoginName { get; set; }
    public string Password { get; set; }
}

class CloudStorage : StorageDevice {
    public string ServerUri { get; set; }
    public string ContainerName { get; set; }
    public int PortNumber { get; set; }
    public Guid ApiKey { get; set; }
}

These types are nice and simple, but already we run into some problems when we want to store them in a relational database. Quite often three tables will be used, one called “StorageDevices” with the ID and Name properties, and then one called “NetworkShares” linking to a storage device ID, and storing the three fields for Network Shares. Then you need to do the same for “CloudStorage”. To add a new type of storage or change an existing one in any way requires a database schema update.

Cross-version serialization is also very fragile with this approach. Your codebase can end up littered with obsolete types and properties just to avoid breaking deserialization.

This type of object hierarchy can introduce a code smell. We may well end up writing code that breaks the Liskov Substitution Principle, where we need to discover what the concrete type of a base “StorageDevice” actually is before we can do anything useful with it. This is exacerbated by the fact that developers cannot move properties from a derived type down into the base class for fear of breaking serialization.
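
To illustrate the kind of code this leads to, here’s a hypothetical helper (my own invention, not from any real codebase) that has to discover the concrete type before it can do anything:

static string DescribeConnection(StorageDevice device) {
    // hypothetical example of the smell: nothing useful can be done
    // until we've worked out which concrete type we've actually got
    if (device is NetworkShare) {
        return "Network share at " + ((NetworkShare)device).Path;
    }
    if (device is CloudStorage) {
        var cloud = (CloudStorage)device;
        return "Cloud container " + cloud.ContainerName + " at " + cloud.ServerUri;
    }
    throw new NotSupportedException("Unknown type of StorageDevice");
}

Every new type of StorageDevice means another branch here, and in every other place that does the same dance.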

This approach also doesn’t lend itself well to generic extensibility. What if we wanted third parties to be able to extend our application to support new types of StorageDevice, with our code agnostic to what the concrete implementation type actually is? As it stands, those new types would need their own new database table to be stored in, and it would be very hard to write generic code that allowed configuration of those objects.

The String-Object Dictionary

A potential solution to this problem is to replace the entire inheritance hierarchy with a simple string-object dictionary:

class StorageDevice {         
    public IDictionary<string, object> Properties { get; set; }
}

The idea behind this approach is that now we never need to modify this type again. It can store any arbitrary properties, and we don’t need to create derived types. It is basically a poor man’s dynamic object, and in theory in C# you could even just use an ExpandoObject. However, having a proper type opens the door to creating extension methods that simplify getting certain key properties in and out of the dictionary. This can mitigate the biggest weakness of this approach, which is losing type safety on the properties of the object.
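
For instance, a couple of extension methods along these lines (the names are purely illustrative, not from any existing library) can restore a degree of type safety for well-known keys:

static class StorageDeviceExtensions {
    // typed getter: returns a default value if the key is missing or holds the wrong type
    public static T GetProperty<T>(this StorageDevice device, string key, T defaultValue = default(T)) {
        object value;
        if (device.Properties != null && device.Properties.TryGetValue(key, out value) && value is T) {
            return (T)value;
        }
        return defaultValue;
    }

    // typed setter: creates the dictionary on first use
    public static void SetProperty(this StorageDevice device, string key, object value) {
        if (device.Properties == null) {
            device.Properties = new Dictionary<string, object>();
        }
        device.Properties[key] = value;
    }
}

Calling code can then write device.SetProperty("PortNumber", 443) and read it back with device.GetProperty<int>("PortNumber"), without the raw dictionary leaking everywhere.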

These objects are more robust against version changes. You can tell that an object comes from a previous version of your system by what keys are and aren’t present in the dictionary (you could even have a version property if you wanted to be really safe), and do any conversions as appropriate. And you can successfully use objects from future versions of your system so long as the properties you need to work with are present.
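
As a minimal sketch of that idea (the “SchemaVersion” key is entirely my own invention):

static class StorageDeviceVersioning {
    const int CurrentSchemaVersion = 2;

    // illustrative only: an object with no version key is assumed to predate versioning
    public static bool NeedsUpgrade(StorageDevice device) {
        object version;
        if (device.Properties.TryGetValue("SchemaVersion", out version)) {
            return (int)version < CurrentSchemaVersion;
        }
        return true;
    }
}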

Persisting these objects to a database presents something of a challenge, since you’d need to store an arbitrary object into a single field. And that object could itself be a list of objects, or an object of arbitrary complexity. Probably JSON or XML serialization would be a good approach. In many ways, these lend themselves very well to a document database, although for legacy codebases, you may be tied into a relational database. You could still run into deserialization issues if the objects stored as values in the databases were subject to change. So those objects might also need to be string-object dictionaries too.
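
Here’s a rough sketch of the JSON option using Json.NET, assuming the dictionary only holds simple JSON-friendly values (anything richer would need custom converters, since Json.NET deserializes nested values back as JObjects and longs rather than their original types):

// requires Newtonsoft.Json; "device" is an existing StorageDevice instance
string json = JsonConvert.SerializeObject(device.Properties);
// json can now be stored in a single text column in the database

// ...and to load it back out again:
var properties = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
var restored = new StorageDevice { Properties = properties };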

Another issue you might run into is deciding what to do about additional properties you want to add to the object but not serialize. Many developers will be tempted to put extra stuff into the dictionary for convenience. One possible option would be to use namespacing on the keys, so that anything with a key starting with “temp.” wouldn’t be serialized or persisted to the database, for example. I’d be interested to know if anyone else has tackled this problem.
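
For what it’s worth, here’s roughly how that filtering might look with the “temp.” convention suggested above (just a sketch):

// requires System.Linq: strip transient entries before serializing or persisting
var toPersist = device.Properties
    .Where(kvp => !kvp.Key.StartsWith("temp."))
    .ToDictionary(kvp => kvp.Key, kvp => kvp.Value);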

Conclusion

String-object dictionaries are a powerful way of avoiding some tricky versioning issues and making your system very extensible. But in many ways they feel like trying to shoehorn a dynamic language approach into a statically typed one. I’ve noticed them cropping up in more and more places in C# programming though, such as the Katana project, which uses one for its “environment dictionary”.

I think for one of the very large projects I am working on at the moment, they could be a perfect fit, allowing us to make the system significantly more flexible and extensible than it has been in the past.

But I am actively on the lookout at the moment for any articles for or against using this technique, or any significant open source projects that are taking this approach. So let me know in the comments what you think.

I did ask a question about this on Programmers Stack Exchange and got the rather predictable (“that’s insane”) replies, but I think there is more mileage in this approach than is immediately apparent. In particular, it is the need for generic extensibility without database schema updates, and for cross-version deserialization, that pushes you in this direction.

Monday 4 November 2013

Announcing “Audio Programming with NAudio”

I’m really excited to announce the release of my latest Pluralsight course “Audio Programming with NAudio”. This is the follow-on course to my introductory “Digital Audio Fundamentals” course, and is intended to give a thorough and systematic run-through of how to use all the major features of NAudio.
The modules are as follows:
  • Introducing NAudio in which I go through all the things you can find in NAudio, and explain what the demo applications do as well. This module also introduces the three base classes/interfaces for all signal chain components in NAudio (IWaveProvider, WaveStream and ISampleProvider), which are crucial to understanding how to work with NAudio.
  • Audio File Playback explains how to decide which of NAudio’s playback classes you should use, and how they are configured. I cover lots of important playback-related tasks such as how to change volume, reposition, and even how to stop (yes, that can require more thought than you might imagine).
  • Working with Files deals with the various file readers in NAudio, as well as showing how to create wave files with WaveFileWriter.
  • Changing Audio Formats might seem longwinded and a bit boring, but is actually one of the most important modules in the course. It explains how to change the sample rate, bit depth and channel count of PCM audio. This is something that you need to do regularly when dealing with sampled audio, and there are lots of different approaches you can take.
  • Working with Codecs explains how you can use the ACM and MediaFoundation codecs on your computer, and also gives a couple of other techniques for working with codecs that you might find helpful.
  • Recording Audio shows how to use the various NAudio recording classes, including loopback capture, and a brief demo of high performance low level capture using ASIO.
  • Visualizations is not really about NAudio, but gives some useful strategies for creating peak meters, waveform displays and spectrum analyzers in both WinForms and WPF. I included it because this is something lots of NAudio users wanted to know how to do.
  • Mixing and Effects is probably my favourite module in the course. To show how to do mixing and effects I create a software piano, and turn it into a software synthesizer.
  • Audio Streaming covers another topic that NAudio doesn’t directly address, but that lots of NAudio users are interested in. This module shows how you can implement playback of streaming media, and how you would go about creating a network chat application.
I really hope that everyone planning to create a serious application with NAudio will take the time to watch this course (and the previous one if necessary). You’ll find you will be a lot more productive once you know the basics of digital audio, and understand the design philosophy behind the NAudio library.
I know Pluralsight is not free, but receiving some financial compensation for the time I put into this course enabled me to spend a lot more time on it than I would have been able to otherwise. The Pluralsight library is ridiculously well stocked with courses on all kinds of development technologies and practices, so you can’t fail to get good value for the monthly subscription fee. Having said that, please be assured that I remain committed to providing good training materials on NAudio, and I plan to keep blogging and updating the main site with more documentation.
I have actually reached the point with NAudio that I am getting far more requests for help than I can keep up with. I’m averaging 10 questions/emails per day, and so if I miss even a few days I get hopelessly behind. If I’ve failed to answer your question, please accept my apologies, and part of the goal in creating this course is having something I can point people to. So if I answer your question with a link to this course, please don’t be offended.

Sunday 3 November 2013

Finding Prime Factors in F#

After having seen a few presentations on F# over the last few months, I’ve been looking for some really simple problems to get me started, and this code golf question inspired me to see if I could factorise a number in F#. I started off by doing it in C#, taking the same approach as the answers on the code golf site (which are optimised for code brevity, not performance):

var n = 502978;
for (var i = 2; i <= n; i++)
{
    while (n%i < 1)
    {
        Console.WriteLine(i);
        n /= i;
    }
}

Obviously it would be possible to try to simply port this directly to F#, but it felt wrong to do so, because there are two mutable variables in this approach (i and n). I started wondering if there was a more functional way to do this working entirely with immutable types.

The algorithm we have starts by testing if 2 is a factor of n. If it is, it divides n by 2 and tries again. Once we’ve divided out all the factors of 2, we increment the test number and repeat the process. Eventually we get down to the final factor, when i equals n.

So the F# approach I came up with uses a recursive function. We pass in the number to be factorised, the potential factor to test (so we start with 2), and an (immutable) list of factors found so far, which starts off as an empty list. Whenever we find a factor, we create a new immutable list with the old factors as a tail, and call ourselves again. This means n doesn’t have to be mutable – we simply pass in n divided by the factor we just found. Here’s the F# code:

let rec f n x a = 
    if x = n then
        x::a
    elif n % x = 0 then 
        f (n/x) x (x::a)
    else
        f n (x+1) a
let factorise n = f n 2 []
let factors = factorise 502978

The main challenge in getting this working was figuring out where I needed to put brackets (and remembering not to use commas between arguments). You’ll also notice I created a factorise function, saving me from passing in the initial values of 2 and an empty list. This is one of F#’s real strengths, making it easy to combine functions like this.

Obviously there are performance improvements that could be made to this, and I would also like at some point to work out how to make a non-recursive version of this that still uses immutable values. But at least I now have my first working F# “program”, so I’m a little better prepared for the forthcoming F# Tutorial night with Phil Trelford at devsouthcoast.

If you’re an F# guru, feel free to tell me in the comments how I ought to have solved this.