Wednesday 31 October 2012

Upgrading family PCs to Windows 8

I have five children, and with only three PCs in the house, it can be a challenge to get access to one. The time has probably come to buy another. However, managing accounts on three PCs for seven people is hardly fun, and a fourth would just add to the workload. So the idea of upgrading to Windows 8, with its accounts that can sync their settings across PCs, appeals to me. I decided this week to take advantage of the £25 upgrade offer and set to work.

Upgrade Assistant

Normally when I upgrade to a new version of Windows, I will do a fresh install. But the thought of having to recreate seven user accounts and set them all up with their preferences was not appealing, so I opted for an upgrade, keeping all apps and settings. There is also an upgrade that only keeps settings, but I presume that would mean that programs like Office, iTunes, CrashPlan, DropBox etc would get lost along the way.

The upgrade assistant examines your system and warns you about issues you might face. This is a nice touch and it warned me that I needed to uninstall Microsoft Security Essentials and that there might be a problem with VS2010 SP1. It also told me I needed to deauthorize my computer on iTunes, which it didn’t explain, but this turns out to be a fairly simple task (my wife uses iTunes but I don’t so I’m not too familiar with the interface). It warns that Windows 8 doesn’t come with DVD playback as standard, although there does seem to be a rather convoluted way to add Windows Media Center for free if you take up the upgrade offer. The one thing the upgrade assistant forgot to tell me was to uninstall Windows Live Family Safety first, which I should have done since I want to use the Windows 8 family safety settings instead.

It was a little irritating that the upgrade assistant insists on downloading Windows on every machine you upgrade. It is also worth noting that it won’t give you the option to go from x86 to x64, although my two 32-bit Windows 7 machines have no real need for more than 4GB of RAM, so it is no big deal.

All told, the installation took around 2 hours. It backs up the old installation to a folder called Windows.old, which can be deleted if you are sure the transition went correctly and none of your data is lost.

Windows 8 Usability

There has been a lot of fuss in the media about the loss of the start menu and that the touch-centric design of the Start screen and Metro apps would be confusing for new users, so I was interested to see how my wife and children got on with it. The answer is, just fine. They picked it up incredibly quickly. Within an hour or so they knew how to get to the start screen, how to sign in and out, how to change their account picture and colour scheme (most important to them!), how to organize live tiles, and how to install stuff from the store. They probably don’t know how to use the charms or search for stuff, but they haven’t really needed to do that so far.

I also have to say that I have been won over to the new start screen despite my scepticism. It is a much better version of the start menu, with the ability to organize and pin stuff, better searching, and the live tiles are great for things like calendar and email notifications (as on my Windows Phone).

There were however a few examples of poor usability I encountered. I’m not sure why the search doesn’t automatically show you results in other categories if your selected category has no results. If downloading apps and updates stalls, you get very little feedback as to what is going on. I’d like to see bytes downloaded and remaining to help me troubleshoot. I occasionally got stuck in certain screens, like the store updates screen not letting me go back to the store front page, or the user settings screen not letting me out until I converted an account from Microsoft to local. Hopefully Microsoft are planning to fix a lot of these types of annoyances in updates soon.

Family PC

For my family PCs, there are two things I want. First, to be able to easily control what my children can access, and second, to sync as many settings between the PCs as possible to avoid having to manually configure accounts on each machine.

The first is well handled by Windows 8 family safety, which is essentially an improved Windows Live family safety. Its web filtering lets you choose by category and then add entries to a blacklist or whitelist. You can also control Windows Store apps by rating, but there doesn’t seem to be a blacklist or whitelist for apps, which is a great shame, as some apps I wanted to allow my younger children access to (e.g. OneNote) had a 12+ rating.

Family safety also has settings to control how many hours a child can be on the PC, and select times of day when they can use the computer. This is great, as we can stop them using the PC on weekdays from 8:00-8:30 when they should be getting ready for school. We can also limit them to 3 hours a day on the computer, and since the family safety website lets you link up local accounts, this should work even if they switch between computers during the day.

To get the benefit of syncing settings between PCs, you have to upgrade from a local account to a Microsoft account. I did this for my wife and eldest child who both have hotmail accounts. But it is a little less clear what Microsoft expect me to do for younger children. It would be nice if I could somehow enable their accounts for syncing but manage them via my Microsoft account.

The syncing is a little confusing. It wasn’t clear to me exactly what would be synced. For example, I was expecting my calendar settings to sync, but they didn’t seem to. There is no way to tell whether settings sync has completed or not, so maybe I just needed to wait a bit longer. It also appears that installing a Windows Store app on one PC does not automatically install it on the others, so it would be nice if the Windows Store had a “my apps” area showing apps I had installed on at least one of my computers.

The next step is to take my development laptop through the same procedure, and then I can get to grips with seeing how much of NAudio will work with Windows RT.

Friday 26 October 2012

NAudio 1.6 Release Notes (10th Anniversary Edition!)

I’ve decided it’s time to release NAudio 1.6, as there are a whole load of fixes, features and improvements that have been added since I released NAudio 1.5 which I want to make available to a wider audience (if you’ve been downloading the preview releases on NuGet then you’re already more or less up to date). This marks something of a milestone for the project as it was around this time in 2002 that I first started working on NAudio, using a very early build of SharpDevelop and compiling against .NET 1.0. Some of the code I wrote back then is still in there (the oldest file is MixerInterop.cs, created on 9th December 2002).

NAudio 1.6 can be downloaded from NuGet or CodePlex.

What’s new in NAudio 1.6?

  • WASAPI Loopback Capture allowing you to record what your soundcard is playing (only works on Vista and above)
  • ASIO Recording ASIO doesn’t quite fit with the IWaveIn model used elsewhere in NAudio, so this is implemented in its own special way, with direct access to buffers or easy access to converted samples for most common ASIO configurations. Read more about it here.
  • MultiplexingWaveProvider and MultiplexingSampleProvider allowing easier handling of multi-channel audio. Read more about it here.
  • FadeInOutSampleProvider simplifying the process of fading audio in and out
  • WaveInEvent for more reliable recording on a background thread
  • PlaybackStopped and RecordingStopped events now include an exception. This is very useful for cases when USB audio devices are removed during playback or recording: there is no longer an unhandled exception, and you can detect what happened by looking at the EventArgs (n.b. I’m not sure if adding a property to an EventArgs is a breaking change – recompile your code against NAudio 1.6 to be safe). There’s a short example of handling this after this list.
  • MixingWaveProvider32 for cases when you don’t need the overhead of WaveMixerStream. MixingSampleProvider should be preferred going forwards though.
  • OffsetSampleProvider allows you to delay a stream, skip over part of it, truncate it, and append silence. Read about it here.
  • Added a Readme file to recognise contributors to the project. I’ve tried to include everyone, but probably many are missing, so get in touch if your name’s not on the list.
  • Some code tidy-up (deleting old classes and making some namespace changes – n.b. these are breaking changes if you used those parts of the library, but most users will not notice). This includes retiring WaveOutThreadSafe, which was never finished anyway; WaveOutEvent is preferred to using WaveOut with function callbacks in any case.
  • NuGet package and CodePlex download now use the release build (No more Debug.Asserts if you forget to dispose stuff)
  • Lots of bugfixes, including a concerted effort to close off as many issues in the CodePlex issue tracker as possible.
  • Fix to GSM encoding
  • ID3v2 Tag Creation
  • ASIO multi-channel playback improvements
  • MP3 decoder now flushes on reposition, fixing a potential issue with leftover sound playing when you stop, reposition and then play again.
  • MP3FileReader allows pluggable frame decoders, allowing you to choose the DMO one, or use a fully managed decoder (hopefully more news on this in the near future)
  • WMA Nuget Package (NAudio.Wma) for playing WMA files. Download here.
  • RF64 read support
  • For the full history, you can read the commit notes on CodePlex.
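
As a quick illustration of the new stopped-event behaviour mentioned in the list above, here’s a sketch of detecting device removal during recording. It assumes WaveInEvent and the new Exception property on the RecordingStopped event args:

var waveIn = new WaveInEvent();
waveIn.DataAvailable += (s, e) => { /* process e.Buffer, e.BytesRecorded */ };
waveIn.RecordingStopped += (s, e) =>
{
    // e.Exception is null after a normal stop; non-null if, say,
    // a USB microphone was unplugged mid-recording
    if (e.Exception != null)
    {
        Console.WriteLine("Recording stopped: " + e.Exception.Message);
    }
};
waveIn.StartRecording();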

A big thanks to everyone who has contributed bug fixes, features, bug reports, and even a few donations this year. To date NAudio 1.5 has been downloaded 34,213 times from CodePlex and 3,539 times on NuGet. I’ll be continuing to upload pre-release versions on NuGet, so check for the latest builds here.

What’s coming up next?

I announced last release that I would finally be moving from .NET 2.0 to 3.5, and was persuaded to delay the move. However, this time I will be upgrading the project. The main reason is to enable extension methods (I know there are hacky ways to do this in .NET 2.0). With extension methods I can make the new ISampleProvider interface much easier to use, and it will become a more prominent part of NAudio. I have some nice ideas for a fluent interface for NAudio, allowing you to construct your audio pipeline much more elegantly.

I also have plans to move my development environment over to Windows 8 in the very near future, and a WinRT version of NAudio is on my priority list. I have already implemented fully managed MP3 decoding for Windows RT, and hope to release that as an open source project soon.

There are lots of other features on my todo list for NAudio. One of the big drivers behind the ISampleProvider interface is my desire to make audio effects easier to implement, so I’m hoping to get a collection of audio effects in the next version. I’ve also got a managed resampler which is almost working, but wasn’t quite ready to go into NAudio 1.6.

Anyway, hope you find NAudio useful. Do let me know what cool things you have made with it, and I’ll link to you on the NAudio home page.

Friday 19 October 2012

Understanding and Avoiding Memory Leaks with Event Handlers and Event Aggregators

If you subscribe to an event in C# and forget to unsubscribe, does it cause a memory leak? Always? Never? Or only in special circumstances? Maybe we should make it our practice to always unsubscribe, just in case there is a problem. But then again, the Visual Studio designer-generated code doesn’t bother to unsubscribe, so surely that means it doesn’t matter?

updater.Finished += new EventHandler(OnUpdaterFinished);
updater.Begin();

...

// is this important? do we have to unsubscribe?
updater.Finished -= new EventHandler(OnUpdaterFinished);

Fortunately it is quite easy to see for ourselves whether any memory is leaked when we forget to unsubscribe. Let’s create a simple Windows Forms application that creates lots of objects, and subscribe to an event on each of the objects, without bothering to unsubscribe. To make life easier for ourselves, we’ll keep count of how many get created, and how many get deleted by the garbage collector, by reducing a count in their finalizer, which the garbage collector will call.

Here’s the object we’ll be creating lots of instances of:

public class ShortLivedEventRaiser
{
    public static int Count;
    
    public event EventHandler OnSomething;

    public ShortLivedEventRaiser()
    {
        Interlocked.Increment(ref Count);
    }

    protected void RaiseOnSomething(EventArgs e)
    {
        EventHandler handler = OnSomething;
        if (handler != null) handler(this, e);
    }

    ~ShortLivedEventRaiser()
    {
        Interlocked.Decrement(ref Count);
    }
}

and here’s the code we’ll use to test it:

private void OnSubscribeToShortlivedObjectsClick(object sender, EventArgs e)
{
    int count = 10000;
    for (int n = 0; n < count; n++)
    {
        var shortlived = new ShortLivedEventRaiser();
        shortlived.OnSomething += ShortlivedOnOnSomething;
    }
    shortlivedEventRaiserCreated += count;
}

private void ShortlivedOnOnSomething(object sender, EventArgs eventArgs)
{
    // just to prove that there is no smoke and mirrors, our event handler will do something involving the form
    Text = "Got an event from a short-lived event raiser";
}

I’ve added a background timer on the form, which reports every second how many instances are still in memory. I also added a garbage collect button, to force the garbage collector to do a full collect on demand.
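
The garbage collect button’s handler is trivial; here’s a sketch (the handler name is mine). The WaitForPendingFinalizers call matters because our counters are decremented in finalizers:

private void OnGarbageCollectClick(object sender, EventArgs e)
{
    GC.Collect();                  // force a full collection
    GC.WaitForPendingFinalizers(); // let finalizers run so the counts update
    GC.Collect();                  // reclaim objects that were awaiting finalization
}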

So we click our button eight times to create 80,000 objects, and quite soon after we see the garbage collector run and reduce the object count. It doesn’t delete all of them, but this is not because we have a memory leak; it is simply that the garbage collector doesn’t always do a full collection. If we press our garbage collect button, we’ll see that the number of objects we created drops down to 0. So no memory leaks! We didn’t unsubscribe, and there was nothing to worry about.

[screenshot: object counts returning to zero after a forced garbage collection]

But let’s try something different. Instead of subscribing to an event on my 80,000 objects, I’ll let them subscribe to an event on my Form. Now when we click our button eight times to create 80,000 of these objects, we see that the number in memory stays at 80,000. We can click the Garbage Collect button as many times as we want, and the number won’t go down. We’ve got a memory leak!

Here’s the second class:

public class ShortLivedEventSubscriber
{
    public static int Count;

    public string LatestText { get; private set; }

    public ShortLivedEventSubscriber(Control c)
    {
        Interlocked.Increment(ref Count);
        c.TextChanged += OnTextChanged;
    }

    private void OnTextChanged(object sender, EventArgs eventArgs)
    {
        LatestText = ((Control) sender).Text;
    }

    ~ShortLivedEventSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}

and the code that creates instances of it:

private void OnShortlivedEventSubscribersClick(object sender, EventArgs e)
{
    int count = 10000;
    for (int n = 0; n < count; n++)
    {
        var shortlived2 = new ShortLivedEventSubscriber(this);
    }
    shortlivedEventSubscriberCreated += count;
}

and here’s the result:

[screenshot: 80,000 subscribers remaining in memory despite garbage collection]

So why does this leak, when the first example doesn’t? The answer is that event publishers keep their subscribers alive. If the publisher is short-lived compared to the subscriber, this doesn’t matter. But if the publisher lives on for the lifetime of the application, then every subscriber will also be kept alive. In our first example, the 80,000 objects were the publishers, and they were keeping the main form alive; that didn’t matter, because the main form was supposed to stay alive anyway. But in the second example, the main form was the publisher, and it kept all 80,000 of its subscribers alive long after we stopped caring about them.

The reason for this is that under the hood, the .NET events model is simply an implementation of the observer pattern. In the observer pattern, anyone who wants to “observe” an event registers with the class that raises the event. It keeps hold of a list of observers, allowing it to call each one in turn when the event occurs. So the observed class holds references to all its observers.
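
You can verify this for yourself: every delegate created from an instance method carries a strong reference to its target object, and the publisher’s invocation list is made up of exactly such delegates. A minimal sketch (class and method names are mine):

using System;
using System.Diagnostics;

class Subscriber
{
    public void Handle(object sender, EventArgs e) { }
}

static class DelegateTargetDemo
{
    static void Main()
    {
        var subscriber = new Subscriber();
        EventHandler handler = subscriber.Handle;
        // the delegate's Target is a strong reference to the subscriber,
        // which is what keeps subscribers alive from the publisher's side
        Debug.Assert(ReferenceEquals(handler.Target, subscriber));
    }
}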

What does this mean?

The good news is that in a lot of cases, you are subscribing to an event raised by an object whose lifetime is equal or shorter than that of the subscribing class. That’s why a Windows Forms or WPF control can subscribe to events raised by child controls without the need to unsubscribe, since those child controls will not live beyond the lifetime of their container.

Where it goes wrong is when you have a class that exists for the lifetime of your application, raising events whose subscribers were supposed to be transitory. Imagine your application has an order service which allows you to submit new orders and also raises an event whenever an order’s status changes.

orderService.SubmitOrder(order);
// get notified if an order status is changed
orderService.OrderStatusChanged += OnOrderStatusChanged;

Now this could well cause a memory leak, as whatever class contains the OnOrderStatusChanged event handler will be kept alive for the duration of the application run. And it will also keep alive any objects it holds references to, resulting in a potentially large memory leak. This means that if you subscribe to an event raised by a long-lived service, you must remember to unsubscribe.
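
One common mitigation is to make the subscribing class implement IDisposable and unsubscribe in Dispose. Here’s a sketch, where OrderStatusWatcher and the OrderService type are hypothetical stand-ins for the order service example above:

public class OrderStatusWatcher : IDisposable
{
    private readonly OrderService orderService;

    public OrderStatusWatcher(OrderService orderService)
    {
        this.orderService = orderService;
        orderService.OrderStatusChanged += OnOrderStatusChanged;
    }

    private void OnOrderStatusChanged(object sender, EventArgs e)
    {
        // react to the status change
    }

    public void Dispose()
    {
        // without this, the long-lived service would keep us alive forever
        orderService.OrderStatusChanged -= OnOrderStatusChanged;
    }
}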

What about Event Aggregators?

Event aggregators offer an alternative to traditional C# events, with the additional benefit of completely decoupling the publishers and subscribers of events. Anyone with access to the event aggregator can publish an event onto it, and anyone else with access can subscribe to it.

But are event aggregators subject to memory leaks? Do they leak in the same way that regular event handlers do, or do the rules change? We can test this out for ourselves, using the same approach as before.

For this example, I’ll be using an extremely elegant event aggregator built by José Romaniello using Reactive Extensions. The whole thing is implemented in about a dozen lines of code thanks to the power of the Rx framework.

First, we’ll simulate many short-lived publishers with a single long-lived subscriber (our main form). Here’s our short-lived publisher object:

public class ShortLivedEventPublisher
{
    public static int Count;
    private readonly IEventPublisher publisher;

    public ShortLivedEventPublisher(IEventPublisher publisher)
    {
        this.publisher = publisher;
        Interlocked.Increment(ref Count);
    }

    public void PublishSomething()
    {
        publisher.Publish("Hello world");
    }

    ~ShortLivedEventPublisher()
    {
        Interlocked.Decrement(ref Count);
    }
}

And we’ll also try many short-lived subscribers with a single long-lived publisher (our main form):

public class ShortLivedEventBusSubscriber
{
    public static int Count;
    public string LatestMessage { get; private set; }

    public ShortLivedEventBusSubscriber(IEventPublisher publisher)
    {
        Interlocked.Increment(ref Count);
        publisher.GetEvent<string>().Subscribe(s => LatestMessage = s);
    }

    ~ShortLivedEventBusSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}

What happens when we create thousands of each of these objects?

[screenshot: short-lived publishers collected, short-lived subscribers still in memory]

We have exactly the same memory leak again – publishers can be garbage collected, but subscribers are kept alive. Using an event aggregator hasn’t made the problem any better or worse. Event aggregators should be chosen for the architectural benefits they offer rather than as a way to fix your memory management problems (although as we shall see shortly, they encapsulate one possible fix).

How can I avoid memory leaks?

So how can we write event-driven code in a way that will never leak memory? There are two main approaches you can take.

1. Always remember to unsubscribe if you are a short-lived object subscribing to an event from a long-lived object. The C# language support for events is less than ideal: it offers the += and -= operators for subscribing and unsubscribing, but this can be quite confusing. Here’s how you would unsubscribe from a button click handler…

button.Clicked += new EventHandler(OnButtonClicked);
...
button.Clicked -= new EventHandler(OnButtonClicked);

It’s confusing because the object we unsubscribe with is clearly a different object to the one we subscribed with, but under the hood .NET works out the right thing to do. If you are using the lambda syntax, though, it is a lot less clear what goes on the right-hand side of the -= (see this stack overflow question for more info). You don’t exactly want to have to replicate the same lambda expression in two places.

button.Clicked += (sender, args) => MessageBox.Show("Button was clicked");
// how to unsubscribe?
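
One workaround is to store the lambda in a variable, so that the exact same delegate instance can be passed to both += and -= (a sketch, reusing the hypothetical button from above):

EventHandler handler = (sender, args) => MessageBox.Show("Button was clicked");
button.Clicked += handler;
// ... later ...
button.Clicked -= handler; // the same instance, so the unsubscribe succeeds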

This is where event aggregators can offer a slightly nicer experience. They will typically have an “unregister” or an “unsubscribe” method. The Rx version I used above returns an IDisposable object when you call Subscribe. I like this approach as it means you can either use it in a using block, or store the returned value as a class member and make your class implement IDisposable too, following the standard .NET practice for resource cleanup and flagging up to users of your class that it needs to be disposed.
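
For example, here’s a sketch of that pattern against the IEventPublisher interface from the Rx aggregator used above (the MessageListener name is mine):

public class MessageListener : IDisposable
{
    private readonly IDisposable subscription;

    public MessageListener(IEventPublisher publisher)
    {
        // Subscribe returns an IDisposable representing the subscription
        subscription = publisher.GetEvent<string>()
                                .Subscribe(message => Console.WriteLine(message));
    }

    public void Dispose()
    {
        subscription.Dispose(); // unsubscribes
    }
}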

2. Use weak references. But what if you don’t trust yourself, or your fellow developers to always remember to unsubscribe? Is there another solution? The answer is yes, you can use weak references. A weak reference holds a reference to a .NET object, but allows the garbage collector to delete it if there are no other regular references to it.
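
The behaviour of WeakReference itself is easy to see in a few lines (a minimal sketch; note that the garbage collector is lazy, so the target survives until a collection actually runs, and the JIT may extend lifetimes in debug builds):

var target = new object();
var weak = new WeakReference(target);

target = null; // drop the only strong reference
GC.Collect();  // force a collection

Console.WriteLine(weak.IsAlive); // now false: the target has been collected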

The trouble is, how do you attach a weak event handler to a regular .NET event? The answer is, with great difficulty, although some clever people have come up with ingenious ways of doing this. Event aggregators have an advantage here in that they can offer weak references as a feature if wanted, hiding the complexity of working with weak references from the end user. For example, the “Messenger” class that comes with MVVM Light uses weak references.

So for my final test, I’ll make an event aggregator that uses weak references. I could try to update the Rx version, but to keep things simple, I’ll just make my own basic (and not threadsafe) event aggregator using weak references. Here’s the code:

public class WeakEventAggregator
{
    class WeakAction
    {
        private WeakReference weakReference;
        public WeakAction(object action)
        {
            weakReference = new WeakReference(action);
        }

        public bool IsAlive
        {
            get { return weakReference.IsAlive; }
        }

        public void Execute<TEvent>(TEvent param)
        {
            // the target may have been collected since the IsAlive check
            // in Publish, so guard against a null Target
            var action = (Action<TEvent>) weakReference.Target;
            if (action != null) action.Invoke(param);
        }
    }

    private readonly ConcurrentDictionary<Type, List<WeakAction>> subscriptions
        = new ConcurrentDictionary<Type, List<WeakAction>>();

    public void Subscribe<TEvent>(Action<TEvent> action)
    {
        var subscribers = subscriptions.GetOrAdd(typeof (TEvent), t => new List<WeakAction>());
        subscribers.Add(new WeakAction(action));
    }

    public void Publish<TEvent>(TEvent sampleEvent)
    {
        List<WeakAction> subscribers;
        if (subscriptions.TryGetValue(typeof(TEvent), out subscribers))
        {
            subscribers.RemoveAll(x => !x.IsAlive);
            subscribers.ForEach(x => x.Execute<TEvent>(sampleEvent));
        }
    }
}

Now let’s see if it works by creating some short-lived subscribers that subscribe to events on the WeakEventAggregator. Here’s the object we’ll be using in this last example:

public class ShortLivedWeakEventSubscriber
{
    public static int Count;
    public string LatestMessage { get; private set; }

    public ShortLivedWeakEventSubscriber(WeakEventAggregator weakEventAggregator)
    {
        Interlocked.Increment(ref Count);
        weakEventAggregator.Subscribe<string>(OnMessageReceived);
    }

    private void OnMessageReceived(string s)
    {
        LatestMessage = s;
    }

    ~ShortLivedWeakEventSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}
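
Wiring one of these up and publishing through the aggregator looks like this (a quick sketch; note that only the WeakAction references the delegate, so this relies on no collection happening between Subscribe and Publish):

var aggregator = new WeakEventAggregator();
var subscriber = new ShortLivedWeakEventSubscriber(aggregator);

aggregator.Publish("Hello world");
Console.WriteLine(subscriber.LatestMessage); // "Hello world"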

And we create another 80,000, do a garbage collect, and finally we can have event subscribers that don’t leak memory:

[screenshot: weak event subscribers collected after garbage collection]

My example application is available for download on BitBucket if you want to try it out for yourself.


Conclusion

Although many (possibly most) use cases of events do not leak memory, it is important for all .NET developers to understand the circumstances in which they might leak memory. I’m not sure there is a single “best practice” for avoiding memory leaks. In many cases, simply remembering to unsubscribe when you no longer need to receive notifications is the right thing to do. But if you are using an event aggregator, you’ll be able to take advantage of the benefits of weak references quite easily.

Monday 8 October 2012

Essential Developer Principles #3 - Don’t Repeat Yourself

You’ve probably heard of the “FizzBuzz” test, a handy way of checking whether a programmer is actually able to program. But suppose you used it to test a candidate for a programming job, asking him to perform FizzBuzz for the numbers 1-20 and he wrote the following code:

Console.WriteLine("1");
Console.WriteLine("2");
Console.WriteLine("Fizz");
Console.WriteLine("4");
Console.WriteLine("Buzz");
Console.WriteLine("Fizz");
Console.WriteLine("7");
Console.WriteLine("8");
Console.WriteLine("Fizz");
Console.WriteLine("Buzz");
Console.WriteLine("11");
Console.WriteLine("Fizz");
Console.WriteLine("13");
Console.WriteLine("14");
Console.WriteLine("FizzBuzz");
Console.WriteLine("16");
Console.WriteLine("17");
Console.WriteLine("Fizz");
Console.WriteLine("19");
Console.WriteLine("Buzz");

You would probably not be very impressed. But let’s think for a moment about what it has in its favour:

  • It works! It meets our requirements perfectly, and has no bugs.
  • It has minimal complexity: lower than the “best” solution, which uses if statements nested within a for loop. In fact it is so simple that a non-programmer could understand it and modify it without difficulty.

So why would we not want to hire a programmer whose solution was the above code? Because it is not maintainable. Changing it so that it outputs the numbers 1-100, or uses “Fuzz” and “Bizz”, or writes to a file instead of the console, all ought to be trivial changes, but with the approach above the changes become labour intensive and error-prone.

This code has simultaneously managed to lose information (it doesn’t express why certain numbers are replaced with Fizz or Buzz), and duplicate information:

  • We have a requirement that this program should write its output to the console, but that requirement is expressed not just once, but 20 times. So to change it to write to a file requires 20 existing lines to be modified.
  • We have a requirement that numbers that are a multiple of 3 print “Fizz”, but this requirement is duplicated in six places. Changing it to “Fuzz” requires us to find and modify those six lines.
  • We have a requirement that we print the output for the numbers 1 to 20. This piece of information has not been isolated to one place, so changing the program to do the numbers 10-30 requires some lines to be deleted and others changed.

All these are basic examples of violation of the “Don’t Repeat Yourself” principle, which is often stated in the following way:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

So a good solution to the FizzBuzz problem would have the following pieces of “knowledge” expressed only once in the codebase:

  • What the starting and ending numbers are (i.e. 1 and 20)
  • What the rules are for deciding which numbers to replace with special strings (i.e. multiples of 3 and of 5, with a special case for multiples of both)
  • What the special strings are (i.e. “Fizz” and “Buzz”)
  • Where the output should be sent to (i.e. Console.WriteLine)

If any of these pieces of knowledge are duplicated, we have violated DRY and made a program that is inherently hard to maintain.
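
For illustration, here’s one possible shape of a solution where each of those pieces of knowledge appears exactly once (a sketch; many variations would do):

for (int n = 1; n <= 20; n++)          // the start and end numbers
{
    string output = "";
    if (n % 3 == 0) output += "Fizz";  // the multiple-of-3 rule and its string
    if (n % 5 == 0) output += "Buzz";  // the multiple-of-5 rule and its string
    if (output.Length == 0) output = n.ToString();
    Console.WriteLine(output);         // the output destination
}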

Violating DRY not only means extra work when you need to change one of the pieces of “knowledge”, it means that it is all too easy to get your program into an internally inconsistent state, where you fail to update all the instances of that piece of knowledge. So for example, if you tried to modify the program listed above so that all instances of “Fizz” became “Fuzz”, you would end up with a program that sometimes outputs “Fizz” and sometimes outputs “Fuzz” if you accidentally missed a line.

Obviously in a small application like this, you probably wouldn’t struggle too much to update all the duplicates of a single piece of knowledge, but imagine what happens when the duplication is spread across multiple classes in a large enterprise project. Then it becomes nearly impossible to keep your program in an internally consistent state. And it’s why the DRY principle is so important. Code that violates DRY is hard to maintain no matter how simple it may appear, and almost inevitably leads to internal inconsistencies and contradictions over time.

Friday 5 October 2012

Automatic Fast Feedback

In my first development job, a full compile of the software took several hours on the 286 I was working on. It meant that I had to remember not to accidentally type “nmake all” or I would waste a whole morning waiting for the thing to finish recompiling. These days of course, even vast sprawling codebases can be fully compiled in a couple of minutes at most. And we have come to expect that our IDE will give us red squiggly underlinings revealing compile errors before we even save, let alone compile.

This kind of fast feedback is invaluable for rapid development. I want to know about problems with my code as soon as possible, ideally while I am still typing the code in. The feedback doesn’t just need to be fast, it must be automatic (I shouldn’t have to explicitly ask for it), and in your face (really hard to ignore).

Unit Testing

Unit tests themselves are a form of fast feedback compared with a manual test cycle. But when I got started with unit tests, running them was a manual process. I had to remember to run the tests before checking in. If I forgot, nothing happened, because the build machine wasn’t running them. And the longer you go without running your tests, the more of them break over time, until you end up with a test suite that is no use to anyone anymore.

The first step towards automatic fast feedback is to get the build machine running your unit tests and failing the build if they fail. (And that build of course should be automatically triggered by checking in.) Fear of breaking the build will prompt most developers to run the tests before checking in. But we can do better than this. Running tests should not be something that you have to remember to do, or wait for the build machine to do. They should be run on every local build, giving you much faster feedback that something is broken. In fact, tools like NCrunch take this a step further, running the tests even before you save your code for the ultimate in rapid feedback (it even includes code coverage and performance information).

Coding Standards & Metrics

As well as checking your code compiles and runs, it is also good to get feedback on the quality of your code. Does it adhere to appropriate coding standards, such as using the right naming conventions, or avoiding common pitfalls? Is it over-complicated, with methods too long and types too big? Are you using a foreach statement where a simple LINQ statement would suffice? Traditionally, this type of thing is left to a code review to be picked up. Once again the feedback cycle is too long, and by the time a problem is identified it may be considered too late to respond to it.

There are a number of tools that can automate parts of the code review process. Often these are run after the build process. Tools like FxCop (now integrated into Visual Studio) and NDepend can spot all kinds of violations of coding guidelines, or over-complex code. The feedback must be hard to ignore though. I’ve found that simply running these tools doesn’t make people take notice of their output. Really, the build should break when problems are discovered with these tools, making their findings impossible to ignore.

Even better would be to review your code as you are working. I’ve been trying out ReSharper recently, and I’m impressed. It makes problems with your code very obvious while you are developing. It’s a bit of a shame that it doesn’t seem to have built-in checks for high cyclomatic complexity or over-long methods, although I’d imagine there is a plugin for that somewhere.

Obviously there is still a place for manual code reviews, and tools that run on the build machine, but anything that can be validated while I am in the process of coding should be. Don’t make me wait to find out what’s wrong with my code if I can know now.

UI Development

Another aspect of coding in which I want the fastest possible feedback is UI development. That’s why we have tools like the XAML designer in Visual Studio that preview what you are making while you edit the XAML. I wonder whether even this could be taken further: a live instance of the object you are designing, running with the ability to data bind it to custom data on the fly.

Conclusion

We’re seeing a lot of progress in developer tools that give you fast and automatic feedback, but I think there is still plenty of room for further innovation in this area. It is well known that the closer to development bugs are found, the quicker they are to fix. The logical implication is that we will go fastest when our development IDEs verify as much as possible while we are still typing the code.

I'd be interested to hear your thoughts on additional types of feedback we could get our IDEs to give us while we are in the process of coding.