Friday, 14 December 2012

Media Foundation Support in NAudio 1.7

I’ve been working on adding Media Foundation support to NAudio 1.7 over the past few weeks. There are two reasons for this. The first is that Windows Store apps can use Media Foundation but cannot use ACM or DMO, which were the two codec APIs that NAudio did support. This was the impetus I needed to finally get round to wrapping the Media Foundation API, having been put off by the thought of wrapping yet another large COM-based API.

The second is simply that Media Foundation is the future for audio codecs in Windows, and includes some codec support that ACM doesn’t offer, such as AAC encode and decode. Windows 8 even comes with an MP3 encoder.

There was a lot of interop code required to get Media Foundation working, but with the help of a few people (most notably ManU from the CodePlex forums), I have now done enough to enable the three main uses of Media Foundation – encoding, decoding and resampling.

NAudio 1.7 will have the following three main classes to support Media Foundation:

  • MediaFoundationReader – this implements WaveStream and basically allows you to play anything that Media Foundation can play. This means MP3, AAC, WMA and WAV, and includes streaming from the internet. It can even pull the audio out of video files. It may be that this class becomes the primary way of playing audio in NAudio going forwards. The output of MediaFoundationReader will always be PCM, so no second converter step is required. It also tries to hide the very awkward problem of COM apartment state issues from you by (optionally) recreating the Media Foundation source reader in the first call to Read (as that might come from an MTA thread). It uses Media Foundation’s support for repositioning, which so far looks pretty good (although it might not get to exactly the point you asked for), and can even reposition in MP3 files you are downloading from the internet. (There’s a usage sketch of all three classes after this list.)
  • MediaFoundationEncoder – I wanted to make encoding as simple as possible, and I’m quite pleased with the API I came up with (you can read a bit about it here). This class includes helper methods for encoding to WMA, MP3 and AAC (assuming you have the encoders), and all you need to do is supply the output filename, the PCM source stream and the desired bitrate. It is also extensible enough to let you use any other encoder you have.
  • MediaFoundationResampler – the most useful of all the Media Foundation effects, and based on MediaFoundationTransform, which you can use to wrap other effects if needed. The resampler in Media Foundation is reasonably good quality and can also change the bit depth and channel count, making it a very useful general purpose class. It is also hugely beneficial for supporting playback and recording with WASAPI in Windows Store applications, since the DMO interface which the existing WASAPI support uses is not allowed.
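
Here’s a rough sketch of how I expect these three classes to be used. This is illustrative only – the exact method names and signatures may still change before 1.7 ships:

// 1. Play anything Media Foundation can decode
using (var reader = new MediaFoundationReader("example.mp3"))
using (var output = new WaveOutEvent())
{
    output.Init(reader);
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
    {
        Thread.Sleep(100);
    }
}

// 2. Encode a PCM source to AAC with one of the encoder helper methods
using (var source = new MediaFoundationReader("input.wav"))
{
    MediaFoundationEncoder.EncodeToAac(source, "output.mp4", 128000);
}

// 3. Resample (and change the channel count) with MediaFoundationResampler
using (var source = new MediaFoundationReader("input.mp3"))
using (var resampler = new MediaFoundationResampler(source, new WaveFormat(16000, 1)))
{
    WaveFileWriter.CreateWaveFile("output.wav", resampler);
}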

I’m also working on adding Windows Store support for NAudio. The main difference is the way you read and write files in Windows Store apps. Currently I’ve got a derived MediaFoundationReaderRT class in the demo, which allows you to open files from an IRandomAccessStream. I’ll probably do a similar thing for the encoder class as well.
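
To give an idea, opening a user-picked file in a Windows Store app would look something like this. This is a sketch based on the current demo code – MediaFoundationReaderRT (and its constructor taking an IRandomAccessStream) is the demo class mentioned above and may well change:

// inside an async method in a Windows Store app
var picker = new FileOpenPicker();
picker.FileTypeFilter.Add(".mp3");
StorageFile file = await picker.PickSingleFileAsync();
if (file != null)
{
    IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read);
    var reader = new MediaFoundationReaderRT(stream); // use like any other WaveStream
}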

I think the code can still be optimised a bit, particularly in the way that Media Buffers are created during resampling, but I am actually very close to completion, and I think this is going to be a fantastic feature for the next version of NAudio. If you want to try it out, you can build the latest NAudio from code yourself, or look out for preview builds of NAudio 1.7 on NuGet. The NAudio WPF Demo app includes demonstrations of all three of the main NAudio Media Foundation classes, plus how to enumerate the Media Foundation codecs.

Friday, 23 November 2012

Enabling NAudio for Windows 8 Store Apps–First Steps

One of my goals for NAudio 1.7 is to have a version available for Windows Store apps. Obviously there are a lot of classes in NAudio that simply won’t work in Windows Store apps, but I have been pleasantly surprised to discover that the bulk of the WASAPI and Media Foundation APIs are allowed. ACM and all the rest of the old MME functions (waveIn…, waveOut…) are obviously not available. I’m not entirely sure what the status of DirectX Media Objects (DMOs) is, but I suspect they are not available.

The first step was simply to create a Windows Store class library and see how much of the existing code I could move across. Here are some notes on the classes that I couldn’t move across:

  • WaveFormatCustomMarshaller - not supported because there is no support for System.Runtime.InteropServices.ICustomMarshaller. This is a bit of a shame, but not a huge loss.
  • ProgressLog and FileAssociations in the Utils folder probably should have been kicked out of the NAudio DLL a long time ago. I’ll mark them as Obsolete
  • Some of the DMO interfaces were marked with System.Security.SuppressUnmanagedCodeSecurity. I can’t remember why I needed to do this. It may be irrelevant if Windows Store apps can’t use DMO. I’ve simply allowed the code to compile by hiding this attribute with #if !NETFX_CORE
  • One really annoying thing is that the Guid constructor has subtly changed, meaning that you can’t pass in unsigned ints and shorts. It meant that I had to put unchecked casts to short or int on lots of them
  • One apparent oversight is that COMException no longer has an error code property. I guess it might be available in the exception data dictionary. It was only needed for DMO so again it may not matter
  • The ApplicationException class has gone away, so I’ve replaced all instances of it with more appropriate exception types (usually InvalidDataException or ArgumentException)
  • The fact that there is no more GetSafeHandle on wait handles means that I will need to rework the WASAPI code to use CreateEventEx.
  • I’ve not bothered to bring across the Cakewalk drum map or sfz support. Both can probably be obsoleted from NAudio.
  • The AcmMp3FrameDecompressor is not supported, and I suspect that Media Foundation will become the main way to decode MP3s (with the other option being fully managed decoders for which I have a working prototype – watch this space)
  • Encoding.ASCIIEncoding is no longer present. Quite a bit of my code uses it, and I’ve switched to UTF8 for now even though it is not strictly correct. I’ll probably have to make my own byte encoding utility for legacy file formats (see the sketch after this list). Also, Encoding.GetString has lost the overload that takes one parameter.
  • I had some very old code still using ArrayList; removing it had some knock-on effects throughout the SoundFont classes (which I suspect very few people actually use).
  • WaveFileChunkReader will have to wait until RiffChunk gets rewritten to not depend on mmioToFourCC
  • Everything in the GUI namespace is Windows Forms based and won’t come across
  • The Midi namespace I have left out for now. The classes for the events should move across, but the file reader/writer will need reworking for the Windows 8 file APIs. I don’t think Windows Store apps have any support for actual MIDI devices, unfortunately.
  • The old Mixer API is not supported at all in Win 8. The WASAPI APIs will give some control over stream volumes.
  • ASIO – I’m assuming ASIO is not supported at all in Windows Store apps
  • The Compression folder has all the ACM stuff. None of this is supported in Windows Store apps.
  • The MmeInterop folder also doesn’t contain anything that is supported in Windows Store apps.
  • SampleProviders - all came across successfully. These are going to be a very important part of NAudio moving forwards
  • MediaFoundation (a new namespace) has come across successfully, and should allow converting MP3, AAC, and WMA to WAV in Windows Store apps. It will also be very useful for regular Windows apps on Vista and above. Expect more features to be added in this area in the near future.
  • WaveInputs – not much of this folder could be ported
    • WasapiCapture - needs rework to not use Thread or WaitHandle. Also I think the way you specify what device to use has changed in Windows Store apps
    • WasapiLoopbackCapture – I don’t know if Windows Store apps are going to support loopback capture, but I will try to see what is possible
    • I may revisit the IWaveIn interface, which I have never really been happy with, and come up with an IRecorder interface in the future, to make it easier to get at the samples as they are recorded (rather than just getting a byte array)
  • WaveOutputs:
    • WasapiOut – should work in Windows Store, but because it uses Thread and EventWaitHandle it needs some reworking
    • AsioOut, WaveOut, WaveOutEvent, DirectSoundOut  - not supported. For Windows Store apps, it will either be WasapiOut or possibly a new output device depending on what I find in the Windows RT API reference.
    • AiffFileWriter, CueWaveFileWriter, WaveFileWriter – all the classes that can write audio files need to be reworked, as you can’t use FileStreams in Windows Store. I need to find a good approach to this that doesn’t require the Windows Store and regular .NET code to completely diverge. Suggestions welcome.
  • WaveProviders – mostly came across with a few exceptions:
    • MixingWaveProvider32 - used unsafe code, MixingSampleProvider should be preferred anyway
    • WaveRecorder – relies on WaveFileWriter which needs rework
  • WaveStream – lots of classes in this folder will need reworking:
    • WaveFileReader, AiffFileReader, AudioFileReader, CueWaveFileReader  all need to support Windows Store file APIs
    • Mp3FileReader – may be less important now we have MediaFoundationReader, but it still can be useful to have a frame by frame decode, so I’ll see if I can make a new IMp3FrameDecompressor that works in Windows Store apps.
    • RiffChunk – to be reworked
    • WaveInBuffer, WaveOutBuffer are no longer applicable (and should really be moved into the MmeInterop folder)
    • Wave32To16Stream – contains unsafe code, should be obsoleted anyway
    • WaveMixerStream32 – contains unsafe code, also should be obsoleted
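
As an example of the byte encoding utility idea mentioned in the Encoding.ASCIIEncoding point above, something along these lines would probably suffice for legacy formats that only use 7-bit ASCII characters. This is a hypothetical helper, not currently part of NAudio:

public static class ByteEncoding
{
    // decode ASCII bytes, mapping anything above 127 to '?'
    public static string GetString(byte[] bytes, int offset, int count)
    {
        var chars = new char[count];
        for (int n = 0; n < count; n++)
        {
            byte b = bytes[offset + n];
            chars[n] = b < 128 ? (char)b : '?';
        }
        return new string(chars);
    }

    // encode a string as ASCII bytes, mapping non-ASCII characters to '?'
    public static byte[] GetBytes(string text)
    {
        var bytes = new byte[text.Length];
        for (int n = 0; n < text.Length; n++)
        {
            char c = text[n];
            bytes[n] = c < 128 ? (byte)c : (byte)'?';
        }
        return bytes;
    }
}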

So as you can see, there is plenty of work still to be done. There are a few additional tasks once I’ve got everything I wanted moved across.

  • I want to investigate all the new Media APIs (e.g. transcoding) and see if NAudio can offer any value-add to using these APIs
  • Make a Windows Store demo app to show off and test what can be done. Would also like to test on a Surface device if possible (not sure if I’ll run into endian issues on ARM devices – anyone know?).
  • Update the nuget package to contain a Windows Store binary

Wednesday, 21 November 2012

How to Drag Shapes on a Canvas in WPF

I recently needed to support dragging shapes on a Canvas in WPF. There are a few detailed articles on this you can read over at CodeProject (see here and here for example). However, I just needed something very simple, so here’s a short code snippet that you can try out using my favourite prototyping tool LINQPad:

var w = new Window();
w.Width = 600;
w.Height = 400;
var c = new Canvas();

Nullable<Point> dragStart = null;

MouseButtonEventHandler mouseDown = (sender, args) => {
    var element = (UIElement)sender;
    dragStart = args.GetPosition(element); 
    element.CaptureMouse();
};
MouseButtonEventHandler mouseUp = (sender, args) => {
    var element = (UIElement)sender;
    dragStart = null; 
    element.ReleaseMouseCapture();
};
MouseEventHandler mouseMove = (sender, args) => {
    if (dragStart != null && args.LeftButton == MouseButtonState.Pressed) {    
        var element = (UIElement)sender;
        var p2 = args.GetPosition(c);
        Canvas.SetLeft(element, p2.X - dragStart.Value.X);
        Canvas.SetTop(element, p2.Y - dragStart.Value.Y);
    }
};
Action<UIElement> enableDrag = (element) => {
    element.MouseDown += mouseDown;
    element.MouseMove += mouseMove;
    element.MouseUp += mouseUp;
};
var shapes = new UIElement [] {
    new Ellipse() { Fill = Brushes.DarkKhaki, Width = 100, Height = 100 },
    new Rectangle() { Fill = Brushes.LawnGreen, Width = 200, Height = 100 },
};


foreach(var shape in shapes) {
    enableDrag(shape);
    c.Children.Add(shape);
}

w.Content = c;
w.ShowDialog();

The key is that for each draggable shape, you handle MouseDown (to begin a mouse “capture”), MouseUp (to end the mouse capture), and MouseMove (to do the move). Obviously if you need dragged objects to come to the top in the Z order, or to be able to auto-scroll as you drag, you’ll need to write a bit more code than this. The next obvious step would be to turn this into an “attached behaviour” that you can add to each object you put onto your canvas.
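
Here’s a rough sketch of what that attached behaviour might look like, using the same capture/move/release logic as the snippet above (DragBehaviour is a hypothetical class name, and I’ve used a second private attached property to remember where the drag started):

public static class DragBehaviour
{
    // set DragBehaviour.IsDraggable="True" on any element inside a Canvas
    public static readonly DependencyProperty IsDraggableProperty =
        DependencyProperty.RegisterAttached("IsDraggable", typeof(bool),
            typeof(DragBehaviour), new PropertyMetadata(false, OnIsDraggableChanged));

    // private attached property remembering where on the element the drag began
    private static readonly DependencyProperty DragStartProperty =
        DependencyProperty.RegisterAttached("DragStart", typeof(Point?),
            typeof(DragBehaviour), new PropertyMetadata(null));

    public static void SetIsDraggable(UIElement element, bool value)
    {
        element.SetValue(IsDraggableProperty, value);
    }

    public static bool GetIsDraggable(UIElement element)
    {
        return (bool)element.GetValue(IsDraggableProperty);
    }

    private static void OnIsDraggableChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = d as UIElement;
        if (element != null && (bool)e.NewValue)
        {
            element.MouseDown += OnMouseDown;
            element.MouseMove += OnMouseMove;
            element.MouseUp += OnMouseUp;
        }
    }

    private static void OnMouseDown(object sender, MouseButtonEventArgs args)
    {
        var element = (UIElement)sender;
        element.SetValue(DragStartProperty, (Point?)args.GetPosition(element));
        element.CaptureMouse();
    }

    private static void OnMouseUp(object sender, MouseButtonEventArgs args)
    {
        var element = (UIElement)sender;
        element.SetValue(DragStartProperty, null);
        element.ReleaseMouseCapture();
    }

    private static void OnMouseMove(object sender, MouseEventArgs args)
    {
        var element = (UIElement)sender;
        var dragStart = (Point?)element.GetValue(DragStartProperty);
        var canvas = VisualTreeHelper.GetParent(element) as Canvas;
        if (dragStart != null && canvas != null && args.LeftButton == MouseButtonState.Pressed)
        {
            var p = args.GetPosition(canvas);
            Canvas.SetLeft(element, p.X - dragStart.Value.X);
            Canvas.SetTop(element, p.Y - dragStart.Value.Y);
        }
    }
}

In XAML you’d then just set local:DragBehaviour.IsDraggable="True" on each shape (or call DragBehaviour.SetIsDraggable(shape, true) in code).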

Tuesday, 6 November 2012

How to use Azure Blob Storage with Azure Web Sites and MVC 4

I have been building a website recently using Azure Web Site hosting and ASP.NET MVC 4. As someone who doesn’t usually do web development, there has been a lot of new stuff for me to learn. I wanted to allow website users to upload images, and store them in Azure. Azure blob storage is perfect for this, but I discovered that a lot of the tutorials assume you are using Azure “web roles” instead of Azure web sites, meaning that a lot of the instructions aren’t applicable. So this is my guide to how I got it working with Azure web sites.

Step 1 – Set up an Azure Storage Account

This is quite straightforward in the Azure portal. Just create a storage account. You do need to provide an account name. Each storage account can have many “containers”, so you can share the same storage account between several sites if you want.

Step 2 – Install the Azure SDK

This is done using the Web Platform Installer. I installed the 1.8 version for VS 2012.

Step 3 – Setup the Azure Storage Emulator

It seems that with Azure web role projects, you can configure Visual Studio to auto-launch the Azure Storage Emulator, but I don’t think that option is available for regular ASP.NET MVC projects hosted on Azure web sites. The emulator is csrun.exe, and it took some tracking down as Microsoft seem to move it with every version of the SDK. It needs to be run with the /devstore command line parameter:

C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe /devstore

To make life easy for me, I added an option to my External Tools list in Visual Studio so I could quickly launch it. Once it starts up, a new icon appears in the system tray, giving you access to the UI, which shows you what ports it is running on:

 

[screenshot: the Storage Emulator system tray UI, showing the ports it is running on]

Step 4 – Set up a Development Connection String

While we are in development, we want to use the emulator, and this requires a connection string. Again, most tutorials assume you are using an “Azure Web Role”, but for ASP.NET MVC sites, we need to go directly to our web.config and enter a new connection string ourselves. The connection string required is fairly simple:

<connectionStrings>
  <add name="StorageConnection" connectionString="UseDevelopmentStorage=true"/>
</connectionStrings>

Step 5 – Upload an image in ASP.NET MVC 4

This is probably very basic stuff to most web developers, but it took me a while to find a good tutorial. This is how to make a basic form in Razor syntax to let the user select and upload a file:

@using (Html.BeginForm("ImageUpload", "Admin", FormMethod.Post, new { enctype = "multipart/form-data" }))
{ 
    <div>Please select an image to upload</div>
    <input name="image" type="file">
    <input type="submit" value="Upload Image" />
}

And now in my AdminController’s ImageUpload method, I can access details of the uploaded file using the Request.Files accessor, which returns an instance of HttpPostedFileBase:

[HttpPost]
public ActionResult ImageUpload()
{
    var image = Request.Files["image"];
    if (image == null)
    {
        ViewBag.UploadMessage = "Failed to upload image";
    }
    else
    {
        ViewBag.UploadMessage = String.Format("Got image {0} of type {1} and size {2}",
            image.FileName, image.ContentType, image.ContentLength);
        // TODO: actually save the image to Azure blob storage
    }
    return View();
}

Step 6 – Add Azure references

Now we need to add a project reference to Microsoft.WindowsAzure.StorageClient, which gives us access to the Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient namespaces.

Step 7 – Connect to Cloud Storage Account

Most tutorials will tell you to connect to your storage account by simply passing in the name of the connection string:

var storageAccount = CloudStorageAccount.FromConfigurationSetting("StorageConnection");

However, because we are using an Azure web site and not a Web Role, this throws an exception ("SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used"). There are a few ways to fix this, but I think the simplest is to call Parse, and pass in your connection string directly:

var storageAccount = CloudStorageAccount.Parse(
    ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);

Step 8 – Create a Container

Our storage account can have many “containers”, so we need to provide a container name. For this example, I’ll call it “productimages” and give it public access.

CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference("productimages");
if (container.CreateIfNotExist())
{
    // configure container for public access
    var permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);
}

The name you select for your container actually has to be a valid DNS name (no capital letters, no spaces), or you’ll get a strange “One of the request inputs is out of range” error.

Note: the code I used as the basis for this part (the Introduction to Cloud Services lab from the Windows Azure Training Kit) holds the CloudBlobClient as a static variable, and has the code to initialise the container in a lock. I don’t know if this is to avoid a race condition of trying to create the container twice, or if creating a CloudBlobClient is expensive and should only be done once if possible. Other accesses to CloudBlobClient are not done within the lock, so it appears to be threadsafe.

Step 9 – Save the image to a blob

Finally we are ready to actually save our image. We need to give it a unique name, for which we will use a Guid followed by the original extension, but you can use whatever naming strategy you like. Including the container name in the blob name here saves us an extra call to blobStorage.GetContainerReference. As well as naming it, we must set its ContentType (also available on our HttpPostedFileBase) and upload the data, which HttpPostedFileBase makes available as a stream.

string uniqueBlobName = string.Format("productimages/image_{0}{1}", Guid.NewGuid().ToString(), Path.GetExtension(image.FileName));
CloudBlockBlob blob = blobStorage.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = image.ContentType;
blob.UploadFromStream(image.InputStream);

Note: One slightly confusing choice you must make is whether to create a block blob or a page blob. Page blobs seem to be targeted at blobs that you need random access read or write (maybe video files for example), which we don’t need for serving images, so block blob seems the best choice.

Step 10 – Finding the blob Uri

Now our image is in blob storage, but where is it? We can find out after creating it, with a call to blob.Uri:

blob.Uri.ToString();

In our Azure storage emulator environment, this returns something like:

http://127.0.0.1:10000/devstoreaccount1/productimages/image_ab16e2d7-5cec-40c9-8683-e3b9650776b3.jpg

Step 11 – Querying the container contents

How can we keep track of what we have put into the container? From within Visual Studio, in the Server Explorer tool window, there should be a node for Windows Azure Storage, which lets you see what containers and blobs are on the emulator. You can also delete blobs from there if you don’t want to do it in code.

The Azure portal has similar capabilities allowing you to manage your blob containers, view their contents, and delete blobs.

If you want to query all the blobs in your container from code, all you need is the following:

var imagesContainer = blobStorage.GetContainerReference("productimages");
var blobs = imagesContainer.ListBlobs();
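
ListBlobs returns an IEnumerable<IListBlobItem>, so, for example, you could print the Uri of every image in the container like this:

foreach (IListBlobItem item in imagesContainer.ListBlobs())
{
    Console.WriteLine(item.Uri);
}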

Step 12 – Create the Real Connection String

So far we’ve done everything against the storage emulator. Now we need to actually connect to our Azure storage. For this we need a real connection string, which looks like this:

DefaultEndpointsProtocol=https;AccountName=YourAccountName;AccountKey=YourAccountKey

The account name is the one you entered in the first step, when you created your Azure storage account. The account key is available in the Azure Portal, by clicking the “Manage Keys” link at the bottom. If you are wondering why there are two keys, and which to use, it is simply so you can change your keys without downtime, so you can use either.

Note: most examples show DefaultEndpointsProtocol as https, which, as far as I can tell, simply means that by default the Uri it returns starts with https. This doesn’t stop you getting at the same image with http. You can change this value in your connection string at any time according to your preference.

Step 13 – Create a Release Web.config transform

To make sure our live site is running against our Azure storage account, we’ll need to create a web.config transform as the Web Deploy wizard doesn’t seem to know about Azure storage accounts and so can’t offer to do this automatically like it can with SQL connection strings.

Here’s my transform in Web.Release.config:

<connectionStrings>
  <add name="StorageConnection"
    connectionString="DefaultEndpointsProtocol=https;AccountName=YourAccountName;AccountKey=YourAccountKey"
    xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
</connectionStrings>

Step 14 – Link Your Storage Account to Your Web Site

Finally, in the Azure portal, we need to ensure that our web site is allowed to access our storage account. Go to your web site, select “Links”, and add a link to your Storage Account, which will set up the necessary firewall permissions.

Now you're ready to deploy your site and use Azure blob storage with an Azure Web Site.

Thursday, 1 November 2012

Using Named Branches in Mercurial

Mercurial offers a variety of approaches to branching, including “named branches”, “bookmarks” (most similar to git), “anonymous branches” and using clones. For a good comparison of these techniques, I suggest reading Steve Losh’s Guide to Mercurial Branching, which explains them well, although it is a little out of date now.

In this article I will walk through the process of how you can use a named branch for maintenance releases, and the workflow for both contributors and for accepting contributions. I’ll explain at the end why I’ve decided that named branches are the best option for NAudio.

Step 1: Creating an initial repository

We’ll start with a fresh repository with just two commits

hg init
// make changes
hg commit -m "first version"
// make changes
hg commit -m "version 1.0 release"

Now we’ve released version 1, let’s start work on version 2. No branches have been made yet.

// more changes
hg commit -m "beginning work on v2"

Step 2: Creating a Maintenance Branch

Now we’ve had a bug report and need to fix version 1, without shipping any of our version 2 changes. We’ll create a branch by going back to the v1.0 commit (revision 1 in our repository) and using the branch command

// go back to v1 release
hg update 1

// create a named branch
hg branch "v1.0 maintenance"

// branch will be created when we commit to it
// fix the bug
hg commit -m "bugfix for v1.0"

Step 3: Switching between branches

To get back to working on our main branch (which is called the default branch in Mercurial), we simply use the update command:

// get back onto the v2 branch:
hg update default
// make changes
hg commit -m "more work on v2"

Step 4: Making forks

Imagine at this point, we have a couple of contributors, who want to fork our repository and make some changes.

Our first contributor makes a clone, and is contributing a v2 feature, so they can simply commit to the default branch:

hg clone public-repo-address my-feature-fork
hg update default // not actually needed
hg commit -m "contributing a new feature in v2"

Our second contributor is offering a bugfix, so they must remember to switch to the named maintenance branch (they can use hg branches to see what branch names are available):

hg clone public-repo-address my-bugfix-fork
hg update "v1.0 maintenance"
hg commit -m "contributing a bugfix for v1.0"

Their commit will be marked as being on the v1.0 maintenance branch (as named branches are stored in the commits, unlike git branches, which are simply pointers to commits).

Step 5: Accepting Contributions

If our contributors issued pull requests now, things would be nice and easy, but let’s imagine that more work has gone on in both branches in the meantime:

hg update default
hg commit -m "another change on v2 after the fork"

hg update "v1.0 maintenance"
hg commit -m "another v1.0 bugfix after the fork"

First, let’s pull in the new v2 feature (n.b. it is often a good idea to use a local integration clone, so that if you want to reject the contribution you can do so easily).

hg pull newfeaturefork
// need to be on the default branch to merge
hg update default
hg merge
// resolve any merge conflicts
hg commit -m "merged in the new feature"

Now we can do the same with the contribution on the maintenance branch (n.b. hg merge won’t do anything if you are still on the default branch, as it knows that the contribution is on a different branch):

hg pull bugfixfork
// get onto the branch we are merging into
hg update "v1.0 maintenance"
hg merge
hg commit -m "merged in a bugfix"

Step 6: Merging from maintenance branch into default

We have a few bugfixes now in our v1.0 branch, and we’d like to get them into v2.0 as well. How do we do that? Go to the default branch and ask to merge from the maintenance branch.

hg update default
hg merge "v1.0 maintenance"
// fix conflicts
hg commit -m "merged in v1.0 bugfixes"

And that is pretty much all you need to know to work with named branches. With your repository in this state you still have two branches (default and v1.0 maintenance) which you can continue work on separately. Here’s a screenshot of a repository which has had the steps above performed on it:

[screenshot: repository history showing the default and v1.0 maintenance branches]

Why Named branches?

I actually think that branching with clones is easier for users to understand than named branches, but for NAudio it is not a viable option. This is because CodePlex allows only one repository per project. I’d either have to create a new project for each NAudio maintenance branch, or create a fork, but both options would cause confusion.

Anonymous branches are a bad idea because you open the door to merge the wrong things together, and it’s hard to keep track of which head relates to what. Their use is mainly limited to short-lived, experimental branches.

Bookmarks are appealing as they are closest to the git philosophy, but because Mercurial requires you to explicitly ask to push them, and there can be naming collisions, I think they are best used simply as short-lived markers for local development (I might do another blog post on the workflow for this).

So with NAudio I am thinking of creating a single maintenance branch for each major release (only created when it is needed). Most people who fork can just ignore the maintenance branch and work exclusively in the default branch (I can always use hg transplant to move a fix into a maintenance branch).

Wednesday, 31 October 2012

Upgrading family PCs to Windows 8

I have five children, and with only three PCs in the house, it can be a challenge to get access to one. Probably the time to buy another has come. However, managing accounts on three PCs for seven people is hardly fun, and a fourth would just add to the workload. So the idea of upgrading to Windows 8 with its accounts that can sync their settings across PCs appeals to me. I decided this week to take advantage of the £25 upgrade offer and set to work on upgrading.

Upgrade Assistant

Normally when I upgrade to a new version of Windows, I will do a fresh install. But the thought of having to recreate seven user accounts and set them all up with their preferences was not appealing, so I opted for an upgrade, keeping all apps and settings. There is also an upgrade that only keeps settings, but I presume that would mean that programs like Office, iTunes, CrashPlan, DropBox etc would get lost along the way.

The upgrade assistant examines your system and warns you about issues you might face. This is a nice touch, and it warned me that I needed to uninstall Microsoft Security Essentials and that there might be a problem with VS2010 SP1. It also told me I needed to deauthorize my computer on iTunes, which it didn’t explain, but this turns out to be a fairly simple task (my wife uses iTunes but I don’t, so I’m not too familiar with the interface). It warns that Windows 8 doesn’t come with DVD playback as standard, although there does seem to be a rather convoluted way to add Windows Media Center for free if you take up the upgrade offer. The one thing the upgrade assistant forgot to tell me was to uninstall Windows Live Family Safety first, which I should have done, as I want to use the Windows 8 family safety settings.

It was a little irritating that the upgrade assistant insists on downloading Windows on every machine you upgrade. It is also worth noting that it won’t give you the option to go from x86 to x64, although for the two 32 bit Windows 7 machines I had, there is no real need for more than 4GB RAM, so it is no big deal.

All told, the installation took around 2 hours. It backs up the old installation to a folder called Windows.old, which can be deleted if you are sure the transition went correctly and none of your data is lost.

Windows 8 Usability

There has been a lot of fuss in the media about the loss of the start menu and that the touch-centric design of the Start screen and Metro apps would be confusing for new users, so I was interested to see how my wife and children got on with it. The answer is, just fine. They picked it up incredibly quickly. Within an hour or so they knew how to get to the start screen, how to sign in and out, how to change their account picture and colour scheme (most important to them!), how to organize live tiles, and how to install stuff from the store. Probably they don’t know how to use the charms or search for stuff, but they haven’t really needed to do that so far.

I also have to say that I have been won over to the new start screen despite my scepticism. It is a much better version of the start menu, with the ability to organize and pin stuff, better searching, and the live tiles are great for things like calendar and email notifications (as on my Windows Phone).

There were however a few examples of poor usability I encountered. I’m not sure why the search doesn’t automatically show you results in other categories if your selected category has no results. If downloading apps and updates stalls, you get very little feedback as to what is going on. I’d like to see bytes downloaded and remaining to help me troubleshoot. I occasionally got stuck in certain screens, like the store updates screen not letting me go back to the store front page, or the user settings screen not letting me out until I converted an account from Microsoft to local. Hopefully Microsoft are planning to fix a lot of these types of annoyances in updates soon.

Family PC

For my family PCs, there are two things I want. First, to be able to easily control what my children can access, and second, to sync as many settings between the PCs as possible to avoid having to manually configure accounts on each machine.

The first is well handled by Windows 8 family safety, which is essentially an improved Windows Live family safety. Its web filtering lets you choose by category and then add entries to a blacklist or whitelist. You can also control Windows Store apps by rating, but there doesn’t seem to be a blacklist or whitelist for apps, which is a great shame, as some apps I wanted to allow my younger children access to (e.g. OneNote), had a 12+ rating.

Family safety also has settings to control how many hours a child can be on the PC, and select times of day when they can use the computer. This is great, as we can stop them using the PC on weekdays from 8:00-8:30 when they should be getting ready for school. We can also limit them to 3 hours a day on the computer, and since the family safety website lets you link up local accounts, this should work even if they switch between computers during the day.

To get the benefit of syncing settings between PCs, you have to upgrade from a local account to a Microsoft account. I did this for my wife and eldest child who both have hotmail accounts. But it is a little less clear what Microsoft expect me to do for younger children. It would be nice if I could somehow enable their accounts for syncing but manage them via my Microsoft account.

The syncing is a little confusing. It wasn’t clear to me what exactly would be synced. For example, I was expecting my calendar settings to sync, but they didn’t seem to. There is no way to tell if settings sync has completed or not, so maybe I just needed to wait a bit longer. It also appears that installing a Windows Store app on one PC does not automatically install it on the others, so it would be nice if the Windows Store had a “my apps” area showing me apps that I had installed on at least one of my computers.

The next step is to take my development laptop through the same procedure, and then I can get to grips with seeing how much of NAudio will work with Windows RT.

Friday, 26 October 2012

NAudio 1.6 Release Notes (10th Anniversary Edition!)

I’ve decided it’s time to release NAudio 1.6, as there are a whole load of fixes, features and improvements that have been added since I released NAudio 1.5 which I want to make available to a wider audience (if you’ve been downloading the preview releases on NuGet then you’re already more or less up to date). This marks something of a milestone for the project as it was around this time in 2002 that I first started working on NAudio, using a very early build of SharpDevelop and compiling against .NET 1.0. Some of the code I wrote back then is still in there (the oldest file is MixerInterop.cs, created on 9th December 2002).

NAudio 1.6 can be downloaded from NuGet or CodePlex.

What’s new in NAudio 1.6?

  • WASAPI Loopback Capture allowing you to record what your soundcard is playing (only works on Vista and above)
  • ASIO Recording ASIO doesn’t quite fit with the IWaveIn model used elsewhere in NAudio, so this is implemented in its own special way, with direct access to buffers or easy access to converted samples for most common ASIO configurations. Read more about it here.
  • MultiplexingWaveProvider and MultiplexingSampleProvider allowing easier handling of multi-channel audio. Read more about it here.
  • FadeInOutSampleProvider simplifying the process of fading audio in and out
  • WaveInEvent for more reliable recording on a background thread
  • PlaybackStopped and RecordingStopped events now include an exception. This is very useful for cases when USB audio devices are removed during playback or record. Now there is no unhandled exception and you can detect this has happened by looking at the EventArgs. (n.b. I’m not sure if adding a property to an EventArgs is a breaking change – recompile your code against NAudio 1.6 to be safe).
  • MixingWaveProvider32 for cases when you don’t need the overhead of WaveMixerStream. MixingSampleProvider should be preferred going forwards though.
  • OffsetSampleProvider allows you to delay a stream, skip over part of it, truncate it, and append silence. Read about it here, and see the sketch after this list.
  • Added a Readme file to recognise contributors to the project. I’ve tried to include everyone, but probably many are missing, so get in touch if your name’s not on the list.
  • Some code tidyup (deleting old classes, some namespace changes. n.b. these are breaking changes if you used these parts of the library, but most users will not notice). This includes retiring WaveOutThreadSafe which was never finished anyway, and WaveOutEvent is preferred to using WaveOut with function callbacks in any case.
  • NuGet package and CodePlex download now use the release build (No more Debug.Asserts if you forget to dispose stuff)
  • Lots of bugfixes, including a concerted effort to close off as many issues in the CodePlex issue tracker as possible.
  • Fix to GSM encoding
  • ID3v2 Tag Creation
  • ASIO multi-channel playback improvements
  • MP3 decoder now flushes on reposition, fixing a potential issue with leftover sound playing when you stop, reposition and then play again.
  • MP3FileReader allows pluggable frame decoders, allowing you to choose the DMO one, or use a fully managed decoder (hopefully more news on this in the near future)
  • WMA NuGet package (NAudio.Wma) for playing WMA files. Download here.
  • RF64 read support
  • For the full history, you can read the commit notes on CodePlex.
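
As a taster, here’s roughly how OffsetSampleProvider is used. This is a sketch – check the linked post for the details:

// skip the first 5 seconds of a file, then play only the next 10
var source = new AudioFileReader("input.mp3"); // AudioFileReader is an ISampleProvider
var offset = new OffsetSampleProvider(source);
int samplesPerSecond = source.WaveFormat.SampleRate * source.WaveFormat.Channels;
offset.SkipOverSamples = 5 * samplesPerSecond;  // skip over the first 5 seconds
offset.TakeSamples = 10 * samplesPerSecond;     // then truncate to 10 seconds
offset.LeadOutSamples = 1 * samplesPerSecond;   // and append 1 second of silence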

A big thanks to everyone who has contributed bug fixes, features, bug reports, and even a few donations this year. To date NAudio 1.5 has been downloaded 34,213 times from CodePlex and 3,539 times on NuGet. I’ll be continuing to upload pre-release versions on NuGet, so check for the latest builds here.

What’s coming up next?

I announced last release that I would finally be moving from .NET 2.0 to 3.5, and was persuaded to delay the move. However, this time I will be upgrading the project. The main reason is to enable extension methods (I know there are hacky ways to do this in .NET 2.0). With extension methods I can make the new ISampleProvider interface much easier to use, and it will become a more prominent part of NAudio. I have some nice ideas for a fluent interface for NAudio, allowing you to construct your audio pipeline much more elegantly.

I also have plans to move my development environment over to Windows 8 in the very near future, and a WinRT version of NAudio is on my priority list. I have already implemented fully managed MP3 decoding for Windows RT, and hope to release that as an open source project soon.

There are lots of other features on my todo list for NAudio. One of the big drivers behind the ISampleProvider interface is my desire to make audio effects easier to implement, so I’m hoping to get a collection of audio effects in the next version. I’ve also got a managed resampler which is almost working, but wasn’t quite ready to go in to NAudio 1.6.

Anyway, hope you find NAudio useful. Do let me know what cool things you have made with it, and I’ll link to you on the NAudio home page.

Friday, 19 October 2012

Understanding and Avoiding Memory Leaks with Event Handlers and Event Aggregators

If you subscribe to an event in C# and forget to unsubscribe, does it cause a memory leak? Always? Never? Or only in special circumstances? Maybe we should make it our practice to always unsubscribe, just in case there is a problem. But then again, the Visual Studio designer-generated code doesn’t bother to unsubscribe, so surely that means it doesn’t matter?

updater.Finished += new EventHandler(OnUpdaterFinished);
updater.Begin();

...

// is this important? do we have to unsubscribe?
updater.Finished -= new EventHandler(OnUpdaterFinished);

Fortunately it is quite easy to see for ourselves whether any memory is leaked when we forget to unsubscribe. Let’s create a simple Windows Forms application that creates lots of objects, and subscribe to an event on each of the objects, without bothering to unsubscribe. To make life easier for ourselves, we’ll keep count of how many get created, and how many get deleted by the garbage collector, by reducing a count in their finalizer, which the garbage collector will call.

Here’s the object we’ll be creating lots of instances of:

public class ShortLivedEventRaiser
{
    public static int Count;
    
    public event EventHandler OnSomething;

    public ShortLivedEventRaiser()
    {
        Interlocked.Increment(ref Count);
    }

    protected void RaiseOnSomething(EventArgs e)
    {
        EventHandler handler = OnSomething;
        if (handler != null) handler(this, e);
    }

    ~ShortLivedEventRaiser()
    {
        Interlocked.Decrement(ref Count);
    }
}

and here’s the code we’ll use to test it:

private void OnSubscribeToShortlivedObjectsClick(object sender, EventArgs e)
{
    int count = 10000;
    for (int n = 0; n < count; n++)
    {
        var shortlived = new ShortLivedEventRaiser();
        shortlived.OnSomething += ShortlivedOnOnSomething;
    }
    shortlivedEventRaiserCreated += count;
}

private void ShortlivedOnOnSomething(object sender, EventArgs eventArgs)
{
    // just to prove that there is no smoke and mirrors, our event handler will do something involving the form
    Text = "Got an event from a short-lived event raiser";
}

I’ve added a background timer on the form, which reports every second how many instances are still in memory. I also added a garbage collect button, to force the garbage collector to do a full collect on demand.

So we click our button a few times to create 80,000 objects, and quite soon after we see the garbage collector run and reduce the object count. It doesn’t delete all of them, but this is not because we have a memory leak. It is simply that the garbage collector doesn’t always do a full collection. If we press our garbage collect button, we’ll see that the number of objects we created drops down to 0. So no memory leaks! We didn’t unsubscribe and there was nothing to worry about.

[screenshot: all the short-lived event raisers are removed from memory after a garbage collect]

But let’s try something different. Instead of subscribing to an event on my 80,000 objects, I’ll let them subscribe to an event on my Form. Now when we click our button eight times to create 80,000 of these objects, we see that the number in memory stays at 80,000. We can click the Garbage Collect button as many times as we want, and the number won’t go down. We’ve got a memory leak!

Here’s the second class:

public class ShortLivedEventSubscriber
{
    public static int Count;

    public string LatestText { get; private set; }

    public ShortLivedEventSubscriber(Control c)
    {
        Interlocked.Increment(ref Count);
        c.TextChanged += OnTextChanged;
    }

    private void OnTextChanged(object sender, EventArgs eventArgs)
    {
        LatestText = ((Control) sender).Text;
    }

    ~ShortLivedEventSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}

and the code that creates instances of it:

private void OnShortlivedEventSubscribersClick(object sender, EventArgs e)
{
    int count = 10000;
    for (int n = 0; n < count; n++)
    {
        var shortlived2 = new ShortLivedEventSubscriber(this);
    }
    shortlivedEventSubscriberCreated += count;
}

and here’s the result:

[screenshot: the short-lived event subscribers all remain in memory, even after a garbage collect]

So why does this leak, when the first doesn’t? The answer is that event publishers keep their subscribers alive. If the publisher is short-lived compared to the subscriber, this doesn’t matter. But if the publisher lives on for the life-time of the application, then every subscriber will also be kept alive. In our first example, the 80,000 objects were the publishers, and they were keeping the main form alive. But it didn’t matter because our main form was supposed to be still alive. But in the second example, the main form was the publisher, and it kept all 80,000 of its subscribers alive, long after we stopped caring about them.

The reason for this is that under the hood, the .NET events model is simply an implementation of the observer pattern. In the observer pattern, anyone who wants to “observe” an event registers with the class that raises the event. It keeps hold of a list of observers, allowing it to call each one in turn when the event occurs. So the observed class holds references to all its observers.

What does this mean?

The good news is that in a lot of cases, you are subscribing to an event raised by an object whose lifetime is equal or shorter than that of the subscribing class. That’s why a Windows Forms or WPF control can subscribe to events raised by child controls without the need to unsubscribe, since those child controls will not live beyond the lifetime of their container.

Where it goes wrong is when you have a class that will exist for the lifetime of your application, raising events whose subscribers were supposed to be transitory. Imagine your application has an order service which allows you to submit new orders and also has an event that is raised whenever an order’s status changes.

orderService.SubmitOrder(order);
// get notified if an order status is changed
orderService.OrderStatusChanged += OnOrderStatusChanged;

Now this could well cause a memory leak, as whatever class contains the OnOrderStatusChanged event handler will be kept alive for the duration of the application run. And it will also keep alive any objects it holds references to, resulting in a potentially large memory leak. This means that if you subscribe to an event raised by a long-lived service, you must remember to unsubscribe.
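
In practice that means pairing every such subscription with a corresponding unsubscribe when the subscriber is finished (for example in its Dispose or Close method):

// when we no longer care about status changes (e.g. our view is closing):
orderService.OrderStatusChanged -= OnOrderStatusChanged;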

What about Event Aggregators?

Event aggregators offer an alternative to traditional C# events, with the additional benefit of completely decoupling the publisher and subscribers of events. Anyone who can access the event aggregator can publish an event onto it, and it can be subscribed to from anyone else with access to the event aggregator.

But are event aggregators subject to memory leaks? Do they leak in the same way that regular event handlers do, or do the rules change? We can test this out for ourselves, using the same approach as before.

For this example, I’ll be using an extremely elegant event aggregator built by José Romaniello using Reactive Extensions. The whole thing is implemented in about a dozen lines of code thanks to the power of the Rx framework.

First, we’ll simulate many short-lived publishers with a single long-lived subscriber (our main form). Here’s our short-lived publisher object:

public class ShortLivedEventPublisher
{
    public static int Count;
    private readonly IEventPublisher publisher;

    public ShortLivedEventPublisher(IEventPublisher publisher)
    {
        this.publisher = publisher;
        Interlocked.Increment(ref Count);
    }

    public void PublishSomething()
    {
        publisher.Publish("Hello world");
    }

    ~ShortLivedEventPublisher()
    {
        Interlocked.Decrement(ref Count);
    }
}

And we’ll also try many short-lived subscribers with a single long-lived publisher (our main form):

public class ShortLivedEventBusSubscriber
{
    public static int Count;
    public string LatestMessage { get; private set; }

    public ShortLivedEventBusSubscriber(IEventPublisher publisher)
    {
        Interlocked.Increment(ref Count);
        publisher.GetEvent<string>().Subscribe(s => LatestMessage = s);
    }

    ~ShortLivedEventBusSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}

What happens when we create thousands of each of these objects?

[screenshot: the event aggregator publishers are garbage collected, but the subscribers are kept alive]

We have exactly the same memory leak again – publishers can be garbage collected, but subscribers are kept alive. Using an event aggregator hasn’t made the problem any better or worse. Event aggregators should be chosen for the architectural benefits they offer rather than as a way to fix your memory management problems (although as we shall see shortly, they encapsulate one possible fix).

How can I avoid memory leaks?

So how can we write event-driven code in a way that will never leak memory? There are two main approaches you can take.

1. Always remember to unsubscribe if you are a short-lived object subscribing to an event from a long-lived object. The C# language support for events is less than ideal. The language offers the += and -= operators for subscribing and unsubscribing, but this can be quite confusing. Here’s how you would unsubscribe from a button click handler…

button.Clicked += new EventHandler(OnButtonClicked);
...
button.Clicked -= new EventHandler(OnButtonClicked);

It’s confusing because the object we unsubscribe with is clearly a different object to the one we subscribed with, but under the hood .NET works out the right thing to do. But if you are using the lambda syntax, it is a lot less clear what goes on the right hand side of the -= (see this Stack Overflow question for more info). You don’t exactly want to keep trying to replicate the same lambda statement in two places.

button.Clicked += (sender, args) => MessageBox.Show("Button was clicked");
// how to unsubscribe?
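
One simple workaround is to store the lambda in a delegate variable, so that the exact same instance can be passed to -= later:

EventHandler handler = (sender, args) => MessageBox.Show("Button was clicked");
button.Clicked += handler;
// ... later: unsubscribing works because it is the same delegate instance
button.Clicked -= handler;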

This is where event aggregators can offer a slightly nicer experience. They will typically have an “unregister” or an “unsubscribe” method. The Rx version I used above returns an IDisposable object when you call subscribe. I like this approach as it means you can either use it in a using block, or store the returned value as a class member, and make your class Disposable too, implementing the standard .NET practice for resource cleanup and flagging up to users of your class that it needs to be disposed.
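
For example, with the Rx-based aggregator used earlier, a subscriber that cleans up after itself might look something like this (OrderWatcher is a made-up example class; IEventPublisher is from José Romaniello’s implementation):

public class OrderWatcher : IDisposable
{
    private readonly IDisposable subscription;
    public string LatestMessage { get; private set; }

    public OrderWatcher(IEventPublisher publisher)
    {
        // Subscribe returns an IDisposable that we hold on to
        subscription = publisher.GetEvent<string>().Subscribe(s => LatestMessage = s);
    }

    public void Dispose()
    {
        // disposing the subscription unsubscribes us from the aggregator
        subscription.Dispose();
    }
}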

2. Use weak references. But what if you don’t trust yourself, or your fellow developers to always remember to unsubscribe? Is there another solution? The answer is yes, you can use weak references. A weak reference holds a reference to a .NET object, but allows the garbage collector to delete it if there are no other regular references to it.

The trouble is, how do you attach a weak event handler to a regular .NET event? The answer is, with great difficulty, although some clever people have come up with ingenious ways of doing this. Event aggregators have an advantage here in that they can offer weak references as a feature if wanted, hiding the complexity of working with weak references from the end user. For example, the “Messenger” class that comes with MVVM Light uses weak references.

So for my final test, I’ll make an event aggregator that uses weak references. I could try to update the Rx version, but to keep things simple, I’ll just make my own basic (and not threadsafe) event aggregator using weak references. Here’s the code:

public class WeakEventAggregator
{
    class WeakAction
    {
        // n.b. this weakly references the delegate itself; since nothing else
        // keeps that delegate alive, a production implementation would usually
        // store the target object and method separately instead
        private WeakReference weakReference;
        public WeakAction(object action)
        {
            weakReference = new WeakReference(action);
        }

        public bool IsAlive
        {
            get { return weakReference.IsAlive; }
        }

        public void Execute<TEvent>(TEvent param)
        {
            // the target may have been collected since the IsAlive check
            var action = (Action<TEvent>) weakReference.Target;
            if (action != null) action.Invoke(param);
        }
    }

    private readonly ConcurrentDictionary<Type, List<WeakAction>> subscriptions
        = new ConcurrentDictionary<Type, List<WeakAction>>();

    public void Subscribe<TEvent>(Action<TEvent> action)
    {
        var subscribers = subscriptions.GetOrAdd(typeof (TEvent), t => new List<WeakAction>());
        subscribers.Add(new WeakAction(action));
    }

    public void Publish<TEvent>(TEvent sampleEvent)
    {
        List<WeakAction> subscribers;
        if (subscriptions.TryGetValue(typeof(TEvent), out subscribers))
        {
            subscribers.RemoveAll(x => !x.IsAlive);
            subscribers.ForEach(x => x.Execute<TEvent>(sampleEvent));
        }
    }
}

Now let’s see if it works by creating some short-lived subscribers that subscribe to events on the WeakEventAggregator. Here’s the object we’ll be using in this last example:

public class ShortLivedWeakEventSubscriber
{
    public static int Count;
    public string LatestMessage { get; private set; }

    public ShortLivedWeakEventSubscriber(WeakEventAggregator weakEventAggregator)
    {
        Interlocked.Increment(ref Count);
        weakEventAggregator.Subscribe<string>(OnMessageReceived);
    }

    private void OnMessageReceived(string s)
    {
        LatestMessage = s;
    }

    ~ShortLivedWeakEventSubscriber()
    {
        Interlocked.Decrement(ref Count);
    }
}

And we create another 80,000, do a garbage collect, and finally we can have event subscribers that don’t leak memory:

[screenshot: the weak event subscribers are all removed from memory after a garbage collect]

My example application is available for download on BitBucket if you want to experiment with this yourself.


Conclusion

Although many (possibly most) use cases of events do not leak memory, it is important for all .NET developers to understand the circumstances in which they might leak memory. I’m not sure there is a single “best practice” for avoiding memory leaks. In many cases, simply remembering to unsubscribe when you are finished wanting to receive messages is the right thing to do. But if you are using an event aggregator you’ll be able to take advantage of the benefits of weak references quite easily.

Monday, 8 October 2012

Essential Developer Principles #3 - Don’t Repeat Yourself

You’ve probably heard of the “FizzBuzz” test, a handy way of checking whether a programmer is actually able to program. But suppose you used it to test a candidate for a programming job, asking him to perform FizzBuzz for the numbers 1-20 and he wrote the following code:

Console.WriteLine("1");
Console.WriteLine("2");
Console.WriteLine("Fizz");
Console.WriteLine("4");
Console.WriteLine("Buzz");
Console.WriteLine("Fizz");
Console.WriteLine("7");
Console.WriteLine("8");
Console.WriteLine("Fizz");
Console.WriteLine("Buzz");
Console.WriteLine("11");
Console.WriteLine("Fizz");
Console.WriteLine("13");
Console.WriteLine("14");
Console.WriteLine("FizzBuzz");
Console.WriteLine("16");
Console.WriteLine("17");
Console.WriteLine("Fizz");
Console.WriteLine("19");
Console.WriteLine("Buzz");

You would probably not be very impressed. But let’s think for a moment about what it has in its favour:

  • It works! It meets our requirements perfectly, and has no bugs.
  • It has minimal complexity – lower than the “best” solution, which uses if statements nested within a for loop. In fact it is so simple that a non-programmer could understand it and modify it without difficulty.

So why would we not want to hire a programmer whose solution was the above code? Because it is not maintainable. Changing it so that it outputs the numbers 1-100, or uses “Fuzz” and “Bizz”, or writes to a file instead of the console, all ought to be trivial changes, but with the approach above the changes become labour intensive and error-prone.

This code has simultaneously managed to lose information (it doesn’t express why certain numbers are replaced with Fizz or Buzz), and duplicate information:

  • We have a requirement that this program should write its output to the console, but that requirement is expressed not just once, but 20 times. So to change it to write to a file requires 20 existing lines to be modified.
  • We have a requirement that numbers that are a multiple of 3 print “Fizz”, but this requirement is duplicated in six places. Changing it to “Fuzz” requires us to find and modify those six lines.
  • We have a requirement that we print the output for the numbers 1 to 20. This piece of information has not been isolated to one place, so changing the program to do the numbers 10-30 requires some lines to be deleted and others changed.

All these are basic examples of violation of the “Don’t Repeat Yourself” principle, which is often stated in the following way:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

So a good solution to the FizzBuzz problem would have the following pieces of “knowledge” expressed only once in the codebase:

  • What the starting and ending numbers are (i.e. 1 and 20)
  • What the rules are for deciding which numbers to replace with special strings (i.e. multiples of 3 and of 5, with a special case for multiples of both 3 and 5)
  • What the special strings are (i.e. “Fizz” and “Buzz”)
  • Where the output should be sent to (i.e. Console.WriteLine)

If any of these pieces of knowledge are duplicated, we have violated DRY and made a program that is inherently hard to maintain.
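
For example, a FizzBuzz solution that keeps each of those pieces of knowledge in exactly one place might look like this (one sketch of many possible):

const int Start = 1;
const int End = 20;

for (int n = Start; n <= End; n++)
{
    string output = "";
    if (n % 3 == 0) output += "Fizz"; // rule: multiples of 3
    if (n % 5 == 0) output += "Buzz"; // rule: multiples of 5 (both rules firing gives "FizzBuzz")
    if (output.Length == 0) output = n.ToString();
    Console.WriteLine(output); // the output destination, in one place
}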

Violating DRY not only means extra work when you need to change one of the pieces of “knowledge”, it means that it is all too easy to get your program into an internally inconsistent state, where you fail to update all the instances of that piece of knowledge. So for example, if you tried to modify the program listed above so that all instances of “Fizz” became “Fuzz”, you would end up with a program that sometimes outputs “Fizz” and sometimes outputs “Fuzz” if you accidentally missed a line.

Obviously in a small application like this, you probably wouldn’t struggle too much to update all the duplicates of a single piece of knowledge, but imagine what happens when the duplication is spread across multiple classes in a large enterprise project. Then it becomes nearly impossible to keep your program in an internally consistent state, which is why the DRY principle is so important. Code that violates DRY is hard to maintain no matter how simple it may appear, and almost inevitably accumulates internal inconsistencies and contradictions over time.

Friday, 5 October 2012

Automatic Fast Feedback

In my first development job, a full compile of the software took several hours on the 286 I was working on. It meant that I had to remember not to accidentally type “nmake all” or I would waste a whole morning waiting for the thing to finish recompiling. These days of course, even vast sprawling codebases can be fully compiled in a couple of minutes at most. And we have come to expect that our IDE will give us red squiggly underlinings revealing compile errors before we even save, let alone compile.

This kind of fast feedback is invaluable for rapid development. I want to know about problems with my code as soon as possible, ideally while I am still typing the code in. The feedback doesn’t just need to be fast, it must be automatic (I shouldn’t have to explicitly ask for it), and in your face (really hard to ignore).

Unit Testing

Unit tests themselves are a form of fast feedback compared with a manual test cycle. But when I got started with unit tests, running them was a manual process. I had to remember to run the tests before checking in. If I forgot, nothing happened, because the build machine wasn’t running them. And the longer you go without running your tests, the more of them break over time, until you end up with a test suite that is no use to anyone any more.

The first step towards automatic fast feedback is to get the build machine running your unit tests and failing the build if they fail. (And that build, of course, should be automatically triggered by checking in.) Fear of breaking the build will prompt most developers to run the tests before checking in. But we can do better than this. Running tests should not be something that you have to remember to do, or wait for the build machine to do. They should be run on every local build, giving you much faster feedback that something is broken. In fact, tools like NCrunch take this a step further, running the tests even before you save your code for the ultimate in rapid feedback (it even includes code coverage and performance information).

Coding Standards & Metrics

As well as checking your code compiles and runs, it is also good to get feedback on the quality of your code. Does it conform to appropriate coding standards, such as using the right naming conventions, or avoiding common pitfalls? Is it over-complicated, with methods too long and types too big? Are you using a foreach statement where a simple LINQ statement would suffice? Traditionally, this type of thing is left to a code review to be picked up. Once again the feedback cycle is too long, and by the time a problem is identified it may be considered too late to respond to it.
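
To give a hypothetical example of the kind of thing such a tool might flag (customers, IsActive and Name are made-up names here):

// building a list by hand with a foreach...
var activeNames = new List<string>();
foreach (var customer in customers)
{
    if (customer.IsActive)
    {
        activeNames.Add(customer.Name);
    }
}

// ...where a simple LINQ statement would suffice:
var activeNames2 = customers.Where(c => c.IsActive).Select(c => c.Name).ToList();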

There are a number of tools that can automate parts of the code review process. Often these are run after the build process. Tools like FxCop (now integrated into Visual Studio) and NDepend can spot all kinds of violations of coding guidelines, or over-complex code. The feedback must be hard to ignore though. I’ve found that simply running these tools doesn’t make people take notice of their output. Really, the build should break when problems are discovered with these tools, making their findings impossible to ignore.

Even better would be to review your code as you are working. I’ve been trying out ReSharper recently, and I’m impressed. It makes problems with your code very obvious while you are developing. It’s a bit of a shame that it doesn’t seem to have built-in checks for high cyclomatic complexity or overly long methods, although I’d imagine there is a plugin for that somewhere.

Obviously there is still a place for manual code reviews, and tools that run on the build machine, but anything that can be validated while I am in the process of coding should be. Don’t make me wait to find out what’s wrong with my code if I can know now.

UI Development

Another aspect of coding in which I want the fastest possible feedback is UI development. That’s why we have tools like the XAML designer in Visual Studio, which previews what you are making while you edit the XAML. I wonder whether even this could be taken further, with a live instance of the control you are designing running, and the ability to data bind it to custom data on the fly.

Conclusion

We’re seeing a lot of progress in developer tools that give you fast and automatic feedback, but I think there is still plenty of room for further innovation in this area. It is well known that the sooner bugs are found after the code is written, the quicker they are to fix. The logical implication is that we will go fastest when our development IDEs verify as much as possible while we are still typing the code.

I'd be interested to hear your thoughts on additional types of feedback we could get our IDEs to give us while we are in the process of coding.

Saturday, 15 September 2012

A XAML Loop Icon

I needed a XAML loop icon recently, so here’s what I came up with:

<Canvas>
  <Path Stroke="Black" StrokeThickness="2" Data="M 5,3 h 20 a 5, 5, 180, 1, 1, 0, 10 h -20 a 5, 5, 180, 1, 1, 0, -10 Z" />
  <Path Stroke="Black" Fill="Black" StrokeThickness="2"  StrokeLineJoin="Round" Data="M 11,0 l 8,3 l -8, 3  Z" />
</Canvas>

And this is what it looks like:

[image: the rendered loop icon]

Friday, 14 September 2012

Using a WrapPanel with ItemsControl and ListView

I recently needed to put some items inside a WrapPanel in WPF, which I wanted to scroll vertically if there were too many items to fit into the available space. I was undecided on whether I wanted to use an ItemsControl or a ListView (which adds selected item capabilities), and discovered that when you switch between the two containers, the technique for getting the WrapPanel working is subtly different.

ItemsControl is the simplest. Just set the ItemsPanelTemplate to be a WrapPanel, and then put the whole thing inside a ScrollViewer (sadly you can’t put the ScrollViewer inside the ItemsPanelTemplate):

<ScrollViewer>    
  <ItemsControl>
    <ItemsControl.ItemsPanel>
      <ItemsPanelTemplate>
        <WrapPanel />
      </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <Rectangle Margin="5" Width="100" Height="100" Fill="Beige" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="PowderBlue" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF9ACD32" />    
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFF6347" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF6495ED" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFFA500" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFFD700" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFF4500" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF316915" />    
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF8E32A7" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFECBADC" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFE6D84F" />
  </ItemsControl>
</ScrollViewer>

This produces the following result:

[image: items wrapping onto multiple rows inside the ItemsControl, with a vertical scrollbar]

But if you switch to a ListView instead, you’ll find that you get a single row of items with a horizontal scrollbar, while the outer ScrollViewer has nothing to do.

[image: a single row of items in the ListView with a horizontal scrollbar]

The solution is to disable the horizontal scrollbar of the ListView itself:

<ListView ScrollViewer.HorizontalScrollBarVisibility="Disabled">

This allows our top-level ScrollViewer to work just as it did with the ItemsControl, and we get selection capabilities as well:

[image: items wrapping in the ListView, with item selection working]

The full XAML for the ListView with vertically scrolling WrapPanel is:

<ScrollViewer>    
  <ListView ScrollViewer.HorizontalScrollBarVisibility="Disabled">
    <ListView.ItemsPanel>
      <ItemsPanelTemplate>
        <WrapPanel />
      </ItemsPanelTemplate>
    </ListView.ItemsPanel>
    <Rectangle Margin="5" Width="100" Height="100" Fill="Beige" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="PowderBlue" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF9ACD32" />    
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFF6347" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF6495ED" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFFA500" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFFD700" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFFF4500" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF316915" />    
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FF8E32A7" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFECBADC" />
    <Rectangle Margin="5" Width="100" Height="100" Fill="#FFE6D84F" />
  </ListView>
</ScrollViewer>

Thursday, 13 September 2012

NAudio OffsetSampleProvider

I’ve added a new class to NAudio ready for the 1.6 release called the OffsetSampleProvider, which is another utility class implementing the ISampleProvider interface.

It simply passes through audio from a source ISampleProvider, but with the following options:

  1. You can delay the start of the source stream by using the DelayBySamples property. So if you want to insert a few seconds of silence, you can use this property.
  2. You can discard a certain number of samples from your source using the SkipOverSamples property.
  3. You can limit the number of samples you read from the source using the TakeSamples property. If this is 0, it means take the whole thing. If it is any other value, it will only pass through the specified number of samples from the source.
  4. You can also add a period of silence to the end by using the LeadOutSamples property.

You can convert a TimeSpan to a number of samples using the following logic (remember to multiply by the channel count). I may add a helper method to OffsetSampleProvider that does this for you in future.

int sampleRate = offsetSampleProvider.WaveFormat.SampleRate;
int channels = offsetSampleProvider.WaveFormat.Channels;
TimeSpan delay = TimeSpan.FromSeconds(1.7); // set to whatever you like
int samplesToDelay = (int)(sampleRate * delay.TotalSeconds) * channels;
offsetSampleProvider.DelayBySamples = samplesToDelay;

It’s a fairly simple class, but it is quite powerful. You might use it for inputs to a mixer, where you want to delay each input by a certain amount to align the audio properly. Or you might use it to cut a piece out of a longer section of audio.

Note that the skipping over is implemented by reading from the source, because ISampleProvider does not support repositioning. So if your source is (say) an AudioFileReader, it may be better to use its Position property to get to the right place before handing it to OffsetSampleProvider.
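
Putting it together, here’s a sketch of pulling a five second clip out of a longer file, starting ten seconds in (the file name is made up, and I’m assuming the constructor takes the source ISampleProvider; treat this as illustrative rather than definitive):

var source = new AudioFileReader("long-recording.mp3"); // hypothetical file
int samplesPerSecond = source.WaveFormat.SampleRate * source.WaveFormat.Channels;
var clip = new OffsetSampleProvider(source);
clip.SkipOverSamples = 10 * samplesPerSecond; // skip the first ten seconds
clip.TakeSamples = 5 * samplesPerSecond;      // then take five seconds

As noted above, with an AudioFileReader you could set Position instead of SkipOverSamples to avoid reading through the skipped audio.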

Wednesday, 12 September 2012

ASIO Recording in NAudio

It’s been a long time coming, but I’ve finally checked in support for ASIO recording in NAudio. It’s not actually too difficult to implement; the main problem has been finding the time for it, and deciding how best to present the feature within the NAudio API. I decided in the end to just do the simplest thing that works. This means simply extending the AsioOut class to allow you to optionally record at the same time as playing.

To initiate ASIO Recording, there is a new InitRecordAndPlayback method, which allows you to specify how many channels to record. If you are only recording, and not playing back as well, then you need to tell it what sample rate you would prefer in a third parameter (leaving the input IWaveProvider null).

this.asioOut.InitRecordAndPlayback(null, recordChannelCount, 44100);

I’ve also added an InputChannelOffset property, which means you can skip over some of the input channels to select just the input you want. Obviously in the future it would be better to let you pick explicitly which inputs you want to record from.

To start recording (and playback), you simply call the Play method, and you must explicitly stop recording yourself. You are notified of newly available audio via the AudioAvailable event. This gives you an AsioAudioAvailableEventArgs, which provides direct access to the ASIO driver’s record buffers for maximum performance, along with an AsioSampleType that tells you what format the audio is in:

public class AsioAudioAvailableEventArgs : EventArgs
{

    /// <summary>
    /// Pointer to a buffer per input channel
    /// </summary>
    public IntPtr[] InputBuffers { get; private set; }

    /// <summary>
    /// Number of samples in each buffer
    /// </summary>
    public int SamplesPerBuffer { get; private set; }

    /// <summary>
    /// Audio format within each buffer
    /// Most commonly this will be one of Int32LSB, Int16LSB, Int24LSB or Float32LSB
    /// </summary>
    public AsioSampleType AsioSampleType { get; private set; }

    /// <summary>
    /// Converts all the recorded audio into a buffer of 32 bit floating point samples, interleaved by channel
    /// </summary>
    /// <returns>The samples as 32 bit floating point, interleaved</returns>
    public float[] GetAsInterleavedSamples() ...
}

GetAsInterleavedSamples is a helper method to make working with the recorded audio easier. It creates a float array and reads the recorded samples into it, converting from 16 or 24 bit if necessary (only the most common ASIO sample types are supported). This saves you from having to write your own unsafe C# code if you are not comfortable with that. Remember that this callback is happening from within the ASIO driver itself, so you must return as quickly as possible. If you crash while in this callback, you could find your ASIO driver fails and you may need to reboot your computer to recover the audio device.
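
To illustrate, here is a minimal sketch of a handler (the queue is just one way of handing the data off for processing elsewhere; the field and method names are my own):

// using System.Collections.Concurrent;
private readonly ConcurrentQueue<float[]> recordedSamples = new ConcurrentQueue<float[]>();

void OnAudioAvailable(object sender, AsioAudioAvailableEventArgs e)
{
    // convert the driver's buffers to interleaved floats...
    float[] samples = e.GetAsInterleavedSamples();
    // ...and get out of the callback as quickly as possible
    recordedSamples.Enqueue(samples);
}

// wiring it up:
asioOut.AudioAvailable += OnAudioAvailable;
asioOut.Play();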

I’ve updated the NAudioDemo application to include a simple demonstration of ASIO recording in action, so if you can, test it out with your soundcard and let me know how you get on. I am aware that the public interface for this feature is a bit of a hack, but time does not permit me at the moment to polish it further. Hopefully a future version of NAudio will feature an improved ASIO API, but for NAudio 1.6 there will at least be a way to do recording with ASIO.

Tuesday, 11 September 2012

New Open Source Project–GraphPad

I’ve open sourced a simple utility I wrote earlier this year, when I was preparing to give a talk on DVCS. It’s called GraphPad, and it’s a simple tool for visualising Directed Acyclic Graphs, with the ability to define your own with a basic syntax, or import history from a Git or Mercurial repository.

[image: GraphPad visualising a commit graph]

The idea behind it was that I would be able to use it in my talk to show what the DAG would look like for various branching and merging scenarios. The really tricky thing is coming up with a good node layout algorithm, and mine is extremely rudimentary.

What’s more, there are some very nice ones being developed now, particularly for Git, such as SeeGit or the very impressive “dragon” tool that comes with the Git Source Control Provider for Visual Studio, both of which are also WPF applications. Mine does at least have the distinction of being the only one I know of that also works with Mercurial.

For now, I am not actively developing this project, but I thought I’d open source it in case anyone has a use for it and wants to take it a bit further (the next step was to make the nodes show the commit details in a nice looking tooltip for Git/Hg repositories, as well as showing which nodes the various branch and head/tip labels are pointing at).

You can find it here on bitbucket.

Monday, 10 September 2012

Windows 8–First Impressions

I thought I’d post a few first impressions having actually used a Windows 8 machine for a day after setting my laptop up to boot Win 8 from VHD.

I like the fact that you can search just by typing in the start screen, although it seems a little unintuitive: sometimes it appears to have found nothing, when actually it has discovered some matching applications and you need to click again to see the matches.

The new task manager is very cool. I’m especially pleased that you can bring a process’s child windows to the front with it, which is something I used to need SysInternals Process Explorer for. It helps you to recover applications that appear to have hung because they are showing a message box or save dialog that is not visible.

I’ve only briefly played with the default applications. I like the fact that the Sports one lets you specify your favourite team. It still needs a bit of tweaking, as there are confusing ‘results’ shown for future fixtures, but it’s a good start. The Weather application using my location to find where I am is also a nice touch. I also like that it comes with a SkyDrive app by default.

My biggest annoyance so far is that when you use Alt-tab or Windows-tab to cycle through open applications, it includes the Metro apps as well. This is a pain because you can’t close Metro apps. If you’ve clicked through a bunch of Metro apps, then they will forever clutter up your alt-tab experience.

The Windows Store is rather sparsely populated, and once again it was not obvious to me how to search. Hopefully it will have good search capabilities as I doubt any app I create will ever be one of the top apps in any category.

The lack of a start menu in desktop mode still feels weird to me, and I still would like the metro interface to be available as a floating window in desktop mode, but I’ll give myself a bit longer to see if I can learn to live with this new way of working.

Saturday, 8 September 2012

Installing and Booting Windows 8 RTM off a VHD (Virtual Hard Disk)

I discovered this week that simply having Visual Studio 2012 is not enough to write WinRT applications (Metro/Windows Store apps); you also need Windows 8 as your operating system. Although the RTM Windows 8 is available now from MSDN, I’m not quite ready yet to switch my main laptop over to it, so a virtual machine is the way forward. Unfortunately, Windows Virtual PC refuses to install Windows 8, and while Oracle VirtualBox can run it, performance isn’t exactly brilliant.

Fortunately, about a year ago, Scott Hanselman wrote a brilliant blog post describing how to use the boot to VHD functionality in Windows 7 to install Windows 8 as an alternative boot option, only running off a VHD you create. I was too nervous to try this out when he originally posted it, but I finally plucked up the courage and did my first Win 8 booting from VHD install today.

The instructions Scott gives are for the Windows 8 developer preview, so I thought I’d briefly review the steps, noting a couple of very small differences for the RTM and a few other things I noticed along the way:

Step 1 – Create a bootable USB stick
The instructions in Scott Hanselman’s post are very easy to follow if you have the Win 8 RTM ISO. Just download the utility and away you go. The space requirements are smaller – you should be able to get away with a 4GB USB stick, although I used a 16GB one.

Step 2 – Make a VHD
This is nice and easy using the “Disk Management” tool in Windows (or follow Scott’s instructions to use the DISKPART command-line). I went for an 80GB dynamically expanding VHD.
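
For reference, the DISKPART route looks something like this (the path and size are just the values I chose; maximum is in megabytes, so 81920 is 80GB):

diskpart
create vdisk file="D:\VMs\win8.vhd" maximum=81920 type=expandable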

Step 3 – Write stuff down
Remember exactly where you put that VHD, and you might also want to have a Windows 8 product key handy. You’ll probably need a bunch of other passwords too, most notably your Windows Live one, and your wireless network.

Step 4 – Boot from the USB stick
Scott says F12 is the key you need to press to select a boot device, and it worked for me on my DELL XPS laptop.

Step 5 – Select new install
The Windows 8 RTM setup offers an upgrade option that the developer preview didn’t, so you need to say that you are doing a fresh install.

Step 6 – Attach the VHD within setup
Scott’s instructions are very good here. The keyboard shortcut you need to remember to get to a command prompt is Shift-F10. The other thing I noticed was that my Win 7 C drive was actually called the E drive, so if you have multiple disk partitions, find out which one your VHD is in before using DISKPART.
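
So from that command prompt, the DISKPART commands look something like this (substitute whatever drive letter and path your VHD actually ended up with):

diskpart
select vdisk file="E:\VMs\win8.vhd"
attach vdisk
exit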

Step 7 – Select the VHD partition to install to
After you have refreshed the list of available partitions to install to, your VHD ought to be obvious (e.g. mine was the 80GB one), but take great care that you choose the right one. Scott’s instructions say that you will get a warning message before you proceed, but I didn’t, leaving me worried that I might have chosen the wrong one!

Step 8 – Let it reboot and finish setting up Windows 8
The process creates a very nice looking new boot menu when your computer starts up, which Scott shows screenshots of. This gives me the choice between Windows 8 and 7. Windows 8 is set as the default, but you get to change the defaults if you like, so I set mine back to default to Win 7. Update: after a few reboots, my computer has switched back to using the boring Windows 7 text-based boot menu. I'm not quite sure why this happened, but possibly it was after I had an audio driver crash in Windows 7.

Step 9 – Accessing data between Win 7 and Win 8
One nice thing is that when you boot from the VHD, your Windows 7 drive containing the VHD gets mounted as a drive within Windows 8, making it easy to access any data (although you do need to let it change the folder permissions to look into your User folder). And vice-versa, on Win 7 you can easily use the Disk Management tool to mount the Windows 8 VHD and copy files on or off it as you wish.

Step 10 – Cleaning up
This is the step I haven’t done yet, but probably at some point I will decide I don’t want to use this VHD anymore. Obviously you wouldn’t want to just delete the VHD as you’d end up with an entry in your boot menu that won’t work anymore. Hopefully there is a way to take the Win 8 VHD out of the boot menu. I’ll update this blog post with a link if I find instructions somewhere.
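
(I believe bcdedit can remove the boot entry, something along these lines, though I haven't tried it yet: list the entries with their identifiers, then delete the Windows 8 one.)

bcdedit /v
bcdedit /delete {identifier-of-the-windows-8-entry}
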
From beginning to end, the process of setting up my laptop to boot to Win 8 from a VHD took me just over an hour, and I would recommend it to anyone wanting to get started with Win 8 development.

Monday, 3 September 2012

Screen-Scraping in C# using LINQPad and HTML Agility Pack

I am a big fan of LINQPad for prototyping small bits of code, but every now and then you find you need to reference another library. LINQPad does allow this, by editing the Query Properties and adding a reference to a DLL in the GAC or at a specific path. But the new LINQPad beta release makes life even easier by allowing you to reference NuGet packages.

I recently wanted to do a simple bit of screen-scraping, to extract the results from a web page containing football scores. By examining the HTML with Firebug, I was quickly able to see that each month’s fixtures were in a series of tables, each with a header row and then one row per result. The HTML looked something like this:

<table class="fixtures" width="502" border="0">
<tbody>
    <tr class="fixture-header">
        <th class="first" scope="col" colspan="6">November</th>
        <th class="goals-for" title="Goals For" scope="col">F</th>
        <th class="goals-against" title="Goals Against" scope="col">A</th>
        <th class="tv-channel" scope="col"></th>
        <th class="last" scope="col"> </th>
    </tr>
    <tr class="home">
        <td class="first ">04</td>
        <td class="month"> Wed </td>
        <td class="fixture-icon"></td>
        <td class="competition">UEFA Champions League</td>
        <td class="home-away">H</td>
        <td class="bold opposition "><a href="...">AZ Alkmaar</a></td>
        <td class="goals-for"> 4 </td>
        <td class="goals-against"> 1 </td>
        <td class="tv-channel"> </td>
        <td class="menu-button" valign="middle"></td>
    </tr>
    <!-- more rows... -->
</tbody>
</table>

To be able to navigate around HTML in .NET, by far the best library I have found is the HTML Agility Pack. Adding this to your LINQPad Query is very simple with the new beta. Press F4 to bring up Query Properties, then click Add NuGet, find the Html Agility Pack in the list and click Add To Query.

Now we are ready to load the document and find all the tables with the class of “fixtures”. You can use a special XPath syntax to do this in one step:

var web = new HtmlAgilityPack.HtmlWeb();
var doc = web.Load("http://www.arsenal.com/fixtures/fixtures-reports?season=2009-2010&x=11&y=15");
foreach(var fixturesTable in doc.DocumentNode.SelectNodes("//table[@class='fixtures']"))
{
    // ...
}

Having got each fixture table, I then ignore the top row (which has a class of “fixture-header”), and use the classes on each of the table columns to pull out the information I am interested in. Finally, I use the handy Dump extension method in LINQPad to output my information to the results window:

foreach(var fixture in fixturesTable.SelectNodes("tr"))
{
    var fixtureClass = fixture.Attributes["class"];
    // header rows have class of fixture-header
    if(fixtureClass == null || !fixtureClass.Value.Contains("fixture-header"))
    {
        var day = fixture.SelectSingleNode("td[@class='first ']").InnerText.Trim();
        var month = fixture.SelectSingleNode("td[@class='month']").InnerText.Trim();
        var venue = fixture.SelectSingleNode("td[@class='home-away']").InnerText.Trim();
        var oppositionNode = fixture.SelectNodes("td").FirstOrDefault(n => n.Attributes["class"].Value.Contains("opposition"));
        var opposition = oppositionNode.SelectSingleNode("a").InnerText.Trim();
        var matchReportUrl = oppositionNode.SelectSingleNode("a").Attributes["href"].Value.Trim();
        var goalsFor = fixture.SelectSingleNode("td[@class='goals-for']").InnerText.Trim();
        var goalsAgainst = fixture.SelectSingleNode("td[@class='goals-against']").InnerText.Trim();
        string.Format("{0} {1} {2} {3} {4}-{5}", day, month, venue, opposition, goalsFor, goalsAgainst).Dump();
    }
}

This gives me the data I am after, and from here it is easy to convert it into any other format I want such as XML or insert it into a database (something that LINQPad also makes very easy).

04 Sun H Blackburn Rovers 6-2
17 Sat H Birmingham City 3-1
20 Tue A AZ Alkmaar 1-1
25 Sun A West Ham United 2-2
28 Wed H Liverpool 2-1
31 Sat H Tottenham Hotspur 3-0
04 Wed H AZ Alkmaar 4-1
07 Sat A Wolverhampton W. 4-1

And the really nice thing about using LINQPad for this rather than creating a Visual Studio project is that the entire thing is stored in a single compact .linq file without all the extraneous noise of sln, csproj, AssemblyInfo files etc.