Friday, 28 February 2014

Should NAudio use SharpDX for MediaFoundation support?

In NAudio 1.7 I introduced Media Foundation support. The main classes are MediaFoundationReader, MediaFoundationEncoder, MediaFoundationResampler and MediaFoundationTransform. Those four classes cover the bulk of what most people would want to do with Media Foundation, but they required a huge amount of interop code, and there are still a few areas left to complete which are proving rather tricky.
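
As a quick illustration of how little code those classes require, here's a minimal sketch of decoding a file with MediaFoundationReader and re-encoding it with MediaFoundationEncoder (the file names are placeholders, and I'm assuming the Microsoft AAC encoder is present, as it is on Windows 7 and later):

// sketch: decode any Media Foundation supported file and re-encode it as AAC
// ("input.mp3" and "output.m4a" are just placeholder names)
using (var reader = new MediaFoundationReader("input.mp3"))
{
    MediaFoundationEncoder.EncodeToAac(reader, "output.m4a");
}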

But I noticed a while ago that SharpDX already makes the vast majority of the Media Foundation interop available in its SharpDX.MediaFoundation assembly. It got me thinking: what if NAudio simply relied on the work done in SharpDX rather than maintaining its own interop wrappers? There are several advantages and disadvantages to taking this approach.

Advantages...

Completeness
A large part of SharpDX is auto-generated, which means that it contains pretty much every interface, every API call, every enumeration, and every Guid you could ever need. By contrast, the NAudio wrappers are written by hand, so a number of bits are missing. In particular, SharpDX has done the hard work of implementing reading and encoding from standard .NET streams, which is something I've really wanted to add to NAudio but which has proved very complicated to implement. SharpDX also includes interop wrappers that give access to the "presentation attributes", which would be a nice enhancement.

Cross-Threading
SharpDX uses a fancy post-compile trick which allows COM objects created on one thread to be used on another. NAudio needs something like this, as the current workaround I use is a bit of a hack (basically recreate the MF source reader on the new thread). However, I will confess to not fully understanding how the SharpDX technique works, which is why I haven’t yet borrowed the technique for NAudio.

Closing
SharpDX has a really nice approach to creating wrappers for COM objects, which makes them disposable. By contrast, I have simply been trying to call Marshal.ReleaseComObject in all the right places in NAudio. I try to hide this as much as possible from users of NAudio, but there are places where it leaks out, leaving the potential for a memory leak.

Collaboration
I always find it slightly depressing that so many open source projects decide to compete rather than collaborate with each other. If we spent less time re-inventing the wheel and more time building upon what others have already created, I’m sure we could make some amazing software.

So I could just decide to let SharpDX do what it does well (wrap COM-based Windows APIs), and then NAudio could focus on providing helpful ways to construct your audio graph (which for me is the interesting bit). What's more, if NAudio used SharpDX, it could result in enhancements being submitted to SharpDX (I've already made a few contributions as part of my experimentation).

Disadvantages...

So there’s a lot in favour of using SharpDX, but there are some disadvantages that need to be weighed up.

Dependencies
We'd need to make NAudio depend on two additional DLLs – SharpDX.dll and SharpDX.MediaFoundation.dll. As well as increasing the overall size of the dependencies, this could also cause some namespace confusion as NAudio and SharpDX both have classes with the same name (WaveFormat is a prime example). There would also be potential for versioning conflicts if NAudio was introduced into a project that was using a different version of SharpDX.

Control
By depending on an external library like SharpDX, we'd lose a measure of control over the interop. Currently NAudio's interop is hand-crafted to be exactly how I want it, but with SharpDX I'd need to submit pull requests for all the tweaks I want. I'd also be dependent on the release schedule of SharpDX - a new version of NAudio would need to depend on an official release of SharpDX, not a special build. Having said that, Alexandre Mutel has been quick to accept my pull requests so far, and there isn't a pressing need for any more to be made.

Nearly There?
It may be that I'm not that far off at all. If I get reading from and writing to a stream working, and can solve the threading issue, then the main motivation for building on SharpDX would disappear. But I can't really tell how close I am to getting it all working just how I want it.

Trying it Out

The good news is that I can trial all of this without needing to change NAudio at all. I've made a new library that depends on NAudio and SharpDX, and offers alternative implementations of NAudio's four Media Foundation classes. I've called them SharpMediaFoundationReader, SharpMediaFoundationEncoder, SharpMediaFoundationResampler and SharpMediaFoundationTransform. It’s called NAudio.SharpMediaFoundation and it’s on GitHub.

Once a version of SharpDX is released containing my customisations, I can release this as its own NuGet package, which would allow people who want or need the features SharpDX offers to take advantage of them without any disruption to the existing NAudio library. So maybe I can have the best of both worlds.

Do let me know if you have any feedback on this approach to MediaFoundation. I’ll announce on my blog when an official release of NAudio.SharpMediaFoundation is available.

Wednesday, 12 February 2014

TypeScript Tetris

Over a decade ago I decided to teach myself Java by writing a Tetris game which I made as an “applet” embedded in a webpage. The game itself worked fine, but Java in the browser never really took off, and so no one was actually able to play my masterpiece.

Every now and then I would think about trying to port it to JavaScript with an HTML5 canvas. But one of the frustrating things about JavaScript (from a Java or C# developer's perspective) is its rather peculiar approach to object inheritance, which I hadn't got round to learning.

So when TypeScript was announced, with its greatly simplified syntax for classes, I thought it might be worth giving this another try. And it turned out to be surprisingly easy to port. In fact I got it working shortly after the first version of TypeScript was released, but never got round to blogging about it. Here are some notes:

The HTML

There wasn't much that needed to be done in the HTML, except to create an HTML5 canvas element for us to draw on. There is probably some cool trick web developers use to pick the optimal size for the game based on your browser size, but I just went for a fixed-size canvas for now.

<canvas id="gameCanvas" width="240" height="360"></canvas>

The Shape Classes

In my original Java code I had a Shape base class, with methods like move, drop, rotate etc, and a series of classes derived from it representing the different Tetris shapes (square, L-shape, T-shape etc).

These were the easiest to convert from Java into TypeScript. The only real difference was that JavaScript's arrays are slightly different to Java arrays: you declare an empty one, and then "push" elements into it. Here's the base "Shape" class (I made my own Point type, as I don't think JavaScript has a built-in one):

class Shape {
    public points: Point[]; // points that make up this shape
    public rotation = 0; // what rotation 0,1,2,3
    public fillColor;

    private move(x: number, y: number): Point[] {
        var newPoints = [];

        for (var i = 0; i < this.points.length; i++) {
            newPoints.push(new Point(this.points[i].x + x, this.points[i].y + y));
        }
        return newPoints;
    }

    public setPos(newPoints: Point[]) {
        this.points = newPoints;
    }

    // return a set of points showing where this shape would be if we dropped it one
    public drop(): Point[] {
        return this.move(0, 1);
    }

    // return a set of points showing where this shape would be if we moved left one
    public moveLeft(): Point[] {
        return this.move(-1, 0);
    }

    // return a set of points showing where this shape would be if we moved right one
    public moveRight(): Point[] {
        return this.move(1, 0);
    }

    // override these
    // return a set of points showing where this shape would be if we rotate it
    public rotate(clockwise: boolean): Point[] {
        throw new Error("This method is abstract");
    }
}

and here’s an example of a derived shape:

class TShape extends Shape {
    constructor (cols: number) {
        super();
        this.fillColor = 'red';
        this.points = [];
        var x = cols / 2;
        var y = -2;
        this.points.push(new Point(x - 1, y));
        this.points.push(new Point(x, y)); // point 1 is our base point
        this.points.push(new Point(x + 1, y));
        this.points.push(new Point(x, y + 1));
    }

    public rotate(clockwise: boolean): Point[] {
        this.rotation = (this.rotation + (clockwise ? 1 : -1) + 4) % 4; // +4 keeps the result non-negative for anticlockwise rotation
        var newPoints = [];
        switch (this.rotation) {
            case 0:
                newPoints.push(new Point(this.points[1].x - 1, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y));
                newPoints.push(new Point(this.points[1].x + 1, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y + 1));
                break;
            case 1:
                newPoints.push(new Point(this.points[1].x, this.points[1].y - 1));
                newPoints.push(new Point(this.points[1].x, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y + 1));
                newPoints.push(new Point(this.points[1].x - 1, this.points[1].y));
                break;
            case 2:
                newPoints.push(new Point(this.points[1].x + 1, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y));
                newPoints.push(new Point(this.points[1].x - 1, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y - 1));
                break;
            case 3:
                newPoints.push(new Point(this.points[1].x, this.points[1].y + 1));
                newPoints.push(new Point(this.points[1].x, this.points[1].y));
                newPoints.push(new Point(this.points[1].x, this.points[1].y - 1));
                newPoints.push(new Point(this.points[1].x + 1, this.points[1].y));
                break;
        }
        return newPoints;
    }
}

Drawing

My application also had a "Grid" class, which was responsible for managing which shapes were present on the board, and for rendering them as well. So it needs the HTMLCanvasElement, and draws onto it with a CanvasRenderingContext2D. Thankfully the methods on the rendering context are quite close to the Java ones: we choose the colour by setting context.fillStyle, and then draw a rectangle with context.fillRect.

class Grid {
    private canvas: HTMLCanvasElement;
    private context: CanvasRenderingContext2D;
    private rows: number;
    public cols: number;
    public blockSize: number;
    private blockColor: any[][];
    public backColor: any;
    private xOffset: number;
    private yOffset: number;

    constructor (rows: number, cols: number, blockSize: number, backColor, canvas: HTMLCanvasElement) {
        this.canvas = canvas;
        this.context = canvas.getContext("2d");
        this.blockSize = blockSize;
        this.blockColor = new Array(rows);
        this.backColor = backColor;
        this.cols = cols;
        this.rows = rows;
        for (var r = 0; r < rows; r++) {
            this.blockColor[r] = new Array(cols);
        }
        this.xOffset = 20;
        this.yOffset = 20;
    }

    public draw(shape: Shape) {
        this.paintShape(shape, shape.fillColor);
    }

    public erase(shape: Shape) {
        this.paintShape(shape, this.backColor);
    }

    private paintShape(shape: Shape, color) {
        for (var i = 0; i < shape.points.length; i++) {
            this.paintSquare(shape.points[i].y, shape.points[i].x, color);
        }
    }

    // check the set of points to see if they are all free
    public isPosValid(points: Point[]) {
        var valid: boolean = true;
        for (var i = 0; i < points.length; i++) {
            if ((points[i].x < 0) ||
                (points[i].x >= this.cols) ||
                (points[i].y >= this.rows)) {
                valid = false;
                break;
            }
            if (points[i].y >= 0) {
                if (this.blockColor[points[i].y][points[i].x] != this.backColor) {
                    valid = false;
                    break;
                }
            }
        }
        return valid;
    }

    public addShape(shape: Shape) {
        for (var i = 0; i < shape.points.length; i++) {
            if (shape.points[i].y < 0) {
                // a block has landed and it isn't even fully on the grid yet
                return false;
            }
            this.blockColor[shape.points[i].y][shape.points[i].x] = shape.fillColor;
        }
        return true;
    }

    public eraseGrid() {
        this.context.fillStyle = this.backColor;
        var width = this.cols * this.blockSize;
        var height = this.rows * this.blockSize;

        this.context.fillRect(this.xOffset, this.yOffset, width, height);
    }

    public clearGrid() {
        for (var row = 0; row < this.rows; row++) {
            for (var col = 0; col < this.cols; col++) {
                this.blockColor[row][col] = this.backColor;
            }
        }
        this.eraseGrid();
    }

    private paintSquare(row, col, color) {
        if (row >= 0) { // don't paint rows that are above the grid
            this.context.fillStyle = color;
            this.context.fillRect(this.xOffset + col * this.blockSize, this.yOffset + row * this.blockSize, this.blockSize - 1, this.blockSize - 1);
        }
    }

    public drawGrid() {
        for (var row = 0; row < this.rows; row++) {
            for (var col = 0; col < this.cols; col++) {
                if (this.blockColor[row][col] !== this.backColor) {
                    this.paintSquare(row, col, this.blockColor[row][col]);
                }
            }
        }
    }

    public paint() {
        this.eraseGrid();
        this.drawGrid();
    }

    // ... a few more methods snipped for brevity

}

Keyboard Handling

The game is controlled by the keyboard, both for moving pieces and for starting/pausing the game. In the Game class constructor I subscribe to the keydown event with the following code:

        var x = this;
        document.onkeydown = function (e) { x.keyhandler(e); }; // the handler would otherwise get the wrong 'this', so capture it here

One slight disappointment with TypeScript is that it doesn’t do anything to fix the weirdness around JavaScript’s “this” keyword. In JavaScript, “this” isn’t always what you think it would be if you’ve got a background in Java/C#. I had to resort to little hacky tricks like this in various places to make sure the right “this” object would be available in the called method. I guess if I did more Javascript this would be second nature to me. Here’s the keyboard handler:

private keyhandler(event: KeyboardEvent) {
    var points;
    if (this.phase == Game.gameState.playing) {
        switch (event.keyCode) {
            case 39: // right
                points = this.currentShape.moveRight();
                break;
            case 37: // left
                points = this.currentShape.moveLeft();
                break;
            case 38: // up arrow
                points = this.currentShape.rotate(true);
                break;
            case 40: // down arrow
                // drop the shape as far as it will go
                points = this.currentShape.drop();
                while (this.grid.isPosValid(points)) {
                    this.currentShape.setPos(points);
                    points = this.currentShape.drop();
                }

                this.shapeFinished();
                break;
        }

        switch (event.keyCode) {
            case 39: // right
            case 37: // left
            case 38: // up
                if (this.grid.isPosValid(points)) {
                    this.currentShape.setPos(points);
                }
                break;
        }
    }

    if (event.keyCode == 113) { // F2
        this.newGame();
        // loop drawScene

        // strange code required to get the right 'this' pointer on callbacks
        // http://stackoverflow.com/questions/2749244/javascript-setinterval-and-this-solution
        this.timerToken = setInterval((function (self) {
            return function () { self.gameTimer(); };
        })(this), this.speed);
    }
    else if (event.keyCode == 114) { // F3
        if (this.phase == Game.gameState.paused) {
            this.hideMessage();
            this.phase = Game.gameState.playing;
            this.grid.paint();
        }
        else if (this.phase == Game.gameState.playing) {
            this.phase = Game.gameState.paused;
            this.showMessage("PAUSED");
        }
    }
    else if (event.keyCode == 115) { // F4
        if ((this.level < 10) && ((this.phase == Game.gameState.playing) || (this.phase == Game.gameState.paused))) {
            this.incrementLevel();
            this.updateLabels();
        }
    }
}

Timer

Any game needs a timer loop, and my original one would drop the falling block and paint the grid each tick. But I was also painting the falling brick from the keyboard handler whenever you moved it. So I refactored things slightly to have a rendering loop, which was doing all the drawing, and then a game timer, which dropped the current block and decided if you had lost, or progressed to the next level.

I discovered that the recommended way to create a render loop in JavaScript seems to be window.requestAnimationFrame (with a fallback to window.setTimeout if that's not available). For the game timer itself I carried on using window.setInterval. Again, you have to jump through some hoops to get the right this pointer:

this.timerToken = setInterval((function (self) {
    return function () { self.gameTimer(); };
})(this), this.speed);
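
For completeness, here's roughly what the render loop itself looks like, sketched from the description above rather than copied verbatim from the game (renderScene is a stand-in for whatever method repaints the grid and the current shape):

private startRenderLoop() {
    var self = this;
    var loop = function () {
        self.renderScene(); // stand-in for the method that redraws the grid and current shape
        if (window.requestAnimationFrame) {
            window.requestAnimationFrame(loop);
        } else {
            window.setTimeout(loop, 1000 / 30); // fall back to roughly 30 fps
        }
    };
    loop();
}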

Messages

The final task was to show messages such as “Game over” and “Game paused”. I was in two minds about how to do this: I could either place a div over the top of my canvas with the message in it, or draw the text directly onto the canvas. I initially went about drawing messages onto the canvas, which did work, but in the end I decided that simply positioning a floating div over the canvas was easier (although even working out the correct CSS incantation for that was quite frustrating).

I came up with this HTML and CSS to give me a floating message.

<div id="container">
    <canvas id="gameCanvas" width="240" height="360" ></canvas>
    <div id="floatingMessage" ></div>
</div>
#container {
    position: relative;
    float: left;
    background-color: cornflowerblue;
}

#gameCanvas {
    height: 360px;
    width: 240px;
}

#floatingMessage {
    position: absolute;
    top: 120px;
    left: 60px;
    width: 120px;
    text-align: center;
    background-color: azure;
}

There’s still a whole host of improvements I ought to make to both the appearance and functionality of this game, but the point of this exercise was simply to see how easily I could get started with TypeScript. And the good news is, it seems pretty straightforward, even for someone without a lot of web development experience.

Try It

I’ve uploaded the code for my TypeScript Tetris app to GitHub, and you can also play it here. (Please note, this is not intended as a “best practices” guide to anything. This was my first ever TypeScript app, ported from my first ever Java app. So there are quite a few rough edges. Feel free to send me your comments, or fork it and show me how it should be done).

Tuesday, 11 February 2014

Don’t Forget to Clean Your Code

I really love the idea of “Clean Code”. Keeping our classes and methods short, naming things well, and making sure our comments are actually helpful all makes a whole lot of sense. What's even better is that it's not a particularly complicated idea. It could be explained in a couple of hours, and even the most junior of developers shouldn't have a problem grasping the main principles. It also doesn't usually take too long to clean up dirty code. Rename some variables, extract some short methods out of long methods, and review the comments. The only slightly tricky bit is extracting a helper class out of a class that has grown too big.

So why don’t I write clean code all the time? Why isn’t all the code I’ve written since I heard about clean code a shining example of code cleanliness?

Well the brutal truth is, when I'm writing code, I'm not thinking about cleanness. I'm completely focused on what I can do to get the feature implemented. So at the point that I'm “done” in the sense of having working code and passing tests, my code is probably in a mess.

The trouble is, there is a huge temptation to skip the “cleaning” phase. After all, there are always plenty of new development tasks waiting that I'm itching to get started on. But if we really believe that we spend ten times as long reading code as writing it, then spending fifteen minutes cleaning our code before we move on will actually make us go much faster in the long run.

Clean code is a habit you need to force yourself to learn. Clean before moving on to the next task. When the test goes green, it's time to clean. Clean before commit. If it's not clean, it's not complete.

PS: be sure to check out Cory House's excellent Pluralsight course on Clean Code.

Saturday, 1 February 2014

Fire and Forget Audio Playback with NAudio

Every so often I get a request for help from someone wanting to create a simple one-liner function that can play an audio file with NAudio. Often they will create something like this:

// warning: this won't work
public void PlaySound(string fileName)
{
    using (var output = new WaveOut())
    using (var player = new AudioFileReader(fileName))
    {
        output.Init(player);
        output.Play();
    }
}

Unfortunately this won’t actually work, since the Play method doesn’t block until playback is finished – it simply begins playback. So you end up disposing the playback device almost instantaneously after beginning playback. A slightly improved option simply waits for playback to stop, creating a blocking call. It uses WaveOutEvent, as the standard WaveOut only works on GUI threads.

// better, but still not ideal 
public void PlaySound(string fileName)
{
    using (var output = new WaveOutEvent())
    using (var player = new AudioFileReader(fileName))
    {
        output.Init(player);
        output.Play();
        while (output.PlaybackState == PlaybackState.Playing)
        {
            Thread.Sleep(500);
        }
    }
}

This approach is better, but is still not ideal, since it now blocks until audio playback is complete. It is also not particularly suitable for scenarios in which you are playing lots of short sounds, such as sound effects in a computer game. The problem is, you don’t really want to be continually opening and closing the soundcard, or having multiple instances of an output device active at once. So in this post, I explain the approach I would typically take for an application that needs to regularly play sounds in a “Fire and Forget” manner.

Use a Single Output Device

First, I’d recommend just opening the output soundcard once. Choose the output model you want (e.g. WasapiOut, WaveOutEvent), and play all the sounds through it.

Use a MixingSampleProvider

This means that to play multiple sounds simultaneously, you'll need a mixer. I always recommend mixing with 32 bit IEEE floating point samples, and in NAudio the best way to do this is with the MixingSampleProvider class.

Play Continuously

Obviously there are times when your application won't be playing any sounds, so you could stop the output device whenever playback is idle and restart it when needed. But it tends to be more straightforward to simply leave the soundcard running, playing silence, and then just add inputs to the mixer. If you set the ReadFully property on MixingSampleProvider to true, its Read method will return buffers full of silence even when there are no mixer inputs. This means that the output device will keep playing continuously.

Use a Single WaveFormat

The one downside of this approach is that you can't mix together audio that doesn't share the same WaveFormat. The bit depth won't be a problem, since we are automatically converting everything to IEEE floating point. But if you are working with a stereo mixer, any mono inputs need to be made stereo before playing them. More annoying is the issue of sample rate conversion. If the files you need to play contain a mixture of sample rates, you'll need to convert them all to a common value. 44.1kHz would be a typical choice, since this is likely to be the sample rate your soundcard is operating at.
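
To give an idea of what that conversion might look like, here's a sketch of a helper that could sit alongside the channel count conversion shown later. It uses WdlResamplingSampleProvider, the fully managed resampler available in more recent NAudio versions; the helper name and where you call it from are my own invention, not part of the engine below:

// illustrative only: bring an input up or down to the mixer's sample rate
private ISampleProvider ConvertToRightSampleRate(ISampleProvider input)
{
    if (input.WaveFormat.SampleRate == mixer.WaveFormat.SampleRate)
    {
        return input;
    }
    // WdlResamplingSampleProvider is managed code, so there are no COM objects to clean up
    return new WdlResamplingSampleProvider(input, mixer.WaveFormat.SampleRate);
}

Bear in mind that resampling on the fly has a CPU cost, so for cached sound effects it's usually better to resample once at load time.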

Dispose Readers

The MixingSampleProvider has a nice feature where it will automatically remove an input whose Read method returns 0. However, it won't attempt to Dispose that input for you, leaving you with a resource leak. The easiest way round this is to create an ISampleProvider implementation that wraps the AudioFileReader and disposes it automatically when it reaches the end.

Cache Sounds

In a computer game scenario, you'll likely be playing the same sounds again and again. You don't really want to keep reading them from disk (and decoding them if they are compressed). So it would be best to load the whole thing into memory, allowing us to replay many copies of it directly from the byte array of PCM data, using a RawSourceWaveStream. This approach has the advantage of allowing you to dispose the reader immediately after caching its contents.
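
For illustration, a minimal sketch of that byte-array approach might look like the class below (CachedPcmSound is a hypothetical name; the CachedSound class later in this post takes an equivalent float-based approach instead):

// sketch only: decode a file once, keep the PCM bytes, and hand out cheap replayable streams
class CachedPcmSound
{
    public byte[] PcmData { get; private set; }
    public WaveFormat WaveFormat { get; private set; }

    public CachedPcmSound(string fileName)
    {
        // MediaFoundationReader decodes compressed formats to PCM; dispose it as soon as the bytes are cached
        using (var reader = new MediaFoundationReader(fileName))
        using (var ms = new MemoryStream())
        {
            WaveFormat = reader.WaveFormat;
            reader.CopyTo(ms);
            PcmData = ms.ToArray();
        }
    }

    // each call creates an independent stream over the same cached bytes
    public IWaveProvider CreateStream()
    {
        return new RawSourceWaveStream(new MemoryStream(PcmData), WaveFormat);
    }
}

The IWaveProvider it returns can be turned into an ISampleProvider for the mixer with the ToSampleProvider extension method.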

Source Code

That’s enough waffling, let's have a look at some code that implements the features mentioned above. Let's start with what I've called “AudioPlaybackEngine”. This is responsible for playing our sounds. You can either call PlaySound with a path to a file, for use with longer pieces of audio, or pass in a “CachedSound” for use with sound effects you want to play many times. I've included automatic conversion from mono to stereo, but no resampling, so if you pass in a file of the wrong sample rate it won't play:

class AudioPlaybackEngine : IDisposable
{
    private readonly IWavePlayer outputDevice;
    private readonly MixingSampleProvider mixer;

    public AudioPlaybackEngine(int sampleRate = 44100, int channelCount = 2)
    {
        outputDevice = new WaveOutEvent();
        mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channelCount));
        mixer.ReadFully = true;
        outputDevice.Init(mixer);
        outputDevice.Play();
    }

    public void PlaySound(string fileName)
    {
        var input = new AudioFileReader(fileName);
        AddMixerInput(new AutoDisposeFileReader(input));
    }

    private ISampleProvider ConvertToRightChannelCount(ISampleProvider input)
    {
        if (input.WaveFormat.Channels == mixer.WaveFormat.Channels)
        {
            return input;
        }
        if (input.WaveFormat.Channels == 1 && mixer.WaveFormat.Channels == 2)
        {
            return new MonoToStereoSampleProvider(input);
        }
        throw new NotImplementedException("Not yet implemented this channel count conversion");
    }

    public void PlaySound(CachedSound sound)
    {
        AddMixerInput(new CachedSoundSampleProvider(sound));
    }

    private void AddMixerInput(ISampleProvider input)
    {
        mixer.AddMixerInput(ConvertToRightChannelCount(input));
    }

    public void Dispose()
    {
        outputDevice.Dispose();
    }

    public static readonly AudioPlaybackEngine Instance = new AudioPlaybackEngine(44100, 2);
}

The CachedSound class is responsible for reading an audio file into memory. Sample rate conversion would be best done in here as part of the caching process, so it minimises the performance hit of resampling during playback.

class CachedSound
{
    public float[] AudioData { get; private set; }
    public WaveFormat WaveFormat { get; private set; }
    public CachedSound(string audioFileName)
    {
        using (var audioFileReader = new AudioFileReader(audioFileName))
        {
            // TODO: could add resampling in here if required
            WaveFormat = audioFileReader.WaveFormat;
            var wholeFile = new List<float>((int)(audioFileReader.Length / 4));
            var readBuffer = new float[audioFileReader.WaveFormat.SampleRate * audioFileReader.WaveFormat.Channels];
            int samplesRead;
            while ((samplesRead = audioFileReader.Read(readBuffer, 0, readBuffer.Length)) > 0)
            {
                wholeFile.AddRange(readBuffer.Take(samplesRead));
            }
            AudioData = wholeFile.ToArray();
        }
    }
}

There’s also a simple helper class to turn a CachedSound into an ISampleProvider that can be easily added to the mixer:

class CachedSoundSampleProvider : ISampleProvider
{
    private readonly CachedSound cachedSound;
    private long position;

    public CachedSoundSampleProvider(CachedSound cachedSound)
    {
        this.cachedSound = cachedSound;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        var availableSamples = cachedSound.AudioData.Length - position;
        var samplesToCopy = Math.Min(availableSamples, count);
        Array.Copy(cachedSound.AudioData, position, buffer, offset, samplesToCopy);
        position += samplesToCopy;
        return (int)samplesToCopy;
    }

    public WaveFormat WaveFormat { get { return cachedSound.WaveFormat; } }
}

And here’s the auto disposing helper for when you are playing from an AudioFileReader directly:

class AutoDisposeFileReader : ISampleProvider
{
    private readonly AudioFileReader reader;
    private bool isDisposed;
    public AutoDisposeFileReader(AudioFileReader reader)
    {
        this.reader = reader;
        this.WaveFormat = reader.WaveFormat;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        if (isDisposed)
            return 0;
        int read = reader.Read(buffer, offset, count);
        if (read == 0)
        {
            reader.Dispose();
            isDisposed = true;
        }
        return read;
    }

    public WaveFormat WaveFormat { get; private set; }
}

With all this set up, we can now achieve our goal of a very simple fire-and-forget syntax for playback:

// on startup:
var zap = new CachedSound("zap.wav");
var boom = new CachedSound("boom.wav");


// later in the app...
AudioPlaybackEngine.Instance.PlaySound(zap);
AudioPlaybackEngine.Instance.PlaySound(boom);
AudioPlaybackEngine.Instance.PlaySound("crash.wav");

// on shutdown
AudioPlaybackEngine.Instance.Dispose();

Further Enhancements

This is far from complete. Obviously I've not added the resampler stage here, and it would be nice to add a master volume level for the audio playback engine, as well as allowing you to set individual sound volumes and panning positions. You could even impose a maximum limit on concurrent sounds. But none of those enhancements are too hard to add.
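
As an example of how one of those enhancements might fit in, here's a sketch of adding a master volume by wrapping the mixer in NAudio's VolumeSampleProvider before initialising the output device (the Volume property exposed here is my own addition, not something in the code above):

// sketch: a master volume for AudioPlaybackEngine
private readonly VolumeSampleProvider masterVolume;

public AudioPlaybackEngine(int sampleRate = 44100, int channelCount = 2)
{
    outputDevice = new WaveOutEvent();
    mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channelCount));
    mixer.ReadFully = true;
    masterVolume = new VolumeSampleProvider(mixer); // sits between the mixer and the soundcard
    outputDevice.Init(masterVolume);
    outputDevice.Play();
}

// 1.0 is full volume, 0.0 is silence
public float Volume
{
    get { return masterVolume.Volume; }
    set { masterVolume.Volume = value; }
}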

I’ll try to get something like this added into the NAudio WPF Demo application, maybe with a few of these enhancements thrown in. For now, you can get at the code from this gist.