Friday 23 November 2012

Enabling NAudio for Windows 8 Store Apps–First Steps

One of my goals for NAudio 1.7 is to have a version available for Windows Store apps. Obviously there are a lot of classes in NAudio that simply won’t work with Windows Store apps, but I have been pleasantly surprised to discover that the bulk of the WASAPI and Media Foundation APIs are allowed. ACM and the old MME functions (waveIn…, waveOut…) are obviously not available. I’m not entirely sure what the status of DirectX Media Objects (DMOs) is, but I suspect they are not available.

The first step was simply to create a Windows Store class library and see how much of the existing code I could move across. Here are some notes on the classes that couldn’t move across:

  • WaveFormatCustomMarshaller - not supported because there is no support for System.Runtime.InteropServices.ICustomMarshaler. This is a bit of a shame, but not a huge loss.
  • ProgressLog and FileAssociations in the Utils folder probably should have been kicked out of the NAudio DLL a long time ago. I’ll mark them as Obsolete
  • Some of the DMO interfaces were marked with System.Security.SuppressUnmanagedCodeSecurity. I can’t remember why I needed to do this. It may be irrelevant if Windows Store apps can’t use DMO. I’ve simply allowed the code to compile by hiding this attribute with #if !NETFX_CORE
  • One really annoying thing is that the Guid constructor has subtly changed, meaning that you can’t pass in unsigned ints and shorts. This meant I had to add unchecked casts to short or int in lots of places (see the sketch after this list)
  • One apparent oversight is that COMException no longer has an error code property. I guess it might be available in the exception data dictionary. It was only needed for DMO so again it may not matter
  • The ApplicationException class has gone away, so I’ve replaced all instances of it with more appropriate exception types (usually InvalidDataException or ArgumentException)
  • The fact that there is no more GetSafeHandle on wait handles means that I will need to rework the WASAPI code to use CreateEventEx.
  • I’ve not bothered to bring across the Cakewalk drum map or sfz support. Both can probably be obsoleted from NAudio.
  • The AcmMp3FrameDecompressor is not supported, and I suspect that Media Foundation will become the main way to decode MP3s (with the other option being fully managed decoders for which I have a working prototype – watch this space)
  • ASCII encoding (Encoding.ASCII) is no longer present. Quite a bit of my code uses it, and I’ve switched to UTF8 for now even though it is not strictly correct. I’ll probably have to make my own byte encoding utility for legacy file formats. Also, Encoding.GetString has lost the overload that takes one parameter.
  • I had some very old code still using ArrayList; removing it had some knock-on effects throughout the SoundFont classes (which I suspect very few people actually use).
  • WaveFileChunkReader will have to wait until RiffChunk gets rewritten to not depend on mmioToFourCC
  • Everything in the GUI namespace is Windows Forms and won’t come across
  • The Midi namespace I have left out for now. The classes for the events should move across, and the file reader/writer will need reworking for the Windows 8 file APIs. I don’t think Windows Store apps have any support for actual MIDI devices, unfortunately.
  • The old Mixer API is not supported at all in Win 8. The WASAPI APIs will give some control over stream volumes.
  • ASIO – I’m assuming ASIO is not supported at all in Windows Store apps
  • The Compression folder has all the ACM stuff. None of this is supported in Windows Store apps.
  • The MmeInterop folder also doesn’t contain anything that is supported in Windows Store apps.
  • SampleProviders - all came across successfully. These are going to be a very important part of NAudio moving forwards
  • MediaFoundation (a new namespace) has come across successfully, and should allow converting MP3, AAC, and WMA to WAV in Windows Store apps. It will also be very useful for regular Windows apps on Vista and above. Expect more features to be added in this area in the near future.
  • WaveInputs – not much of this folder could be ported
    • WasapiCapture - needs rework to not use Thread or WaitHandle. Also I think the way you specify what device to use has changed in Windows Store apps
    • WasapiLoopbackCapture – I don’t know if Windows Store apps are going to support loopback capture, but I will try to see what is possible
    • I may revisit the IWaveIn interface, which I have never really been happy with, and come up with an IRecorder interface in the future, to make it easier to get at the samples as they are recorded (rather than just getting a byte array)
  • WaveOutputs:
    • WasapiOut – should work in Windows Store, but because it uses Thread and EventWaitHandle it needs some reworking
    • AsioOut, WaveOut, WaveOutEvent, DirectSoundOut  - not supported. For Windows Store apps, it will either be WasapiOut or possibly a new output device depending on what I find in the Windows RT API reference.
    • AiffFileWriter, CueWaveFileWriter, WaveFileWriter – all the classes that can write audio files need to be reworked, as you can’t use FileStreams in Windows Store apps. I need to find a good approach to this that doesn’t require the Windows Store and regular .NET code to completely diverge. Suggestions welcome.
  • WaveProviders – mostly came across with a few exceptions:
    • MixingWaveProvider32 - used unsafe code, MixingSampleProvider should be preferred anyway
    • WaveRecorder – relies on WaveFileWriter which needs rework
  • WaveStream – lots of classes in this folder will need reworking:
    • WaveFileReader, AiffFileReader, AudioFileReader, CueWaveFileReader – all need to support the Windows Store file APIs
    • Mp3FileReader – may be less important now we have MediaFoundationReader, but it still can be useful to have a frame by frame decode, so I’ll see if I can make a new IMp3FrameDecompressor that works in Windows Store apps.
    • RiffChunk – to be reworked
    • WaveInBuffer, WaveOutBuffer are no longer applicable (and should really be moved into the MmeInterop folder)
    • Wave32To16Stream – contains unsafe code, should be obsoleted anyway
    • WaveMixerStream32 – contains unsafe code, also should be obsoleted
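
To illustrate the Guid constructor issue mentioned in the list above, the interop GUID declarations now end up looking something like this. This is just a sketch for illustration (the GUID value and names are made up, not actual NAudio code):

using System;

// On the full .NET Framework the Guid constructor accepts uint/ushort arguments,
// but in the Windows Store profile only the int/short overload is available,
// so large hex literals need unchecked casts to compile.
static class InteropGuids
{
    public static readonly Guid ExampleInterfaceId = new Guid(
        unchecked((int)0xDEADBEEF),
        unchecked((short)0xABCD),
        unchecked((short)0x8765),
        0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77);
}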

So as you can see, there is plenty of work still to be done. There are a few additional tasks once I’ve got everything I wanted moved across.

  • I want to investigate all the new Media APIs (e.g. transcoding) and see if NAudio can offer any value-add to using these APIs
  • Make a Windows Store demo app to show off and test what can be done. Would also like to test on a Surface device if possible (not sure if I’ll run into endian issues on ARM devices – anyone know?).
  • Update the nuget package to contain a Windows Store binary

Wednesday 21 November 2012

How to Drag Shapes on a Canvas in WPF

I recently needed to support dragging shapes on a Canvas in WPF. There are a few detailed articles on this you can read over at CodeProject (see here and here for example). However, I just needed something very simple, so here’s a short code snippet that you can try out using my favourite prototyping tool LINQPad:

var w = new Window();
w.Width = 600;
w.Height = 400;
var c = new Canvas();

Nullable<Point> dragStart = null;

MouseButtonEventHandler mouseDown = (sender, args) => {
    var element = (UIElement)sender;
    dragStart = args.GetPosition(element); 
    element.CaptureMouse();
};
MouseButtonEventHandler mouseUp = (sender, args) => {
    var element = (UIElement)sender;
    dragStart = null; 
    element.ReleaseMouseCapture();
};
MouseEventHandler mouseMove = (sender, args) => {
    if (dragStart != null && args.LeftButton == MouseButtonState.Pressed) {    
        var element = (UIElement)sender;
        var p2 = args.GetPosition(c);
        Canvas.SetLeft(element, p2.X - dragStart.Value.X);
        Canvas.SetTop(element, p2.Y - dragStart.Value.Y);
    }
};
Action<UIElement> enableDrag = (element) => {
    element.MouseDown += mouseDown;
    element.MouseMove += mouseMove;
    element.MouseUp += mouseUp;
};
var shapes = new UIElement [] {
    new Ellipse() { Fill = Brushes.DarkKhaki, Width = 100, Height = 100 },
    new Rectangle() { Fill = Brushes.LawnGreen, Width = 200, Height = 100 },
};


foreach(var shape in shapes) {
    enableDrag(shape);
    c.Children.Add(shape);
}

w.Content = c;
w.ShowDialog();

The key is that for each draggable shape, you handle MouseDown (to begin a mouse “capture”), MouseUp (to end the mouse capture), and MouseMove (to do the move). Obviously if you need dragged objects to come to the top in the Z order, or to be able to auto-scroll as you drag, you’ll need to write a bit more code than this. The next obvious step would be to turn this into an “attached behaviour” that you can add to each object you put onto your canvas.
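
If you do go down the attached behaviour route, here’s a rough sketch of the kind of thing I mean. The DragBehavior class and IsDraggable property are names I’ve made up for illustration, and it assumes the draggable elements are direct children of a Canvas:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

// A rough sketch of a drag "attached behaviour" for shapes on a Canvas.
public static class DragBehavior
{
    public static readonly DependencyProperty IsDraggableProperty =
        DependencyProperty.RegisterAttached("IsDraggable", typeof(bool), typeof(DragBehavior),
            new PropertyMetadata(false, OnIsDraggableChanged));

    public static void SetIsDraggable(UIElement element, bool value) { element.SetValue(IsDraggableProperty, value); }
    public static bool GetIsDraggable(UIElement element) { return (bool)element.GetValue(IsDraggableProperty); }

    // only one drag can be in progress at a time thanks to mouse capture
    private static Point? dragStart;

    private static void OnIsDraggableChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = (UIElement)d;
        if ((bool)e.NewValue)
        {
            element.MouseDown += OnMouseDown;
            element.MouseMove += OnMouseMove;
            element.MouseUp += OnMouseUp;
        }
        else
        {
            element.MouseDown -= OnMouseDown;
            element.MouseMove -= OnMouseMove;
            element.MouseUp -= OnMouseUp;
        }
    }

    private static void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        var element = (UIElement)sender;
        dragStart = e.GetPosition(element); // offset of the grab point within the shape
        element.CaptureMouse();
    }

    private static void OnMouseUp(object sender, MouseButtonEventArgs e)
    {
        dragStart = null;
        ((UIElement)sender).ReleaseMouseCapture();
    }

    private static void OnMouseMove(object sender, MouseEventArgs e)
    {
        if (dragStart == null || e.LeftButton != MouseButtonState.Pressed) return;
        var element = (UIElement)sender;
        var canvas = VisualTreeHelper.GetParent(element) as Canvas;
        if (canvas == null) return;
        var position = e.GetPosition(canvas); // position relative to the canvas
        Canvas.SetLeft(element, position.X - dragStart.Value.X);
        Canvas.SetTop(element, position.Y - dragStart.Value.Y);
    }
}

With something like that in place, each shape just needs local:DragBehavior.IsDraggable="True" in XAML (or a call to DragBehavior.SetIsDraggable from code) instead of wiring up the three handlers by hand.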

Tuesday 6 November 2012

How to use Azure Blob Storage with Azure Web Sites and MVC 4

I have been building a website recently using Azure Web Site hosting and ASP.NET MVC 4. As someone who doesn’t usually do web development, there has been a lot of new stuff for me to learn. I wanted to allow website users to upload images, and store them in Azure. Azure blob storage is perfect for this, but I discovered that a lot of the tutorials assume you are using Azure “web roles” instead of Azure web sites, meaning that a lot of the instructions aren’t applicable. So this is my guide to how I got it working with Azure web sites.

Step 1 – Set up an Azure Storage Account

This is quite straightforward in the Azure portal. Just create a storage account; you do need to provide an account name. Each storage account can have many “containers”, so you can share the same storage account between several sites if you want.

Step 2 – Install the Azure SDK

This is done using the Web Platform Installer. I installed the 1.8 version for VS 2012.

Step 3 – Setup the Azure Storage Emulator

It seems that with Azure web role projects, you can configure Visual Studio to auto-launch the Azure Storage Emulator, but I don’t think that option is available for regular ASP.NET MVC projects hosted on Azure web sites. The emulator is csrun.exe, and it took some tracking down as Microsoft seem to move it with every version of the SDK. It needs to be run with the /devstore command line parameter:

C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe /devstore

To make life easy for me, I added an option to my External Tools list in Visual Studio so I could quickly launch it. Once it starts up, a new icon appears in the system tray, giving you access to the UI, which shows you what ports it is running on:

 

[Screenshot: Storage Emulator UI]

Step 4 – Set up a Development Connection String

While we are in development, we want to use the emulator, and this requires a connection string. Again, most tutorials assume you are using an “Azure Web Role”, but for ASP.NET MVC sites, we need to go directly to our web.config and enter a new connection string ourselves. The connection string required is fairly simple:

<connectionStrings>
  <add name="StorageConnection" connectionString="UseDevelopmentStorage=true"/>
</connectionStrings>

Step 5 – Upload an image in ASP.NET MVC 4

This is probably very basic stuff to most web developers, but it took me a while to find a good tutorial. This is how to make a basic form in Razor syntax to let the user select and upload a file:

@using (Html.BeginForm("ImageUpload", "Admin", FormMethod.Post, new { enctype = "multipart/form-data" }))
{ 
    <div>Please select an image to upload</div>
    <input name="image" type="file">
    <input type="submit" value="Upload Image" />
}

And now in my AdminController’s ImageUpload method, I can access details of the uploaded file using the Request.Files accessor, which returns an instance of HttpPostedFileBase:

[HttpPost]
public ActionResult ImageUpload()
{
    string path = @"D:\Temp\";

    var image = Request.Files["image"];
    if (image == null)
    {
        ViewBag.UploadMessage = "Failed to upload image";
    }
    else
    {
        ViewBag.UploadMessage = String.Format("Got image {0} of type {1} and size {2}",
            image.FileName, image.ContentType, image.ContentLength);
        // TODO: actually save the image to Azure blob storage
    }
    return View();
}

Step 6 – Add Azure references

Now we need to add a project reference to Microsoft.WindowsAzure.StorageClient, which gives us access to the Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient namespaces.

Step 7 – Connect to Cloud Storage Account

Most tutorials will tell you to connect to your storage account by simply passing in the name of the connection string:

var storageAccount = CloudStorageAccount.FromConfigurationSetting("StorageConnection");

However, because we are using an Azure web site and not a Web Role, this throws an exception ("SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used"). There are a few ways to fix this, but I think the simplest is to call Parse, and pass in your connection string directly:

var storageAccount = CloudStorageAccount.Parse(
    ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
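
For reference, the using directives this code relies on (assuming the 1.x StorageClient assembly that ships with the SDK) are roughly:

using System.Configuration;                 // ConfigurationManager
using Microsoft.WindowsAzure;               // CloudStorageAccount
using Microsoft.WindowsAzure.StorageClient; // CloudBlobClient, CloudBlobContainer, CloudBlockBlob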

Step 8 – Create a Container

Our storage account can have many “containers”, so we need to provide a container name. For this example, I’ll call it “productimages” and give it public access.

blobStorage = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference("productimages");
if (container.CreateIfNotExist())
{
    // configure container for public access
    var permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);
}

The name you select for your container actually has to be a valid DNS name (no capital letters, no spaces), or you’ll get a strange “One of the request inputs is out of range” error.

Note: the code I used as the basis for this part (the Introduction to Cloud Services lab from the Windows Azure Training Kit) holds the CloudBlobClient as a static variable, and has the code to initialise the container in a lock. I don’t know if this is to avoid a race condition of trying to create the container twice, or if creating a CloudBlobClient is expensive and should only be done once if possible. Other accesses to CloudBlobClient are not done within the lock, so it appears to be threadsafe.
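
To illustrate that pattern, a simplified version might look like this (the member names are mine, not the lab’s):

// requires references to System.Configuration and Microsoft.WindowsAzure.StorageClient
private static CloudBlobClient blobStorage;
private static readonly object initLock = new object();

private static CloudBlobClient GetBlobStorage()
{
    lock (initLock)
    {
        if (blobStorage == null)
        {
            // create the client and make sure the container exists, exactly once
            var account = CloudStorageAccount.Parse(
                ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
            var client = account.CreateCloudBlobClient();
            var container = client.GetContainerReference("productimages");
            if (container.CreateIfNotExist())
            {
                var permissions = container.GetPermissions();
                permissions.PublicAccess = BlobContainerPublicAccessType.Container;
                container.SetPermissions(permissions);
            }
            blobStorage = client;
        }
    }
    return blobStorage;
}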

Step 9 – Save the image to a blob

Finally we are ready to actually save our image. We need to give it a unique name, for which we will use a Guid followed by the original extension, but you can use whatever naming strategy you like. Including the container name in the blob name here saves us an extra call to blobStorage.GetContainerReference. As well as naming it, we must set its ContentType (also available on our HttpPostedFileBase) and upload the data, which HttpPostedFileBase makes available as a stream.

string uniqueBlobName = string.Format("productimages/image_{0}{1}", Guid.NewGuid().ToString(), Path.GetExtension(image.FileName));
CloudBlockBlob blob = blobStorage.GetBlockBlobReference(uniqueBlobName);
blob.Properties.ContentType = image.ContentType;
blob.UploadFromStream(image.InputStream);

Note: One slightly confusing choice you must make is whether to create a block blob or a page blob. Page blobs seem to be targeted at scenarios where you need random-access reads or writes (maybe video files, for example), which we don’t need for serving images, so a block blob seems the best choice.

Step 10 – Finding the blob Uri

Now our image is in blob storage, but where is it? We can find out after creating it, with a call to blob.Uri:

blob.Uri.ToString();

In our Azure storage emulator environment, this returns something like:

http://127.0.0.1:10000/devstoreaccount1/productimages/image_ab16e2d7-5cec-40c9-8683-e3b9650776b3.jpg

Step 11 – Querying the container contents

How can we keep track of what we have put into the container? From within Visual Studio, in the Server Explorer tool window, there should be a node for Windows Azure Storage, which lets you see what containers and blobs are on the emulator. You can also delete blobs from there if you don’t want to do it in code.

The Azure portal has similar capabilities allowing you to manage your blob containers, view their contents, and delete blobs.

If you want to query all the blobs in your container from code, all you need is the following:

var imagesContainer = blobStorage.GetContainerReference("productimages");
var blobs = imagesContainer.ListBlobs();
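
Each item comes back as an IListBlobItem, so for example you could dump the address of everything in the container like this:

foreach (var blob in imagesContainer.ListBlobs())
{
    // each IListBlobItem exposes the Uri of a blob (or blob directory) in the container
    Console.WriteLine(blob.Uri);
}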

Step 12 – Create the Real Connection String

So far we’ve done everything against the storage emulator. Now we need to actually connect to our Azure storage. For this we need a real connection string, which looks like this:

DefaultEndpointsProtocol=https;AccountName=YourAccountName;AccountKey=YourAccountKey

The account name is the one you entered in the first step, when you created your Azure storage account. The account key is available in the Azure Portal, by clicking the “Manage Keys” link at the bottom. If you are wondering why there are two keys and which one to use, it is simply so that you can regenerate one key without downtime while the other is still in use; you can use either.

Note: most examples show DefaultEndpointsProtocol as https, which, as far as I can tell, simply means that by default the Uri it returns starts with https. This doesn’t stop you getting at the same image over http. You can change this value in your connection string at any time according to your preference.

Step 13 – Create a Release Web.config transform

To make sure our live site is running against our Azure storage account, we’ll need to create a web.config transform as the Web Deploy wizard doesn’t seem to know about Azure storage accounts and so can’t offer to do this automatically like it can with SQL connection strings.

Here’s my transform in Web.Release.config:

<connectionStrings>
  <add name="StorageConnection"
    connectionString="DefaultEndpointsProtocol=https;AccountName=YourAccountName;AccountKey=YourAccountKey"
    xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
</connectionStrings>

Step 14 – Link Your Storage Account to Your Web Site

Finally, in the Azure portal, we need to ensure that our web site is allowed to access our storage account. Go to your web site, select “Links”, and add a link to your Storage Account, which will set up the necessary firewall permissions.

Now you're ready to deploy your site and use Azure blob storage with an Azure Web Site.

Thursday 1 November 2012

Using Named Branches in Mercurial

Mercurial offers a variety of approaches to branching, including “named branches”, “bookmarks” (most similar to git), “anonymous branches” and using clones. For a good comparison of these techniques, I suggest reading Steve Losh’s Guide to Mercurial Branching, which explains it well, although it is a little out of date now.

In this article I will walk through the process of how you can use a named branch for maintenance releases, and the workflow for both contributors and for accepting contributions. I’ll explain at the end why I’ve decided that named branches are the best option for NAudio.

Step 1: Creating an initial repository

We’ll start with a fresh repository with just two commits

hg init
// make changes
hg commit -m "first version"
// make changes
hg commit -m "version 1.0 release"

Now we’ve released version 1, let’s start work on version 2. No branches have been made yet.

// more changes
hg commit -m "beginning work on v2"

Step 2: Creating a Maintenance Branch

Now we’ve had a bug report and need to fix version 1, without shipping any of our version 2 changes. We’ll create a branch by going back to the v1.0 commit (revision 1 in our repository) and using the branch command

// go back to v1 release
hg update 1

// create a named branch
hg branch "v1.0 maintenance"

// branch will be created when we commit to it
// fix the bug
hg commit -m "bugfix for v1.0"

Step 3: Switching between branches

To get back to working on our main branch (which is called the default branch in Mercurial), we simply use the update command:

// get back onto the v2 branch:
hg update default
// make changes
hg commit -m "more work on v2"

Step 4: Making forks

Imagine at this point, we have a couple of contributors, who want to fork our repository and make some changes.

Our first contributor makes a clone, and is contributing a v2 feature, so they can simply commit to the default branch:

hg clone public-repo-address my-feature-fork
hg update default // not actually needed
hg commit -m "contributing a new feature in v2"

Our second contributor is offering a bugfix, so they must remember to switch to the named maintenance branch (they can use hg branches to see what branch names are available):

hg clone public-repo-address my-bugfix-fork
hg update "v1.0 maintenance"
hg commit -m "contributing a bugfix for v1.0"

Their commit will be marked as being on the v1.0 maintenance branch (as named branches are stored in the commits, unlike git branches, which are simply pointers to commits).

Step 5: Accepting Contributions

If our contributors issued pull requests now, things would be nice and easy, but let’s imagine that more work has gone on in both branches in the meantime:

hg update default
hg commit -m "another change on v2 after the fork"

hg update "v1.0 maintenance"
hg commit -m "another v1.0 bugfix after the fork"

First, let’s pull in the new v2.0 feature (n.b. it is often a good idea to use a local integration clone, so that if you want to reject the contribution you can do so easily).

hg pull newfeaturefork
// need to be on the default branch to merge
hg update default
hg merge
// resolve any merge conflicts
hg commit -m "merged in the new feature"

Now we can do the same with the contribution on the maintenance branch (n.b. hg merge won’t do anything if you are still on the default branch, as it knows that the contribution is on a different branch):

hg pull bugfixfork
// get onto the branch we are merging into
hg update "v1.0 maintenance"
hg merge
hg commit -m "merged in a bugfix"

Step 6: Merging from maintenance branch into default

We have a few bugfixes now in our v1.0 branch, and we’d like to get them into v2.0 as well. How do we do that? Go to the default branch and ask to merge from the maintenance branch.

hg update default
hg merge "v1.0 maintenance"
// fix conflicts
hg commit -m "merged in v1.0 bugfixes"

And that is pretty much all you need to know to work with named branches. With your repository in this state you still have two branches (default and v1.0 maintenance) which you can continue work on separately. Here’s a screenshot of a repository which has had the steps above performed on it:

[Screenshot: repository history showing the default and v1.0 maintenance branches]

Why Named branches?

I actually think that branching with clones is easier for users to understand than named branches, but for NAudio it is not a viable option. This is because CodePlex allows only one repository per project. I’d either have to create a new project for each NAudio maintenance branch, or create a fork, but both options would cause confusion.

Anonymous branches are a bad idea because you open the door to merging the wrong things together, and it’s hard to keep track of which head relates to what. Their use is mainly limited to short-lived, experimental branches.

Bookmarks are appealing as they are closest to the git philosophy, but because Mercurial requires you to explicitly ask to push them, and there can be naming collisions, I think they are best used simply as short-lived markers for local development (I might do another blog post on the workflow for this).

So with NAudio I am thinking of creating a single maintenance branch for each major release (only created when it is needed). Most people who fork can just ignore the maintenance branch and work exclusively in the default branch (I can always use hg transplant to move a fix into a maintenance branch).