Tuesday, 23 November 2010

How to Register Two Interfaces as a Singleton in Unity

Every now and then I find myself wanting to configure a single implementor for multiple interfaces in Unity. Say I have the following interfaces and I only want one instance of Foo, yet I want to be able to request either IFoo or IFoo2 from the container:

public interface IFoo
{
    ...
}

public interface IFoo2 : IFoo
{
    ...
}

public class Foo : IFoo2
{
    ...
}

The way I previously went about this was:

var foo = container.Resolve<Foo>();
container.RegisterInstance<IFoo>(foo);
container.RegisterInstance<IFoo2>(foo);

This approach works, but it is less than ideal. First, it requires you to resolve something from the container, possibly before you have completely finished configuring it (e.g. the dependencies of Foo might not be fully set up yet). Second, the container will dispose foo twice.

However, thanks to an answer from Sven Künzler to a question on Stack Overflow, there is a much better way:

container.RegisterType<Foo>(new ContainerControlledLifetimeManager());
container.RegisterType<IFoo, Foo>();
container.RegisterType<IFoo2, Foo>();
Assert.AreSame(container.Resolve<IFoo>(), container.Resolve<IFoo2>());

You simply register the concrete type as a singleton, and then point as many interfaces at that type as you like.

Wednesday, 10 November 2010

How to Invoke a Command on the ViewModel by Pressing the Enter Key in a TextBox with Silverlight and MVVM

I recently attempted to upgrade a WPF application to compile for Silverlight as well. One of the many issues I ran into was that in the WPF version, pressing the Enter key while in a TextBox caused the form’s OK button to be clicked, by virtue of the fact that I could set the button’s IsDefault property. After porting to Silverlight, that no longer worked, since the IsDefault property is missing.

A web search revealed that someone had made a “behavior” that allows a specified button to be clicked when you press Enter within a TextBox (available for download here). However, it had one big problem: at the point the command fired in my ViewModel, the value bound to the TextBox contents had not been updated, since the TextBox had not lost focus.

My original WPF binding used an UpdateSourceTrigger to ensure that the ViewModel was always kept up to date with the contents of the TextBox:

<TextBox Text="{Binding Answer, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />

but in Silverlight, the PropertyChanged UpdateSourceTrigger is not available, so I was left with the following:

<TextBox Text="{Binding Answer, Mode=TwoWay}" />

I decided to make my own EnterKeyCommand attached property that lets you specify, for a TextBox, which command on the ViewModel should be run. Here’s the code:

public static class EnterKeyHelpers
{
    public static ICommand GetEnterKeyCommand(DependencyObject target)
    {
        return (ICommand)target.GetValue(EnterKeyCommandProperty);
    }

    public static void SetEnterKeyCommand(DependencyObject target, ICommand value)
    {
        target.SetValue(EnterKeyCommandProperty, value);
    }

    public static readonly DependencyProperty EnterKeyCommandProperty =
        DependencyProperty.RegisterAttached(
            "EnterKeyCommand",
            typeof(ICommand),
            typeof(EnterKeyHelpers),
            new PropertyMetadata(null, OnEnterKeyCommandChanged));

    static void OnEnterKeyCommandChanged(DependencyObject target, DependencyPropertyChangedEventArgs e)
    {
        ICommand command = (ICommand)e.NewValue;
        Control control = (Control)target;
        control.KeyDown += (s, args) =>
        {
            if (args.Key == Key.Enter)
            {
                // make sure the textbox binding updates its source first
                BindingExpression b = control.GetBindingExpression(TextBox.TextProperty);
                if (b != null)
                {
                    b.UpdateSource();
                }
                command.Execute(null);
            }
        };
    }
}

Most of it is pretty simple, and it allows an Enter key command to be specified for any control, not just TextBoxes. However, if the control is a TextBox, it calls UpdateSource on any Text binding first, to ensure your ViewModel operates on the latest data.

Here’s how you use it in XAML:

<TextBox 
    Text="{Binding Answer, Mode=TwoWay}" 
    my:EnterKeyHelpers.EnterKeyCommand="{Binding SubmitAnswerCommand}"/>

It also has the advantage of being considerably more succinct than the equivalent XAML for using the behavior I linked to earlier.

Monday, 8 November 2010

Merging MP3 Files with NAudio in C# and IronPython

If you would like to concatenate MP3 files using NAudio, it is quite simple to do. I recommend getting the very latest source code and building your own copy of NAudio, as this will work best with some of the changes that are in preparation for NAudio 1.4.

Here’s the C# code for a function that takes MP3 filenames, and writes a combined MP3 to the output stream:

public static void Combine(string[] inputFiles, Stream output)
{
    foreach (string file in inputFiles)
    {
        using (Mp3FileReader reader = new Mp3FileReader(file))
        {
            if ((output.Position == 0) && (reader.Id3v2Tag != null))
            {
                // copy the ID3v2 tag from the first file only
                output.Write(reader.Id3v2Tag.RawData, 0, reader.Id3v2Tag.RawData.Length);
            }
            Mp3Frame frame;
            while ((frame = reader.ReadNextFrame()) != null)
            {
                output.Write(frame.RawData, 0, frame.RawData.Length);
            }
        }
    }
}
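
For example, calling it to merge two files into one (the filenames are just for illustration):

using (var output = File.Create("combined.mp3"))
{
    Combine(new[] { "first.mp3", "second.mp3" }, output);
}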

And here’s an IronPython script (just put NAudio.dll in the same folder as the mp3merge.py script):

import clr
clr.AddReference('NAudio.dll')

import sys
from NAudio.Wave import Mp3FileReader
from System.IO import File

def GetAllFrames(reader):
    while True:
        frame = reader.ReadNextFrame()
        if frame:
            yield frame
        else:
            return

def Merge(files, outputStream):
    for file in files:
        with Mp3FileReader(file) as reader:
            if reader.XingHeader:
                print 'discarding a Xing header'
            if not outputStream.Position and reader.Id3v2Tag:
                outputStream.Write(reader.Id3v2Tag.RawData, 0, reader.Id3v2Tag.RawData.Length)                
            for frame in GetAllFrames(reader):
                outputStream.Write(frame.RawData, 0, frame.RawData.Length)
            
if __name__ == '__main__':
    if len(sys.argv) < 3:
        print "Usage: ipy mp3merge.py output.mp3 File1.mp3 File2.mp3"
    else:
        with File.OpenWrite(sys.argv[1]) as outStream:
            Merge(sys.argv[2:],outStream)

Notes:

I simply copy across the ID3v2 tag from the first MP3 file, if present; all other ID3v2 tags are discarded (as are ID3v1 tags). I also discard the Xing frame from VBR files. It could easily be re-included if desired, although its information will not necessarily be valid for the combined MP3 file. One final thing: I wouldn’t recommend merging MP3 files of different sample rates, or mixing mono with stereo, as it could cause problems for various players.

Sunday, 7 November 2010

State of MP3 Playback Support in NAudio

The MP3 playback support in NAudio was always rather experimental. The ACM conversion code I had written assumed CBR (constant bit rate) and constant block sizes. With MP3s this is not always the case, since there are VBR files with variable block sizes, and even in CBR MP3 files, padding means you can get frames of different sizes. Despite these issues I did manage to get MP3 playback more or less working, which was cool, but not 100% reliable. People ran into problems like the occasional error while repositioning a stream, or the more irritating fact that Mp3FileReader was not good at calculating the duration of a file.

In this post I will go over some of the key challenges to getting good MP3 playback support, with details of some recent changes I have checked in, along with some ideas for the future.

1. Correctly parsing MP3 frame headers

To work effectively with MP3 files you need to be able to parse frame headers correctly and determine their exact size. If this cannot be done, we have no choice but to pass blocks of the MP3 file directly to the decoder without knowing whether we are giving it whole frames or not.

Finding out how to properly parse MP3 frame headers was a much harder challenge than it seemed. Googling for info on MP3 frames reveals some articles that look authoritative but whose algorithms simply failed to parse everything I threw at them correctly: mono or low sample rate MP3s got their frame sizes calculated wrong. Eventually, however, I found this article, whose source code contained the final missing piece of information that allowed me to make the parsing reliable.
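
To illustrate the kind of calculation involved, here is a minimal Layer III frame length sketch (illustrative only, not NAudio’s actual code; bitRate is in bits per second):

static int GetFrameLengthBytes(int mpegVersion, int bitRate, int sampleRate, bool padded)
{
    // MPEG 2.0 and 2.5 Layer III frames hold 576 samples rather than 1152;
    // missing this is exactly what breaks frame sizes for low sample rate files
    int samplesPerFrame = (mpegVersion == 1) ? 1152 : 576;
    int frameLength = ((samplesPerFrame / 8) * bitRate) / sampleRate;
    if (padded) frameLength += 1; // padding adds one slot (one byte for Layer III)
    return frameLength;
}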

2. A single frame decoder

Once we can calculate frame sizes reliably, we are able to decode frames one by one. The WaveFormatConversionStream assumes throughout that it is working with CBR, so I have removed it from the equation. Now a simple MP3 frame decoder class (final name and interface to be decided) is used to decode MP3 frames one at a time using ACM. Alternative frame decoders could easily be plugged in if required in the future (e.g. using DMO or NLayer).

The really big change is that this means the Mp3FileReader no longer returns MP3 data from its Read method, but emits PCM. This makes life so much easier downstream and simplifies the playback graph considerably (no more BlockAlignReductionStream). I’ve made the ReadFrame method public, so if you have a pressing need to get the compressed data out instead, there is nothing stopping you.

3. Accurate length reporting

Accurate length reporting was never possible before, since it relied on an estimate of the bitrate. But now that we can parse MP3 frames, we know exactly how many samples each frame will decompress into, and the TotalTime property (and CurrentTime property) of Mp3FileReader should be entirely accurate. N.B. I think it may be possible that the first frame in a VBR MP3 file decompresses to zero samples (although I already exclude the Xing frame, so maybe there is another similar metadata-type frame), so we might actually very slightly over-report the length – I’ll need to look into that.
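
To illustrate, once every frame header has been parsed, the duration is just the total sample count divided by the sample rate, with no bitrate estimate required. A sketch, using a hypothetical parsed-frame type rather than NAudio’s actual classes:

class ParsedFrame { public int SampleCount; }

static TimeSpan GetDuration(List<ParsedFrame> frames, int sampleRate)
{
    long totalSamples = 0;
    foreach (var frame in frames) totalSamples += frame.SampleCount;
    return TimeSpan.FromSeconds((double)totalSamples / sampleRate);
}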

4. Repositioning to frame granularity

When you reposition with the Mp3FileReader.Position property, there is of course every possibility that you will ask for a position midway through a frame. We now automatically move you to the start of the frame that contains the position you asked for.
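
A sketch of the idea (names are illustrative, not NAudio’s actual implementation): scan the table of contents for the frame whose byte range contains the requested position, and snap to that frame’s start.

class TocEntry
{
    public long FilePosition; // byte offset of the frame within the file
    public int FrameLength;   // compressed size of the frame in bytes
}

static long SnapToFrameStart(List<TocEntry> toc, long requestedPosition)
{
    foreach (var entry in toc)
    {
        if (requestedPosition < entry.FilePosition + entry.FrameLength)
            return entry.FilePosition; // start of the containing frame
    }
    return toc[toc.Count - 1].FilePosition; // past the end: snap to the last frame
}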

An earlier NAudio contributor had done some cool stuff with a binary search to speed this up. I needed to drop this temporarily as I was making changes to the table of contents generation, but there is no reason why it could not be reinstated now to speed things up further (although repositioning performance doesn’t seem to be a major issue in the tests I have done on one-hour-long MP3s).

5. Repositioning with sample granularity

Obviously, it would be even nicer to support MP3 repositioning with sample granularity. This would involve decompressing a frame during the seek process, so that when the next Read occurs we can read from part-way through that frame. The framework to do that is already in place (we keep track of “leftovers”), so this could be a feature I add in the not too distant future.

6. Forward only

One thing I haven’t got round to doing yet is making it possible to use Mp3FileReader from an input stream that doesn’t support repositioning. Obviously, it would not be able to work out its Length, and there would be no real need for a TOC. Proper support for forward-only streams would be useful to people wanting to do network streamed playback, which seems to be one of the most common queries I get.

7. Changes of Sample Rate and Number of Channels

It is theoretically possible within an MP3 file for the sample rate and number of channels to change from frame to frame. However, I have no immediate plans to support this, since I’ve never seen an MP3 file that does it. The only scenario in which I could imagine this happening would be if someone attempted to concatenate two MP3 files by simply copying frames from one into the other.

8. ID3 Tag Support

I have no plans to introduce ID3 tag reading or writing, since there are other open source libraries out there that do this perfectly well.

Give me feedback

Please grab the latest NAudio code from Codeplex and let me know how you get on with it. As always, the best way to give it a run-through is to use the NAudioDemo app that is included in the solution. Load it up, select WAV playback and try it with whatever MP3s you have lying around on your hard disk. It would be great to have robust MP3 playback as a headline feature for NAudio 1.4.

Tuesday, 2 November 2010

Why you should use DVCS for Personal Projects

For a long time it bugged me that there wasn’t an easy way for me to use version control for my personal projects. I have over 100 small applications sitting around on my computer. Most of them are just test apps that will probably never be visited again. But others are more useful utilities, or perhaps future open source projects currently in incubation. Some of them are work related, others purely for personal enjoyment or learning. Often I found myself wishing that I could have the benefits of source control, so I could back out of changes that broke something.

Pre Distributed Version Control Systems

In the days before DVCS (or, to be more accurate, before I knew about DVCS), I tried all of the following approaches at different times:

Store projects on my company’s VCS. This usually involved asking permission to have some space on SourceSafe or TFS for my personal projects. Whilst this has the benefit that I can share my work with others easily (and it is guaranteed to be backed up), there is the hassle of getting it set up in the first place, plus the fact that some of these projects are very short-lived, while others are “skunkworks” ideas which you don’t want to publicise until they are ready.

Run a private VCS server on my dev machine. It is possible to install Subversion, SourceSafe, TFS etc. servers on your local machine and use them for version control. However, as well as using up valuable resources on your personal machine, it has a very poor migration story: if you rebuild your PC, or need to quickly copy code onto a USB stick and work on it from home, this option ends up being more hassle than it is worth.

Subversion file-based repository. A few years ago I discovered that you could point Subversion at a file:// repository path. I thought this would be the answer to all my issues. The reality was that it was quite fragile, especially since I was storing code on USB sticks, where the drive letter might change. I ended up corrupting my repositories so regularly that I gave up on this option pretty quickly.

Make it open source. When CodePlex showed up, I immediately moved several of my projects there. This meant I had access to a free central repository, enabling me to work from different computers if I needed to. The downside is that most of my projects weren’t appropriate to turn into open source applications.

Don’t bother with version control and make backup zip files. This ended up being my most common approach. Every now and then I would back up to a zip file. Of course, those backups were few and far between and didn’t even exist for most projects, and I almost never had one available on those few occasions when I genuinely needed it.

Advantages of DVCS

But all that changed when I decided to find out what all the fuss was about with Distributed Version Control Systems. Whilst the idea seemed a little crazy to me at first (everyone gets a copy of the entire repository?), the obvious advantages for personal projects won me over pretty quickly. I decided to try out Mercurial, since it seemed to have slightly better support on Windows, although I’m sure Git is just as good. Here are some of the top advantages to using it on your personal projects:

No Server Required. This is a huge benefit. I don’t need a central server on the internet, or on my company network. If I want to move to another PC, I can just copy the code folder over (or sync folders using something like Dropbox) and it just works.

Version Control Everything. It’s now a no-brainer to put a new test project under version control. It takes only a few seconds to do. If for some reason you decide you don’t want version control anymore, just delete the .hg folder and it’s gone.

Migrate to a Central Repository Later. When I added NAudio to CodePlex, it had already been in development for several years, yet I had no checkin history up to that point, just a bunch of backup zip files. With Mercurial, you can move to using a centralised repository at any point in the future (whether public or private) and all your checkin history comes along for the ride.

Unconstrained branching strategy. Admittedly, for small personal projects, branching is not often that important. But it can come in handy when you are halfway through implementing one feature and then want to work on a different task. Without version control, you have to decide whether to bin the half-finished changes, or to copy them somewhere else and manually merge them back in later. With Mercurial, it becomes trivial to create as many branches as you need, and you can merge directly between any two branches, irrespective of how many intermediate branches were created between them.

Merging divergent copies. Sometimes I have a copy of a personal project at home and another at work, and have no idea which one is the latest and greatest. Or maybe after backing a project up to a few places, I have inadvertently made changes to two separate copies. One membership application I wrote for a youth group eight years ago turned out to still be in use, and they recently asked me for new features; I had to work out which of several copies was the one I should be using. With Mercurial, it is trivial to ensure that one copy is not missing any changes from another.

Little and often checkins. One really nice feature for my open source projects is that I can check in small changes without needing to push them to the central server immediately. This means I can check in little and often, and only push to the server once I have tested my feature and made sure it is robust.

What was I doing? Another advantage of using a DVCS with my personal projects is that if I come back to one after a couple of years, I can quickly examine the log, looking at diffs to see what I was up to last time I worked on it. This can be handy if I left some half-finished features in progress that I actually want to discard, resuming from an earlier point.

Trivial rollback. One of the things that scared me about DVCS was the idea that once I had checked in a file, it lives on in the repository forever: accidentally check in several megabytes of compiled binaries and you have an unnecessarily bloated repository. But in reality that issue only exists once you have pushed to a central repository (and even then there are usually ways of working around it). If you haven’t, you can just clone to the prior revision and your mistake is gone. I’ll perhaps do a later post with my thoughts on DVCS in the enterprise, where matters like this are quite important.

If you take anything away from this post, let it be this: learn a DVCS and use it wherever you can. You will be glad you did.

Friday, 29 October 2010

MVVM – Is it Worth the Pain?

After finding it very easy to get MVVM working in WPF with IronPython, I thought it would be trivial to achieve the same thing in Silverlight. Unfortunately, after porting a simple game to Silverlight, my bindings didn’t work at all. The problem may have something to do with the issues described in this blog post, but since I needed to demo my code at a local user group event, I abandoned MVVM and went back to talking directly to the controls from within my ViewModel, which worked perfectly and simplified my code considerably. It made me wonder why I was using MVVM in the first place.

The Pain of MVVM

I jumped on the bandwagon fairly early and have been writing the majority of my WPF and Silverlight code using MVVM. But to be honest, it has not been plain sailing. There is a real elegance to it that I like, and I love the testability, but there have been plenty of pain points.

For example, triggering animations and discovering when they have finished from the ViewModel has been a nightmare. I’ve tried various solutions, finding one that works with Silverlight and one with WPF but none that work with both, and have managed to crash Visual Studio dozens of times in the process.

Even simple things like setting the focus to the appropriate control require a rather convoluted system of “behaviours” to achieve something that would be a single line of code with access to the controls themselves. And every time I create a behaviour, I seem to spend ages debugging my dependency properties, trying to work out why they aren’t doing what I expected.

The Purpose of MVVM

So if MVVM is so painful for non-trivial applications, why use it at all? What’s the point of MVVM? I think there are three answers.

1. Testability

The first driver for using MVVM is testability. If we put our logic in the C# “code-behind” of a XAML class, then running unit tests on that logic becomes very difficult. But with IronPython, that consideration is irrelevant. Since it is a dynamic language, and we are not using “code-behind” in any case, my non-MVVM version of my ViewModel is just as testable as the one that used data binding exclusively.

2. Data Binding

The second motivation for using MVVM is that the pattern encourages the use of data binding, and simplifies it considerably by removing the need for data converters or complicated binding expressions. But why is data-binding to a TextBox better than just getting or setting its Text property?

One of the key benefits of data binding comes when we want to bind the same thing to more than one control. A Command object is a good example: the same Command could be bound to both a drop-down menu item and a toolbar button, so changing whether the command can execute enables and disables both views. But most of the time, binding is a one-to-one relationship.
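
For example (SaveCommand here is a hypothetical ViewModel property), the same command bound in two places at once:

<MenuItem Header="_Save" Command="{Binding SaveCommand}" />
<Button Content="Save" Command="{Binding SaveCommand}" />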

Data binding also saves a bunch of code in CRUD type applications where you are binding directly to properties on an existing model type, but I don’t tend to write applications like that anyway. I think you can use it to make magical things happen with input validation too, but again, it’s not something I’ve ever needed.

3. Designability

The third driver behind MVVM is the goal of designer/developer separation. The idea is that a designer can create a XAML file and bind it to test data, and then the developer can come along and create the real DataContext at a later date. This certainly sounds impressive, but I can’t help thinking that the designer will run into all the problems mentioned above if he actually wants to give his XAML GUI a real workout. And the truth is that I’ve never worked with a designer writing XAML and will not be doing so in the foreseeable future; the graphic designers we use send us bitmaps and Flash mockups.

A Hybrid Approach

I am now wondering whether I should think of my ViewModel more as a “Presenter”. It could still be the DataContext of the object defined in XAML, enabling data binding where required, but it would also have direct access to any properties on that XAML object. For testability in statically typed languages, you could define an interface with methods like StartAnimation or SetFocusToTextBox – a little cumbersome perhaps, but probably no more so than the hoops you have to jump through to make these things happen from your ViewModel using nothing but data binding.
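
As a sketch of what that hybrid might look like in C# (all names here are purely illustrative):

public interface IGameView
{
    void StartAnimation();
    void SetFocusToTextBox();
}

public class GamePresenter
{
    private readonly IGameView view;

    public GamePresenter(IGameView view)
    {
        this.view = view;
    }

    public void OnAnswerSubmitted()
    {
        // simple properties can still use data binding; view-specific
        // actions go through the narrow interface, which a unit test
        // can replace with a fake
        view.StartAnimation();
    }
}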

Anyway, it looks like I’m not alone in questioning MVVM. Here’s a good post from K. Scott Allen. What do you think? Does MVVM fail a cost-benefit analysis for the types of application you work on?

Thursday, 28 October 2010

Creating Silverlight Apps with IronPython

Using IronPython to create Silverlight applications is a little different from using C#. With C# you build a .xap file, which contains all the compiled code for your application. With IronPython, however, the recommended way is not to create a .xap file at all: instead, you simply write Python script in your HTML page, or in .py files hosted on your web server.

Getting Started

The easiest way to get started is by using the latest IronPython (currently 2.7 beta 1), which comes with Visual Studio 2010 integration. This enables you to simply create a new “IronPython Silverlight Web Page” project. However, don’t worry if you don’t have VS2010 or don’t want to use it – the process is almost the same without it.

When you create your new Silverlight project, it creates two files for you. The first is a simple HTML page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <script type="text/javascript">
        window.DLR = { settings: { console: true } }
    </script>
    <script src="http://gestalt.ironpython.net/dlr-latest.js" type="text/javascript"></script>
    <title>SilverlightPage1</title>
</head>
<body>
    <input id="Button1" type="button" value="Say, Hello!" />
    <script type="text/python" src="SilverlightPage1.py"></script>
</body>
</html>

Including the DLR

There are a few interesting points of note in the <head> section of this page. First is the dlr-latest.js script file, which is all that is needed to enable Python (and XAML) scripting directly in HTML. The settings: { console: true } section enables a very cool debugging feature, whereby you get an IronPython interactive interpreter that you can pop up from the bottom of your web page:

IronPython Interactive Interpreter

Running a .py Script

Then we have the Python script itself. The project template includes a Python file containing some basic code that subscribes to the button’s onclick event and shows an alert.

def SayHello(s,e):
    window.Alert("Hello, World!")
document.Button1.events.onclick += SayHello

Embedding Python in the HTML

There is in fact no need for our Python script to be in a .py file if we don’t want it to be. We can put Python code directly in the HTML inside a <script> block:

<body>
    <input id="Button1" type="button" value="Say, Hello!" />
    <script type="text/python">
    def SayHello(s,e):
        window.Alert("Hello from HTML")
 
    document.Button1.events.onclick += SayHello
    </script>
</body>

Running the Page

When you run the page from Visual Studio, it starts the “Chiron Development Server” to host your page. You don’t need this to develop with IronPython, but one thing I discovered is that you do need the MIME type to be set up correctly for .py files. For example, if you are using WebMatrix, you need to put a web.config file in your application root folder with the following content:

<configuration>
    <system.webServer>
        <staticContent>
            <mimeMap fileExtension=".py" mimeType="text/python" />
        </staticContent>
    </system.webServer>
</configuration>

Sadly, I couldn’t work out a way of getting Visual Studio to break on specific lines of Python script, despite trying to attach to both Chiron and my web browser. I guess it may be possible to step through the script with a browser debugger, but I haven’t tried that yet.

Using XAML

Obviously XAML is very important for Silverlight applications, and it is very easy to use with the DLR. We can start by adding a simple XAML file to our project. Annoyingly, the Visual Studio Add Item dialog filters out all the templates except the IronPython ones; hopefully that will be fixed in a future release.

We’ll create a MainForm.xaml file with the following basic content:

<UserControl 
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <StackPanel Background="BurlyWood">
        <TextBlock Margin="5" Text="Enter your name:"/>
        <TextBox Margin="5" x:Name="textBoxName"/>
        <Button Margin="5" x:Name="buttonOK" Content="OK" />
    </StackPanel>
</UserControl>

Now in our HTML page, all that is needed is to specify the XAML and the size of the Silverlight object that will host it:

<script id="MainForm" type="application/xml+xaml" width="200" height="100" src="MainForm.xaml"></script>

As before, there is no need for the XAML to be in a separate file. We can put it directly within the script block if needed. Here’s our app running:

IronPython Silverlight App

Now suppose we want to write some Python code to run when the OK button is clicked, instead of the “Say, Hello!” button. We can use Python script in the way already shown, but we need to mark the <script> tag with class="MainForm" to ensure that it runs against the correct Silverlight object – the one containing our MainForm UserControl. This is because the DLR supports multiple Silverlight controls on your page at a time, and by default it will create a new one for each XAML script. The xaml variable is pre-loaded with the root element of the loaded XAML – in our case, a UserControl.

<script class="MainForm" type="text/python">
def SayHello(s,e):
    window.Alert("Hello " + xaml.textBoxName.Text)

xaml.buttonOK.Click += SayHello
</script>

Now when we run this, it works (sort of). In Firefox, the alert appears, but you have to wait several minutes before it lets you click on it – I have no idea why. IE8 works OK so long as you are running against the latest Gestalt (otherwise xaml is None, so the subscription to the button click event fails).

Going Further

Obviously this only covers the absolute basics. Visit http://www.ironpython.net/ironpython/browser/ for more documentation and instructions. I may blog a bit more on this at a later date.

Wednesday, 27 October 2010

IronPython 2.7 Visual Studio Integration

I’ve been trying out IronPython 2.7 beta 1, which includes Visual Studio 2010 integration, and I’ve been very impressed. Here’s a brief rundown of the features (at least the ones I’ve found).

Project Templates

Four new project templates are included: a Console Application, a WinForms application, a WPF application and a Silverlight web page. Each template has just enough code to get you started.

Project Templates

The Code Editor

As you might expect there is syntax highlighting for Python files:

Syntax Highlighting

Another nice touch is that pressing backspace with the cursor at the start of a line of code moves it back one level of indentation. What I wasn’t expecting was IntelliSense, but it is available in some places (though not everywhere, since the dynamic nature of Python means you can’t guarantee that a variable will keep the type it was initially assigned).

Intellisense

Also welcome are the red squiggly lines indicating syntax errors – very useful if, like me, you keep forgetting the colon at the end of if statements:

Syntax Errors

Also the Navigation Bar is populated with the classes and methods in your source file:

Navigation Bar

Find All References and Go To Definition both work too, and the Object Browser includes classes and methods from your IronPython source files.

Solutions

When you create an IronPython project, you get a regular .sln file, but your project is a .pyproj file (which, like .csproj, is an MSBuild project file under the hood). It doesn’t have too many configuration options, but it does allow you to add search paths for external Python scripts or .NET libraries. You can also specify which file is your startup file.

Startup File

The Search Path can also be configured in Solution Explorer:

Search Path

The Debugger

One of the most compelling reasons to use Visual Studio for your IronPython apps is its support for the debugger. You can set breakpoints and step through your code:

Breakpoints

Sadly, tooltips showing the values of variables don’t seem to be implemented; the only tooltip I could get to appear was by hovering over the opening quote of a string literal. You also can’t get at variable contents using the Immediate or Watch windows, nor can you use Python variables or expressions in breakpoint conditions.

Watch window

However, the Locals window comes to the rescue and can be used to examine the contents of variables while you are stopped at a breakpoint.

Locals Window

The IronPython Interactive Window

Another great feature is the IronPython Interactive window. This is the same familiar interactive IronPython interpreter you get when you run ipy.exe, except that it has a few extra tricks up its sleeve. It supports both syntax highlighting and IntelliSense, which works even better here, since it can look at live objects rather than having to analyse your source code.

IronPython Interactive Window

There is an option on the debug menu to execute your application in IronPython Interactive:

Execute Project in IronPython Interactive

This brings up the IronPython Interactive window and attempts to run your project in it. You can even change scope and investigate the values of variables in the different modules of your application. Here it is running the simple test app I showed earlier:

IronPython Interactive

There are still a couple of issues with using this for debugging. First, it doesn’t apply the Search Path to the interactive session, so import statements fail. Second, running in this mode doesn’t seem to invoke the debugger, so you won’t stop at breakpoints. Hopefully these issues will be addressed in a future update of the tools.

Another nice feature is the ability to select IronPython code in the editor, right-click it and choose Send to Interactive. You need to be on a blank line in the interactive console, though, or it can get a bit confused. But this is a great way to get longer functions into the interactive window.

Send to Interactive

Conclusion

The Visual Studio support for IronPython is very impressive. It needs a few enhancements, particularly in the area of debugging, but there are already plenty of advantages to using Visual Studio for IronPython development over simply working directly with files.

Since this post is fairly long, I’ll save comments on Silverlight with IronPython for another time.

Tuesday, 26 October 2010

Change to Solution Folder in Package Manager Console

When you install NuPack (which seems likely to be renamed NuGet in the near future), you get a new dockable Visual Studio 2010 window called the Package Manager Console, which allows you to run NuPack commands right from within VS2010.

nuget-console

The great thing about this window is that it can be used for more than just NuPack commands. It is a fully working PowerShell window, and all the commands on your path are available too. For example, I use Mercurial on a number of my applications, so I can issue commands such as hg add or hg commit directly within the console window.

There was just one slight snag: the current working directory of the Package Manager Console seems to default to your user profile folder:

PM> pwd
Path
----
C:\Users\Mark

So, despite knowing virtually nothing about PowerShell, I set about working out how I could navigate to the solution folder automatically. The first thing I discovered was that you can query PowerShell for all the available variables using the Get-Variable command, which shows the names and current values of all variables.

Sadly, none of them seemed to contain the path of the loaded solution, but asking a question on StackOverflow pointed me in the right direction. There is a variable called $dte, which is a COM object allowing automation of the VS development environment. We can ask it for the path of the loaded solution:

PM> $dte.Solution.FileName
C:\Users\Mark\Code\TestApp\TestApp.sln

We now need to find a way to strip off the filename to get the folder. You can list all available PowerShell commands by typing Get-Command. Eventually, after some searching on Google, I found a way to strip the filename off this path:

PM> Split-Path -parent $dte.Solution.FileName
C:\Users\Mark\Code\TestApp

To change to this folder, you pipe this string into the cd command as follows:

PM> Split-Path -parent $dte.Solution.FileName | cd
PM> pwd

Path
----
C:\Users\Mark\Code\TestApp

Mission almost accomplished, but that is a rather cumbersome command to remember. I wanted a PowerShell command that would be immediately available to me, which requires editing my PowerShell user profile. The path to the profile is found in the $profile variable:

PM> $profile
C:\Users\Mark\Documents\WindowsPowerShell\NuPack_profile.ps1

This file didn’t actually exist (nor did the WindowsPowerShell folder), so I had to create a blank one. Then I added my snippet of script into a function:

function solutionFolder
{
    Split-Path -parent $dte.Solution.FileName | cd
}

Having done that, you need to reload Visual Studio; then, after loading a solution, you can type solutionFolder to navigate to the solution folder:

PM> solutionFolder
PM> pwd

Path
----
C:\Users\Mark\Code\TestApp

And that's it. Now I can run my Mercurial commands from within the Package Manager Console:

PM> hg status
M UnitTests\UnitTests.csproj
? UnitTests\Thread.cs

Saturday, 9 October 2010

WPF Collision Detection with Canvas and ScaleTransform

As part of my continuing exercise in learning IronPython, I ported an old Windows Forms game I had up on CodePlex to WPF. The game is called Asterisk, and it was a favourite of mine when I was about eight years old, on the BBC Micro. The objective is simply to avoid the stars and get through the gap on the right-hand side. The only control is holding down a key to make the line go up instead of down.

Here’s what the current WPF version looks like:

WPF Asterisk

One of the challenges was implementing collision detection: I needed to check whether the current point intersected with a star. I initially hoped I could use Andy Beaulieu’s Silverlight collision detection code, but sadly that uses VisualTreeHelper.FindElementsInHostCoordinates, which isn’t available in WPF.

However, a solution was readily at hand in VisualTreeHelper.HitTest. I rather stupidly failed to notice that there was a much simpler overload than the one in the first search result on MSDN, so my initial Python code was over-complicated:

def CheckCollisionPoint(point, control):
    hits = []
    def callbackFunc(hit):
        hits.append(hit.VisualHit)
        return HitTestResultBehavior.Stop
    callback = HitTestResultCallback(callbackFunc)
    VisualTreeHelper.HitTest(control, None, callback,
        PointHitTestParameters(point))    
    return len(hits) > 0

Using a more appropriate HitTest overload simplifies things greatly:

def CheckCollisionPoint(point, control):
    hit = VisualTreeHelper.HitTest(control, point)
    return hit != None

However, there were still two difficulties. The first was that the point was in coordinates relative to the canvas, whilst the control was a star object placed onto that canvas. This was simple enough to fix: I just needed to offset the coordinates by the Canvas.Top and Canvas.Left attached properties of the star.

However, problems came when I attempted to change the size of the stars with a scale transform:

<Path xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  Stroke="White" 
  StrokeThickness="2" 
  StrokeStartLineCap="Round" 
  StrokeEndLineCap="Round" 
  StrokeLineJoin="Round" 
  Data="M 0,0 l 5,0 l 2.5,-5 l 2.5,5 l 5,0 l -3.5,5 l 1,5 l -5,-2.5 l -5,2.5 l 1,-5 Z">
  <Path.RenderTransform>
    <ScaleTransform ScaleX="0.8" ScaleY="0.8" />
  </Path.RenderTransform>
</Path>

This caused collision detection to break, because HitTest does not take the RenderTransform into account. I initially fixed this by dividing the point coordinates by the scale factor from the XAML. However, I then came across this blog post, which demonstrates a better approach that will cope with any kind of transform, including rotations: you can get an inverse transform from the render transform and apply it to your point. So the final version of my WPF hit-test function in IronPython is as follows:

def CheckCollisionPoint(point, control):
    transformPoint = control.RenderTransform.Inverse.Transform(point)
    hit = VisualTreeHelper.HitTest(control, transformPoint)
    return hit != None

If you have IronPython installed, download the code and try it yourself.

Wednesday, 6 October 2010

WPF and MVVM in IronPython

I’ve been getting to grips with IronPython recently, and wanted to see how easy it would be to use the MVVM pattern. What we need is a basic library of MVVM helper functions. First is a class to load an object from a XAML file.

import clr
clr.AddReference("PresentationFramework")
clr.AddReference("PresentationCore")

from System.IO import File
from System.Windows.Markup import XamlReader

class XamlLoader(object):
    def __init__(self, xamlPath):
        stream = File.OpenRead(xamlPath)
        self.Root = XamlReader.Load(stream)
        
    def __getattr__(self, item):
        """Maps values to attributes.
        Only called if there *isn't* an attribute with this name
        """
        return self.Root.FindName(item)

In addition to loading the XAML, I’ve added a helper (the __getattr__ method) to make it easy to access any named items within your XAML file, just in case the MVVM approach proves problematic and you decide to work directly with the controls.

Next we need a base class for our ViewModels to inherit from, which implements INotifyPropertyChanged. I thought it might be tricky to inherit from .NET interfaces that contain events, but it turns out to be remarkably simple: we just implement add_PropertyChanged and remove_PropertyChanged, and then we can raise notifications whenever we want.

from System.ComponentModel import INotifyPropertyChanged
from System.ComponentModel import PropertyChangedEventArgs

class ViewModelBase(INotifyPropertyChanged):
    def __init__(self):
        self.propertyChangedHandlers = []

    def RaisePropertyChanged(self, propertyName):
        args = PropertyChangedEventArgs(propertyName)
        for handler in self.propertyChangedHandlers:
            handler(self, args)
            
    def add_PropertyChanged(self, handler):
        self.propertyChangedHandlers.append(handler)
        
    def remove_PropertyChanged(self, handler):
        self.propertyChangedHandlers.remove(handler)

The next thing we need is a way of creating command objects. I created a very basic class that inherits from ICommand and allows us to run our own function when Execute is called. Obviously it could easily be enhanced to properly support CanExecuteChanged and command parameters.

from System.Windows.Input import ICommand

class Command(ICommand):
    def __init__(self, execute):
        self.execute = execute
    
    def Execute(self, parameter):
        self.execute()
        
    def add_CanExecuteChanged(self, handler):
        pass
    
    def remove_CanExecuteChanged(self, handler):
        pass

    def CanExecute(self, parameter):
        return True

And now we are ready to create our application. Here’s some basic XAML. I’ve only named the grid to demonstrate accessing members directly; it is obviously not good MVVM practice.

<Window
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="IronPython MVVM Demo"
    Width="450"
    SizeToContent="Height">
    <Grid Margin="15" x:Name="grid1">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="*" />
        </Grid.ColumnDefinitions>
        <Label Grid.Row="0" Grid.Column="0" FontSize="24" Content="First Name:" />
        <Label Grid.Row="0" Grid.Column="1" FontSize="24" Content="{Binding FirstName}" />
        <Label Grid.Row="1" Grid.Column="0" FontSize="24" Content="Surname:" />
        <Label Grid.Row="1" Grid.Column="1" FontSize="24" Content="{Binding Surname}" />
        <Button Grid.Row="2" FontSize="24" Content="Change" Command="{Binding ChangeCommand}" />
    </Grid>
</Window>

Now we can make our ViewModel. It will have FirstName and Surname attributes as well as an instance of our Command object:

class ViewModel(ViewModelBase):
    def __init__(self):
        ViewModelBase.__init__(self)
        self.FirstName = "Joe"
        self.Surname = "Smith"
        self.ChangeCommand = Command(self.change)
    
    def change(self):
        self.FirstName = "Dave"
        self.Surname = "Brown"
        self.RaisePropertyChanged("FirstName")
        self.RaisePropertyChanged("Surname")

And finally we are ready to create our application: simply load the XAML with the XamlLoader and set the DataContext. I also demonstrate setting the background colour here, to show how easy it is to access named elements in the XAML:

from System.Windows import Application
from System.Windows.Media import Brushes

xaml = XamlLoader('WpfMvvmDemo.xaml')
xaml.Root.DataContext = ViewModel()
xaml.grid1.Background = Brushes.DarkSalmon
app = Application()
app.Run(xaml.Root)

Now we run it and see:

mvvm-python-1

And click the button to see:

mvvm-python-2

And that’s all there is to it. It may even be simpler than doing MVVM from C#.

Monday, 4 October 2010

Clipboard Access in IronPython

I’m slowly getting into the swing of using IronPython to write small scripts to help me do things quicker. I’ve been updating some of the old posts on this blog to use SyntaxHighlighter, which is a much cleaner solution than the old mechanism. However, it did require a little bit of text manipulation of each code sample before updating it on the blog. The script below simply reads the code snippet out of the clipboard, performs a few basic transforms, and then copies the transformed code back to the clipboard.

import io
import clipboard
code = clipboard.getText()
brush = 'csharp' if not code[0] == '<' else 'xml'
substitutions = [('&','&amp;'),('<','&lt;'),('>','&gt;')]
for a,b in substitutions:
    code = code.replace(a,b)
print code
code = '<pre class="brush: %s">%s</pre>' % (brush, code)
clipboard.setText(code)

The clipboard module is my own very small wrapper, which makes use of the Clipboard object from System.Windows.Forms:

import clr
clr.AddReference('System.Windows.Forms')
from System.Windows.Forms import Clipboard

def setText(text):
    Clipboard.SetText(text)
    
def getText():
    return Clipboard.GetText()

Friday, 1 October 2010

Installing Windows Virtual PC on Windows 7 Home Premium

Microsoft have replaced Virtual PC 2007 with “Windows Virtual PC”, but theoretically it is only supported on Windows 7 Professional and above. However, if you head over to the Windows Virtual PC website and say that you have Windows 7 Professional, it enables the downloads of Windows XP Mode and Windows Virtual PC. You only need to download the Virtual PC part, which arrives as the rather cryptically named Windows6.1-KB958559-x86.msu. Simply double-click it to install.

Once installed, you may, like me, run into the issue that hardware-assisted virtualization is not enabled in your BIOS. I have a Dell laptop, and it was a matter of hitting F12 on boot-up and searching around for the option in the BIOS settings. More info on enabling HAV.

The final step was to load up one of the old Virtual PC 2007 .vmc files I had lying around. This went smoothly, although my virtualized XP did attempt, and fail, to install new device drivers when it booted. Windows Virtual PC adds a Virtual Machines folder under your user account, from which you can set up a new virtual machine if you require one. I haven’t done this yet, but I might give it a go later and attempt to get Ubuntu running (something which was notoriously difficult under VPC 2007, so hopefully the process is a bit smoother now).

Anyway, it’s nice that Microsoft have made this tool available for free, as it is very useful for software testing, and even better that it’s usable on Windows 7 Home Premium, without having to upgrade to Professional.

Thursday, 30 September 2010

Asserting Function Calls in Python

One of the nicest features of Python is “duck typing”, which means you don’t need to create interfaces to allow you to swap out implementations. Instead you simply create a different object that has the functions you need.

One really powerful use of this is in unit testing, allowing you to create lightweight replacements for dependencies without the need for a powerful mocking framework. Having said that, sometimes you need to be able to do things like check that a function was called on an existing object. I asked about this on StackOverflow and got a variety of different approaches to the problem.

Thanks to another feature of Python, sometimes called “monkey patching”, you can take any object and replace an existing function with your own. This is obviously very powerful (and potentially dangerous), but it opens up all sorts of possibilities.

Here’s an example of monkey patching that replaces the existing implementation of MyFunc with a stub which simply counts how many times it was called. (A Python lambda can’t contain an assignment statement, so we record each call by appending to a list rather than incrementing a counter.)

def testMyFunc():
    obj = MyObject()
    calls = []
    obj.MyFunc = lambda: calls.append(1)
    # DoSomething should call MyFunc
    DoSomething(obj)
    assert len(calls) == 1

To take this one step further, we might wish to still call through to the original implementation of MyFunc. We can do this by creating a helper class:

class MethodCallLogger(object):
    def __init__(self, meth):
        self.meth = meth
        self.CallCount = 0

    def __call__(self, *args):
        self.meth(*args)
        self.CallCount += 1

This class will call through to the original function, as well as counting how many times it was called. The __call__ method is a way of allowing a class instance to be called as though it were a function. The *args syntax simply lets us support functions with multiple parameters; the arguments could also be saved into a list and made available to the unit test if necessary. Here’s our first example again, using the MethodCallLogger class:

def testMyFunc():
    obj = MyObject()
    logger = MethodCallLogger(obj.MyFunc)
    obj.MyFunc = logger
    # DoSomething should call MyFunc
    DoSomething(obj)
    assert logger.CallCount == 1

Wednesday, 29 September 2010

Convert 16 bit PCM to IEEE float

NAudio has had the Wave32Stream for quite some time, which converts a 16 bit PCM stream into a stereo IEEE floating point stream, with optional panning and volume. However, it could do with something simpler that doesn’t automatically convert to stereo. So here is a preliminary implementation of an IWaveProvider that converts 16 bit PCM to IEEE float. It keeps the Volume property, as that is always useful to have available, and it keeps the code nice and clean by making use of the WaveBuffer class. I’ll probably add this to NAudio in the near future.

/// <summary>
/// Converts 16 bit PCM to IEEE float, optionally adjusting volume along the way
/// </summary>
public class Wave16toIeeeProvider : IWaveProvider
{
    private IWaveProvider sourceProvider;
    private readonly WaveFormat waveFormat;
    private volatile float volume;
    private byte[] sourceBuffer;

    /// <summary>
    /// Creates a new Wave16toIeeeProvider
    /// </summary>
    /// <param name="sourceProvider">the source provider, which must be 16 bit PCM</param>
    public Wave16toIeeeProvider(IWaveProvider sourceProvider)
    {
        if (sourceProvider.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
            throw new ApplicationException("Only PCM supported");
        if (sourceProvider.WaveFormat.BitsPerSample != 16)
            throw new ApplicationException("Only 16 bit audio supported");

        waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sourceProvider.WaveFormat.SampleRate, sourceProvider.WaveFormat.Channels);

        this.sourceProvider = sourceProvider;
        this.volume = 1.0f;
    }

    /// <summary>
    /// Helper function to avoid creating a new buffer every read
    /// </summary>
    byte[] GetSourceBuffer(int bytesRequired)
    {
        if (sourceBuffer == null || sourceBuffer.Length < bytesRequired)
        {
            sourceBuffer = new byte[bytesRequired];
        }
        return sourceBuffer;
    }

    /// <summary>
    /// Reads bytes from this wave stream
    /// </summary>
    /// <param name="destBuffer">The destination buffer</param>
    /// <param name="offset">Offset into the destination buffer</param>
    /// <param name="numBytes">Number of bytes read</param>
    /// <returns>Number of bytes read.</returns>
    public int Read(byte[] destBuffer, int offset, int numBytes)
    {
        int sourceBytesRequired = numBytes / 2;
        byte[] sourceBuffer = GetSourceBuffer(sourceBytesRequired);
        // read into the start of the scratch buffer, not at the caller's offset
        int sourceBytesRead = sourceProvider.Read(sourceBuffer, 0, sourceBytesRequired);
        WaveBuffer sourceWaveBuffer = new WaveBuffer(sourceBuffer);
        WaveBuffer destWaveBuffer = new WaveBuffer(destBuffer);

        int sourceSamples = sourceBytesRead / 2;
        int destOffset = offset / 4;
        for (int sample = 0; sample < sourceSamples; sample++)
        {
            destWaveBuffer.FloatBuffer[destOffset++] = (sourceWaveBuffer.ShortBuffer[sample] / 32768f) * volume;
        }

        return sourceSamples * 4;
    }

    /// <summary>
    /// <see cref="IWaveProvider.WaveFormat"/>
    /// </summary>
    public WaveFormat WaveFormat
    {
        get { return waveFormat; }
    }

    /// <summary>
    /// Volume of this channel. 1.0 = full scale
    /// </summary>
    public float Volume
    {
        get { return volume; }
        set { volume = value; }
    }
}
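
Hypothetical usage, assuming a 16 bit PCM source such as a WaveFileReader:

IWaveProvider source = new WaveFileReader("input16bit.wav"); // must be 16 bit PCM
var floatProvider = new Wave16toIeeeProvider(source);
floatProvider.Volume = 0.5f; // attenuate the output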

Tuesday, 28 September 2010

Countdown Kata in Python

One of my favourite programming exercises is solving the “countdown” numbers game. Basically, you are given a set of input numbers and have to reach the target by adding, multiplying, subtracting or dividing them (using each input number only once).

As before, this isn’t an ideal solution, as I’m still getting to grips with Python. It uses recursion to find the first solution; I don’t keep track of the closest answer yet.

import unittest

class SolverUnitTests(unittest.TestCase):
    testCases = ( 
        (0, [], True),
        (1, [], False),
        (1, [1], True),
        (2, [1], False),
        (2, [1,1], True),
        (2, [1,7], False),
        (3, [1,1,1], True),
        (1, [3,2], True), # subtract
        (1, [2,3], True),
        (6, [2,3], True), # multiply
        (7, [2,3], False), 
        (4, [8,2], True), # divide
        (4, [2,8], True), # divide
        (4, [9,2], False), 
        (14, [1,7,1], True), # add and multiply
        (18, [1,7,3], True), # subtract and multiply
        (100, [11, 1, 11, 1], True),
        (2010, [25, 4, 2, 10, 5, 2], True),
        (2011, [25, 4, 2, 10, 5, 2], True),
        (2012, [25, 4, 2, 10, 5, 2], True),
        (2013, [25, 4, 2, 10, 5, 2], True),
        (2014, [25, 4, 2, 10, 5, 2], True),
        (16, [2,2,2], False)
        )

    def testCanSolve(self):
        for (target, numbers, solveable) in self.testCases:
            print 'solving', target, 'with', numbers
            solver = Solver(target)
            self.assertEqual(solveable, solver.Solve(numbers))
        
def add(first,second):
    answer = first + second
    return (answer, '%d+%d=%d' % (first,second,answer))

def subtract(first,second):
    answer = first - second
    if answer < 0:
        answer = 0
    return (answer, '%d-%d=%d'  % (first,second,answer))

def multiply(first,second):
    answer = first * second
    if answer < 0:
        answer = 0
    return (answer, '%d*%d=%d'  % (first,second,answer))

def divide(first,second):
    if (second == 0) or ( first % second != 0):
        answer = 0
    else:
        answer = first / second
    return (answer, '%d/%d=%d'  % (first,second,answer))

def pairs(list):
    for i in range(len(list)):
        for j in range(i+1,len(list)):
            yield (list[i],list[j])

class Solver:
    def __init__(self, target):
        self.target = target
        self.operations = (add, subtract, multiply, divide)
        
    def Solve(self, numbers):
        if (self.target in numbers) or (self.target == 0):
            return True
        return self.SolveList(numbers, '')
    
    def SolveList(self, numbers, solutionSoFar):
        numbers.sort(reverse=True) # sort descending so that first >= second in each pair
        for (first, second) in pairs(numbers):
            for func in self.operations:
                (newNumber,solution) = func(first,second)
                if newNumber == self.target:
                    print self.target, ':', solutionSoFar + ', ' + solution
                    return True
                elif newNumber:
                    newList = list(numbers)
                    newList.remove(first)
                    newList.remove(second)
                    newList.append(newNumber)
                    #print 'retry with', newList
                    if self.SolveList(newList, solutionSoFar + ', ' + solution):
                        return True
        return False
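
If you want to try the solver outside the test suite, here’s a quick example (assuming the code above is saved as countdown.py – the filename is just my choice):

from countdown import Solver

solver = Solver(2010)
print solver.Solve([25, 4, 2, 10, 5, 2]) # prints the first solution found, then True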

IronPython Codebreaker Katacast

As promised, I’ve recorded a quick katacast of myself solving the codebreaker kata in Python, using the IronPython continuous testing script I blogged about. Don’t expect super fast coding or best practices – I’m still very much an IronPython newbie – but I have improved the solution slightly over my original offering. There were a few other refactorings I intended to make, but I decided that 10 minutes was long enough.

I’m afraid I haven’t dubbed any classical music onto the recording (it would be incongruous to combine beautiful music with my ugly code). I used Expression Encoder 3 for the screen recording – for some reason Expression Encoder 4 doesn’t work on my computer (it turns the recorded area white, making it impossible to record anything useful). You may notice a ValueError come up occasionally after I save. I still don’t know what causes this, but I simply save again and IronPython successfully reloads and runs the tests. Sadly, the aspect ratio seems to have got squashed in the process of uploading to Vimeo, but it’s still readable.

Here’s the code:

import unittest

class MarkerTests(unittest.TestCase):
    def testNoMatch(self):
        marker = Marker('rgby')
        mark = marker.Mark('xxxx')
        self.assertEqual('', mark)

    def testOneImperfectMatch(self):
        marker = Marker('rgby')
        mark = marker.Mark('xrxx')
        self.assertEqual('m', mark)

    def testTwoImperfectMatches(self):
        marker = Marker('rgby')
        mark = marker.Mark('xrgx')
        self.assertEqual('mm', mark)

    def testImperfectMatchNotDoubleCounted(self):
        marker = Marker('rgby')
        mark = marker.Mark('xrrx')
        self.assertEqual('m', mark)

    def testOnePerfectMatch(self):
        marker = Marker('rgby')
        mark = marker.Mark('xgxx')
        self.assertEqual('p', mark)

    def testOnePerfectOneImperfectMatch(self):
        marker = Marker('rgby')
        mark = marker.Mark('xgxb')
        self.assertEqual('pm', mark)

    def testOnePerfectOnly(self):
        marker = Marker('rgby')
        mark = marker.Mark('rrrr')
        self.assertEqual('p', mark)

    def testAllPerfect(self):
        marker = Marker('rgby')
        mark = marker.Mark('rgby')
        self.assertEqual('pppp', mark)

        
class Marker:
    def __init__(self, answer):
        self.answer = answer
        
    def Mark(self, guess):
        perfectMatches = self.CountPerfectMatches(guess)
        anyPositionMatches = self.CountAnyPositionMatches(guess)
        # any-position matches include the perfect ones, so subtract to get the imperfect count
        return perfectMatches * 'p' + (anyPositionMatches - perfectMatches) * 'm'
        
    def CountPerfectMatches(self, guess):
        return sum([a == b for (a,b) in zip(guess, self.answer)])

    def CountAnyPositionMatches(self, guess):
        count = 0
        answerList = list(self.answer)
        for c in guess:
            if c in answerList:
                count += 1
                answerList.remove(c)
        return count
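
For example, with the Marker class above loaded:

marker = Marker('rgby')
print marker.Mark('xgxb') # prints 'pm' - one perfect match plus one right-letter, wrong-position match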

Friday, 24 September 2010

Autotest for IronPython

Continuing my explorations of IronPython, I decided I wanted a continuous test setup that would automatically run my unit tests every time I saved a .py file – something I had seen on various “katacasts”. After a bit of investigation, I found the promising-looking modipyd, which seemed to be Windows friendly. Unfortunately, github won’t let me download the files, so I set about creating my own basic continuous test tool for IronPython.

One advantage of using IronPython is that it immediately gives me access to the .NET framework’s FileSystemWatcher, which meant I didn’t have to worry about learning threading in Python. I did, however, have to work around one quirk which meant that the file changed event could get triggered multiple times for a single save command in my code editor.

Another challenge was working out how to load or reload a module given its name. This is done with the __import__ function for the initial import, and the sys.modules dictionary (together with the built-in reload function) for subsequent reloads.
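
Here’s a minimal sketch of that load-or-reload logic in isolation (the function name is my own):

import sys

def load_or_reload(moduleName):
    # reload if the module has been imported before, otherwise import it for the first time
    if moduleName in sys.modules:
        reload(sys.modules[moduleName])
    else:
        __import__(moduleName)
    return sys.modules[moduleName]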

It took slightly longer than I hoped to get it fully working. Occasionally I get a spurious ValueError thrown when it attempts a reload; I’m not sure what that is all about. It could also be improved to rerun tests on all loaded modules, not just the one that changed, which would matter if you are not working entirely within a single file. Again, any Python gurus should feel free to suggest improvements.

import unittest
import clr
import sys
from System.IO import FileSystemWatcher
from System.IO import NotifyFilters
from System import DateTime

def changed(sender, args):
    global lastFileTimeWatcherEventRaised
    if DateTime.Now.Subtract(lastFileTimeWatcherEventRaised).TotalMilliseconds < 500:
        return
    moduleName = args.Name[:-3] # strip the '.py' extension
    if reloadModule(moduleName):
        runTests(moduleName)
    lastFileTimeWatcherEventRaised = DateTime.Now

def reloadModule(moduleName):
    loaded = False
    try:
        if moduleName in sys.modules:
            print 'Reloading ' + moduleName
            reload(sys.modules[moduleName])
        else:
            print 'Importing ' + moduleName
            __import__(moduleName)
        loaded = True
    except SyntaxError, s:
        print 'Syntax error loading ' + s.filename, 'line', s.lineno, 'offset', s.offset
        print s.text
    except:
        #sometimes get a ValueError here, not sure why
        error = sys.exc_info()[0]
        print error
    return loaded

def runTests(moduleName):
    loader = unittest.TestLoader()
    suite = loader.loadTestsFromModule(sys.modules[moduleName])
    if suite.countTestCases() > 0:
        runner = unittest.TextTestRunner()
        runner.run(suite)
    else:
        print 'No tests in module'

def watch(path):
    watcher = FileSystemWatcher()
    watcher.Filter = '*.py'
    watcher.Changed += changed
    watcher.Path = path
    watcher.NotifyFilter = NotifyFilters.LastWrite
    watcher.EnableRaisingEvents = True
    # return the watcher so the caller can hold a reference and it isn't garbage collected
    return watcher
    
lastFileTimeWatcherEventRaised = DateTime.Now

if __name__ == '__main__':
    print 'Watching current folder for changes...'
    watcher = watch('.')
    raw_input('press Enter to exit') # raw_input, not input, which would try to evaluate what you type

To use it, run the script with ipy from the folder containing your code (e.g. ipy autotest.py); whenever you save a .py file, it will reload that module and rerun its tests. If I get a chance I’ll record my own “katacast” showing the autotest python module in action.

Update: I've made the katacast. I've also made a slight improvement to the autotest code, moving the setting of lastFileTimeWatcherEventRaised further down to stop long-running tests thwarting the multiple-event filtering.

Thursday, 23 September 2010

Getting Started With IronPython

My first experience of Python was not a good one. I was working on a project to automate the testing of some telecoms equipment. This meant calling a lot of COM objects, which, back in 2003 at least, Python was not very good at. Also, the rudimentary Windows IDE available for Python at the time had a very annoying habit of mixing tabs and spaces, which meant that the indentation level you saw was not necessarily the indentation level you got. The other annoyance was regularly discovering syntax errors in my error reporting code, resulting in the reason for the failure of the overnight test run being lost forever.

But since Microsoft have never really offered a good scripting language for .NET, I decided to revisit Python in the form of IronPython. I’ve been slowly working my way through IronPython in Action, and trying to get back up to speed with the syntax (this online tutorial is very helpful).

As a simple way in, I decided to solve the “codebreaker” kata (basically the Mastermind game). Here are a few of the rudimentary issues I hit along the way.

First, get yourself a command prompt in the folder where you are writing your .py file (the Windows shortcut to the IronPython console will put you in the wrong place). If IronPython is not already in your path, enter:

set path=%PATH%;"c:\Program Files\IronPython 2.7\" 

This will allow you to type either ipy to get the IronPython console, or ipy filename.py to run your script directly.

Second, IronPython 2.7 Alpha 1 seems to have a bug when calling import unittest. This means that you can’t make use of Python’s built-in unit test support. I had to switch to standard Python to carry on (although I suspect IronPython 2.6 would have worked too).

Third, the unit test support in Python sadly has no equivalent of NUnit’s [TestCase] attribute, meaning that parameterized unit tests aren’t supported (without writing some very clever code). There is a feature request filed against Python for this. For the time being I made use of a list of tuples to store my test data.

Fourth, there seems to be no find method for a list (although there is one on strings). You can use index, but it will throw an exception if the item is not found.
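
For example, you can guard the call to index with a membership test:

colours = ['r', 'g', 'b']
print colours.index('g') # prints 1
# index throws ValueError if the item is missing, so check membership first:
if 'x' in colours:
    print colours.index('x')
else:
    print 'not found'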

In case you are interested in my (very sub-optimal) solution, the code follows. Without a doubt there are better ways to do this in Python. Please feel free to offer suggestions for improvement in the comments below.

import unittest

class CodeBreakerTest(unittest.TestCase):
    testcases = (
        ('xxxx',''),
        ('bxxx','m'),
        ('xbxx','m'),
        ('xxyx','m'),
        ('xxxb','m'),
        ('ybxx','mm'),
        ('xxrb','mm'),
        ('ybrx','mmm'),
        ('ybrg','mmmm'),
        ('bbxx','m'),
        ('rxxx','p'),
        ('xgxx','p'),
        ('xxbx','p'),
        ('xxxy','p'),
        ('rgxx','pp'),
        ('rgbx','ppp'),
        ('rgby','pppp'),
        ('rbxx','pm'),
        ('rgyx','ppm'),
        ('rbgy','ppmm') )
    
    def testAll(self):
        marker = Marker('rgby')
        for guess, answer in self.testcases:
            print 'Testing "' + guess + '", expecting "' + answer + '"'
            mark = marker.Mark(guess)
            self.assertEquals(answer, mark)
            
    def test2(self):
        marker = Marker('rggg')
        guess = 'rgyy'
        answer = 'pp'
        mark = marker.Mark(guess)
        self.assertEquals(answer, mark)
        
    def test3(self):
        marker = Marker('rgxx')
        guess = 'rggg'
        answer = 'pp'
        mark = marker.Mark(guess)
        self.assertEquals(answer, mark)

class Marker(object):
    def __init__(self, secret):
        self.secret = secret
        
    def Mark(self, guess):
        perfect = self.PerfectMatch(guess)
        wrongPos = self.WrongPositionMatch(guess)
        wrongPos = wrongPos[len(perfect):]
        return perfect + wrongPos
    
    def PerfectMatch(self, guess):
        answer = ''
        for i in range(len(guess)):
            if self.secret[i] == guess[i]:
                answer += 'p'
        return answer
    
    def WrongPositionMatch(self, guess):
        answer = ''
        secretList = [x for x in self.secret]
        for c in guess:
            index = self.Find(secretList,c)
            if index != -1:
                answer += 'm'
                secretList[index] = [] # blank out the matched letter so it can't be counted twice
        return answer

    def Find(self, list, search):
        for i in range(len(list)):
            if (list[i] == search):
                return i
        return -1

if __name__ == '__main__':
    unittest.main()

Wednesday, 22 September 2010

Unit Testing Object Persistence

Most applications need to save data to disk in order to reload it later. Very often this simply means using a database, particularly when you are dealing with server-side programming. But those of us doing predominantly client-side development often need to save data to custom file formats, or perhaps XML, in order to reload it at a later date. One of the trickiest challenges surrounding this is how to ensure that future versions of your application can still successfully load data saved by earlier versions. There are two types of test you need to write to be sure you have got your persistence code right.

Roundtrip Testing

The first type of test is to take an object (or object graph), save it to a temporary file, and reload it. Then you assert that the exported object is identical to the imported one (an overridden Equals method can be helpful here).

It is important to make sure you cover every possible special case that can be exported, particularly every class that might need to be serialized at some point. Here’s a very simple example of a round-trip test:

string fileName = "test.tmp";
Widget exported = new Widget();
exported.Name = "xyz";
exported.Weight = 20;
WidgetExporter.Export(exported, fileName);
Widget imported = WidgetImporter.Import(fileName);
Assert.AreEqual(exported.Name, imported.Name);
Assert.AreEqual(exported.Weight, imported.Weight);

Legacy Import Testing

There are lots of gotchas around preserving the ability to import data from legacy versions of your application. These are particularly tricky if you use .NET’s built-in XML or binary serialisation. While they can cope with fields being added or removed, things can break horribly when properties change their type or move from one class into a sub-class.

So the second type of test needed is to import some real exported data. What is needed is a store of real exported data from every version of your software that has ever been in the hands of a customer. If you can automate the creation of such data, all the better. Again, you need to ensure that your test data includes an example of every possible type of exported object.

Ideally, your unit tests would go through each file, import it, and meticulously check that all the properties of the imported object are set correctly. In practice, this can be too time consuming to write all the necessary assert statements.

Typically we just choose a few representative files to check thoroughly. But there is still value in importing everything else. Deserialization code will often throw exceptions on errors, so simply importing several hundred files successfully, even without checking their contents, is a worthwhile test. A single test that loops over a folder of legacy export files and asserts that each one imports without throwing is cheap to write and catches a lot of regressions.

Future Proof Serialization

One last piece of advice: choose file formats and deserialization code that are flexible in the face of change. There is nothing worse than not being able to change a class or object hierarchy because it would break serialization. Where possible use XML, as it makes it much easier to handle wholesale changes to schemas down the line.

Friday, 17 September 2010

Push Complexity to the Edges

I blogged a while back about “technical debt interest rates”, where I argued that not all “technical debt” is created equal (by technical debt, I mean code that is hard to maintain, e.g. because it is overly complex or tightly coupled). Sometimes shortcuts make you pay in the long run, but other times they turn out to be a smart move after all. This raises the question of whether you can know how risky the technical debt you are introducing actually is.

Of course, without the ability to accurately predict the future, you can never know for certain, but I want to propose a simple rule of thumb: the closer your compromised code is to the core of your codebase, the greater the penalties it will incur.

Imagine an application whose architecture looks like this:

Clean Architecture

Here we have a fairly clean architecture where the core part of the application talks to three modules which are all isolated from each other. Suppose for a minute that Module A contains terribly complicated code because it was rushed out the door in a hurry. The technical debt it contains won’t actually cause us any pain at all if we need to extend Module B or Module C, or even add a new Module D. That is because it is isolated from the rest of the application.

However, consider a more realistic version of what happens when technical debt is introduced:

Complexity in the Core

Here, the code for feature A was not isolated into its own module, but is inextricably intertwined with the core code. Now we are in big trouble. Because although we may not want to make any changes to feature A, anyone who works on the core of our application has to deal with the added complexity that is in there.

If this seems obvious, that’s because it is. After all, the very compromise being made when introducing technical debt is often that we don’t have time to separate the new functionality out into its own isolated module. However, extracting feature A afterwards takes much longer than doing it right first time, and becomes almost impossible after the same mistake has been made several times over.

The key is to recognise when you are introducing complexity into the core of your application. This is the technical debt that will be most expensive. A plug-in architecture, on the other hand, can allow you to have several isolated areas of complexity whose debt may never need to be paid back. This is why it makes sense to start new applications with a loosely coupled, extensible architecture, rather than deciding you will plumb one in at a later date.

Tuesday, 14 September 2010

Codebase Reformation

I am currently looking into how the architecture of a very large software product (now over 1 million lines of code) can be improved to make it more maintainable and extensible into the future. Inevitably on a project of its size, many of the assumptions and expectations held at the beginning have not proved to be correct. The product has been extended in ways that no one could have foreseen. Sometimes large amounts of technical debt have been introduced as we rushed to meet a vital deadline. New technologies have emerged (it was started as a .NET 1.1 product) which are better suited to the task than anything that was around back then. Additionally, those of us on the development team have grown as programmers during that time. What looked like good code to us back then now looks decidedly poor.

So my task over the last few months has been to produce a document of recommendations about what architectural changes should be made to improve things. I have a good idea of where we are now, and where we ought to be going. But there is one thing that concerns me, and it is summed up well in the following quote:

The reformer is always right about what is wrong. He is generally wrong about what is right. —G.K. Chesterton

In other words, it is one thing to look at a codebase and observe all the things that are wrong about it. It is another thing entirely to know what the correct solution is. Experience tells us that simply adopting a new technology (“let’s use IoC containers”, “let’s use WPF”) will typically solve one set of problems but introduce another set.

The correct approach in my view is to recognise that we are trying to move between two moving targets. “Where we are” is always moving, since any living codebase is being continually worked on. But “where we want to be” is also a moving target, as we grow in our understanding of what the system needs to do and of what constitutes a well-architected system.

Bearing this in mind, it is a mistake to imagine that you can, or should, attempt to “fix” the architecture of a large system in one gigantic refactoring session. There may be a case for making certain major changes to prepare the way for specific new features, or to address significant areas of technical debt, but in my view the best approach to codebase reformation is continual refactoring, allowing our vision of where we are heading to be modified as our horizons expand.