Sunday, 29 June 2014

How to create zip backups of previous git commits

I’m working on a new Pluralsight course at the moment, and one of the things I need to do is provide “before” and “after” code samples for all the demos I create. In theory this should be easy: since I’m using git for source control, I can go back to any previous point in time using git checkout and then zip up my working folder. But the trouble with that approach is that the zip would also contain a whole load of temporary files and build artefacts that I don’t want. What I needed was a way to quickly create a zip file containing only the files under source control for a specific revision.

I thought at first I’d need to write my own tool to do this, but I discovered that the built-in git archive command does exactly what I want. To create a zip file for a specific commit you just use the following command:

git archive --format zip --output example.zip 47636c1

Or if like me you are using tags to identify the commits you want to export, you can use the tag name instead of the hash:

git archive --format zip --output begin-demo.zip begin-demo

Or if you just want to export the latest files under source control, you can use a branch name instead:

git archive --format zip --output master.zip master
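
You can also restrict the archive to particular paths by listing them after the commit reference. So something like this (the folder name here is just for illustration) would export a single demo’s folder:

git archive --format zip --output demo.zip begin-demo demos/begin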

In fact, if you have Atlassian’s excellent free SourceTree utility installed, it’s even easier: you can just right-click any commit and select “archive”. Anyway, this feature saved me a lot of time, so I hope it proves useful to someone else as well.

Tuesday, 24 June 2014

How to Create a Breaking Non-Space in HTML

If you’ve done any web development, you’re probably familiar with the commonly used “non-breaking space”, entered in HTML as &nbsp;. The non-breaking space prevents a line break from occurring at that point, even though there is a space. You might use one to keep an icon together with the word it is associated with.

But recently I found myself wanting the opposite: a “breaking non-space” if you like. In other words, I wanted to mark certain points within a long word where I wanted a line-break to be allowed if necessary. This is useful for long camel-cased names of libraries such as my SharpMediaFoundation project. It’s too long to fit in the sidebar of my blog so I wanted it to be able to break after Sharp or Media.

It took a bit of searching, but eventually I found how to do this in HTML. It’s called the “Word Break Opportunity”, and you simply need to insert <wbr> at the points you are happy for a line-break to occur. So for my example I simply needed to enter Sharp<wbr>Media<wbr>Foundation. It’s not a feature you’ll need a lot, but occasionally it comes in handy.

Friday, 20 June 2014

How to Zip a Folder with ASP.NET Web Pages

Since I’m basing my new blog on MiniBlog, all my posts are stored as XML files in a posts folder. I wanted a simple way to create a backup of my posts, without resorting to FTP.

My solution is to create a URL that allows me to download the entire posts folder as a zip archive. To do the zipping, I used DotNetZip, which is available as a NuGet package and has a nice clean API.

Then in the Razor view (e.g. export.cshtml), the following code can be used to create a zip archive:

@using Ionic.Zip
@{
    Response.Clear();
    Response.BufferOutput = false; // for large files...
    string archiveName = String.Format("archive-{0}.zip",
            DateTime.Now.ToString("yyyy-MMM-dd-HHmmss"));
    Response.ContentType = "application/zip";
    Response.AddHeader("content-disposition",
            "filename=" + archiveName);
    var postsFolder = Server.MapPath("~/posts");
    using (ZipFile zip = new ZipFile())
    {
        // add the entire posts folder to the archive
        zip.AddDirectory(postsFolder);
        // add a custom readme generated from a string
        zip.AddEntry("Readme.txt",
            String.Format("Archive created on {0}", DateTime.Now));
        // write the zip straight to the response stream
        zip.Save(Response.OutputStream);
    }
    Response.Close();
}

As you can see, it’s very straightforward, and I’ve also shown how to add your own custom Readme.txt from a string. If you’d rather add each file manually, just pass an enumeration of file paths to zip.AddFiles.
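
For example, inside the same using block you could add just the XML post files, something like this (a sketch – AddFiles takes an enumeration of file paths, plus an optional folder name to use within the archive):

var postFiles = System.IO.Directory.GetFiles(postsFolder, "*.xml");
zip.AddFiles(postFiles, "posts"); // files end up in a "posts" folder inside the zip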

Finally, it would probably not be a great idea to let just anyone call this, so you can protect it with a simple check of IsAuthenticated:

if (User.Identity.IsAuthenticated)
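
In context, the export page can simply bail out early for anyone who isn’t logged in – a minimal sketch:

@{
    if (!User.Identity.IsAuthenticated)
    {
        Response.StatusCode = 401; // refuse the download
        return;
    }
    // ... create and send the zip as shown above
}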

Thursday, 19 June 2014

Why Use an Event Aggregator?

All programs need to react to events. When X happens, do Y. Even the most trivial “Hello, World” application prints its output in response to the “program was started” event. And as our programs gain features, the number of events they need to respond to (such as button clicks, or incoming network messages) grows, meaning the strategy we use for handling events will have a big impact on the overall maintainability of our codebase.

In this post I want to compare four different ways you can respond to a simple event. For our example, the event is a “Create User” button being clicked in a Windows Forms application. And in response to that event our application needs to do the following things:

  1. Ensure that the entered user data is valid
  2. Save the new user into the database
  3. Send the user a welcome email
  4. Update the GUI to add the new user to a ListBox of users

Approach 1 – Imperative Code

This first approach is in one sense the simplest. When the event happens, just do whatever is needed. So in this example, right in the button click handler, we’d construct whatever objects we needed to perform those four tasks:

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // validate user
    if (string.IsNullOrEmpty(user.Name) ||
        string.IsNullOrEmpty(user.Password) ||
        string.IsNullOrEmpty(user.Email))
    {
        MessageBox.Show("Invalid User");
        return;
    }
    
    // save user to database
    using (var db = new SqlConnection(@"Server=(localdb)\v11.0;Initial Catalog=EventAggregatorDemo;Integrated Security=true;"))
    {
        db.Open();
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO Users (UserName, Password, Email) VALUES (@UserName, @Password, @Email)";
            cmd.Parameters.Add("UserName", SqlDbType.VarChar).Value = user.Name;
            cmd.Parameters.Add("Password", SqlDbType.VarChar).Value = user.Password;
            cmd.Parameters.Add("Email", SqlDbType.VarChar).Value = user.Email;
            cmd.ExecuteNonQuery();
        }
    
        // get the identity of the new user
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "SELECT @@IDENTITY";
            var identity =  cmd.ExecuteScalar();
            user.Id = Convert.ToInt32(identity);
        }
    }
    
    // send welcome email
    try
    {
        var fromAddress = new MailAddress(AppSettings.EmailSenderAddress, AppSettings.EmailSenderName);
        var toAddress = new MailAddress(user.Email, user.Name);
        const string subject = "Welcome";
        const string body = "Congratulations, your account is all set up";
    
        var smtp = new SmtpClient
        {
            Host = AppSettings.SmtpHost,
            Port = AppSettings.SmtpPort,
            EnableSsl = true,
            DeliveryMethod = SmtpDeliveryMethod.Network,
            UseDefaultCredentials = false,
            Credentials = new NetworkCredential(fromAddress.Address, AppSettings.EmailPassword)
        };
        using (var message = new MailMessage(fromAddress, toAddress)
        {
            Subject = subject,
            Body = body
        })
        {
            smtp.Send(message);
        }
    }
    catch (Exception emailException)
    {
        MessageBox.Show(String.Format("Failed to send email {0}", emailException.Message));
    }
    
    // update gui
    listBoxUsers.Items.Add(user.Name);
}

What’s wrong with this code? Well, multiple things. Most notably, it violates the Single Responsibility Principle. Here inside our GUI we have code to talk to the database, code to send emails, and code that knows about our business rules. This means it’s going to be almost impossible to unit test in its current form. It also means we’ll likely end up with cut-and-paste code if we later discover that another part of our system needs to create new users, because that code will need to perform the same sequence of actions.

The reason we’re in this mess is that we have tightly coupled the code that publishes the event (in this case the GUI), to the code that handles that event.

So what can we do to fix this? Well, if you know the “SOLID” principles, you know it’s always a good idea to introduce some “Dependency Injection”. So let’s do that next…

Approach 2 – Dependency Injection

What we could do here is create a few classes each with a single responsibility. A UserValidator validates the entered data, a UserRepository saves users to the database, and an EmailService sends the welcome email. And we give each one an interface, allowing us to mock them for our unit tests.
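
For reference, those interfaces might look something like this (a sketch based on how they’re used below):

public interface IUserValidator
{
    bool Validate(User user);
}

public interface IUserRepository
{
    void AddUser(User user);
}

public interface IEmailService
{
    void Email(User user, string subject, string body);
}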

Suddenly, our create user button click event handler has become a whole lot simpler:

public NewUserForm(IUserValidator userValidator, IUserRepository userRepository, IEmailService emailService)
{
    this.userValidator = userValidator;
    this.userRepository = userRepository;
    this.emailService = emailService;
    InitializeComponent();
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // validate user
    if (!userValidator.Validate(user)) return;

    // save user to database
    userRepository.AddUser(user);

    // send welcome email
    const string subject = "Welcome";
    const string body = "Congratulations, your account is all set up";
    emailService.Email(user, subject, body);

    // update gui
    listBoxUsers.Items.Add(user.Name);
}


So we can see we’ve improved things a lot, but there are still some issues with this style of code. First of all, we’ve probably not taken DI far enough. Our GUI still has quite a lot of knowledge about the workflow of handling this event – we need to validate, then save, then send an email. So we’d probably find ourselves wanting to create another class just to orchestrate these three steps.
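
That orchestrating class might look something like this (a sketch – the name is hypothetical):

public class CreateUserWorkflow
{
    private readonly IUserValidator userValidator;
    private readonly IUserRepository userRepository;
    private readonly IEmailService emailService;

    public CreateUserWorkflow(IUserValidator userValidator,
        IUserRepository userRepository, IEmailService emailService)
    {
        this.userValidator = userValidator;
        this.userRepository = userRepository;
        this.emailService = emailService;
    }

    public bool CreateUser(User user)
    {
        if (!userValidator.Validate(user)) return false;
        userRepository.AddUser(user);
        emailService.Email(user, "Welcome",
            "Congratulations, your account is all set up");
        return true;
    }
}

The form would then take a single dependency, calling workflow.CreateUser(user) from its click handler.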

Another related problem is the construction of objects. Inversion of Control containers can help you out quite a bit here, but it’s not uncommon to find yourself having classes with dependencies on dozens of interfaces, just because you need to pass them on to child objects that get created later. Even in this simple example we’ve got three dependencies in our constructor.

But does the GUI really need to know anything whatsoever about how this event should be handled? What if it simply published a NewUserRequested event and let someone else handle it?

Approach 3 – Raising Events

So the third approach is to take advantage of .NET’s built-in events, and simply pass on the message that the CreateUser button has been clicked:

public event EventHandler<UserEventArgs> NewUserRequested;

protected virtual void OnNewUserRequested(UserEventArgs e)
{
    var handler = NewUserRequested;
    if (handler != null) handler(this, e);
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // send an event
    OnNewUserRequested(new UserEventArgs(user));
}

public void OnNewUserCreated(User newUser)
{
    // now a user has been created, we can update the GUI
    listBoxUsers.Items.Add(newUser.Name);
}

What’s going on here is that the button click handler now does nothing except gather up what was entered on the screen and raise an event. It’s now completely up to whoever subscribes to that event to deal with it (performing our tasks of Validate, Save to Database and Send Email), and then they need to call us back on our “OnNewUserCreated” method so we can update the GUI.
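
For completeness, UserEventArgs is just a simple wrapper around the User, and the wiring-up (a sketch – it could live wherever the form is created, reusing the services from the previous approach) might look like this:

public class UserEventArgs : EventArgs
{
    public UserEventArgs(User user)
    {
        User = user;
    }

    public User User { get; private set; }
}

// hypothetical wiring, wherever the form is constructed:
form.NewUserRequested += (sender, args) =>
{
    if (!userValidator.Validate(args.User)) return;
    userRepository.AddUser(args.User);
    emailService.Email(args.User, "Welcome",
        "Congratulations, your account is all set up");
    form.OnNewUserCreated(args.User);
};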

This approach is very nice in terms of simplifying the GUI code. But one issue that you can run into is similar to the one faced with the dependency injection approach. You can easily find yourself handling an event only to pass it on to the thing that really needs to handle it. I’ve seen applications where an event is passed up through 7 or 8 nested GUI controls before it reaches the class that knows how to handle it. Can we avoid this? Enter the event aggregator…

Approach 4 – The Event Aggregator

The event aggregator completely decouples the code that raises the event from the code that handles it. The event publisher doesn’t know or care who is interested, or how many subscribers there are. And the event subscriber doesn’t need to know who is responsible for publishing it. All that is needed is that both the publisher and subscriber can talk to the event aggregator. And it may be acceptable to you to use a Singleton in this case, although you can inject it if you prefer.

So in our next code sample, we see that the GUI component now just publishes one event to the event aggregator (a NewUserRequested event) when the button is clicked, and subscribes to the NewUserCreated event in order to perform its GUI update. It needs no knowledge of who is listening to NewUserRequested or who is publishing NewUserCreated.

public CreateUserForm()
{
    InitializeComponent();
    EventPublisher.Instance.Subscribe<NewUserCreated>
        (n => listBoxUsers.Items.Add(n.User.Name));
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    EventPublisher.Instance.Publish(new NewUserRequested(user));
}

As you can see, this approach leaves us with trivially simple code in our GUI class. The subscribers too are simplified since they don’t need to be wired up directly to the class publishing the event.
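
I haven’t shown the EventPublisher itself, but there’s nothing magic about it. A minimal version can be little more than a dictionary mapping message types to handlers – here’s a sketch (not thread-safe, and it holds strong references to subscribers; see the upgrades discussed below):

using System;
using System.Collections.Generic;

public class EventPublisher
{
    public static readonly EventPublisher Instance = new EventPublisher();

    // maps each message type to the handlers subscribed to it
    private readonly Dictionary<Type, List<Delegate>> subscribers
        = new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        List<Delegate> handlers;
        if (!subscribers.TryGetValue(typeof(TEvent), out handlers))
        {
            handlers = new List<Delegate>();
            subscribers[typeof(TEvent)] = handlers;
        }
        handlers.Add(handler);
    }

    public void Publish<TEvent>(TEvent message)
    {
        List<Delegate> handlers;
        if (!subscribers.TryGetValue(typeof(TEvent), out handlers)) return;
        // copy the list so a handler can subscribe or unsubscribe mid-publish
        foreach (Action<TEvent> handler in handlers.ToArray())
        {
            handler(message);
        }
    }
}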

Benefits of the Event Aggregator Approach

There are many benefits to this approach beyond the decoupling of publishers from subscribers. It is conceptually very simple and easy for developers to get up to speed on. You can introduce new types of messages easily without making changes to public interfaces. It’s very unit testing friendly. It also discourages chatty interactions with dependencies and encourages a more asynchronous way of working – send a message with enough information for the handlers to deal with it, and then wait for the response, which is simply another message on the event bus.

There are several upgrades to a vanilla event aggregator that you can create to make it even more powerful. For example, you can give subscribers the capability to specify what thread they want to handle a message on (e.g. GUI thread or background thread). Or you can use WeakReferences to reduce memory leaks when subscribers forget to unsubscribe. Or you can put global exception handling around each callback to a subscriber so you can guarantee that when you publish a message every subscriber will get a chance to handle it, and the publisher will be able to continue.
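
To illustrate that last upgrade, the Publish method from my sketch above could wrap each callback like this:

public void Publish<TEvent>(TEvent message)
{
    List<Delegate> handlers;
    if (!subscribers.TryGetValue(typeof(TEvent), out handlers)) return;
    foreach (Action<TEvent> handler in handlers.ToArray())
    {
        try
        {
            handler(message);
        }
        catch (Exception e)
        {
            // log and swallow, so one faulty subscriber can't stop
            // the others from seeing the message
            System.Diagnostics.Trace.WriteLine("Subscriber failed: " + e);
        }
    }
}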

There are many situations in which the event aggregator really shines. For example, imagine you need an audit trail tracking many different events in your application. You can create a single Auditor class that simply needs to subscribe to all the messages of interest that come through the event aggregator. This helps keep cross-cutting concerns in one place.
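
With the sketch above, such an Auditor could be as simple as this (assuming each message exposes the User it carries):

public class Auditor
{
    public Auditor()
    {
        EventPublisher.Instance.Subscribe<NewUserRequested>(
            e => Audit("New user requested: " + e.User.Name));
        EventPublisher.Instance.Subscribe<NewUserCreated>(
            e => Audit("New user created: " + e.User.Name));
    }

    private void Audit(string message)
    {
        // write to wherever the audit trail lives, e.g. a database table
    }
}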

Another great example is a single event that can be fired from multiple different places in the code, such as when the user requests help within a GUI application. We simply need to publish a HelpRequested message to the aggregator with contextual information of what screen they were on, and a single subscriber can ensure that the correct help page is launched.

Where Can I Get An Event Aggregator?

Curiously, despite the extreme usefulness of this pattern, no event aggregator implementation seems to have emerged as a “winner” in the .NET community. Perhaps this is because it is so easy to write your own. And perhaps also because the extensions and upgrades you’d want depend on what you are using it for. Here are a few to look at to get you started:

  • Udi Dahan’s Domain Event pattern
    • Uses an IDomainEvent marker interface on all messages
    • Integrates with the container to find all handlers of a given event
    • Has a slightly odd approach to threading (pubs and subs always on the same thread)
  • José Romaniello’s Event Aggregator using Reactive Extensions
    • A very elegant and succinct implementation using the power of Rx
    • Subscriptions are Disposable
    • Lets you use the power of Rx to filter out just the events you are interested in, and handle them on whatever thread you want
  • Laurent Bugnion’s Messenger (part of MVVM Light)
    • Uses weak references to prevent memory leaks
    • Can pass “tokens” for context
    • Lets you pass in the subscriber object, allowing you to unsubscribe from many events at once
    • Allows you to subscribe to a base type and get messages of derived types
  • PRISM Event Aggregator (from Microsoft patterns and practices)
    • A slightly different approach to the interface – events inherit from EventBase, which has Publish, Subscribe and Unsubscribe methods.
    • Supports event filtering, subscribing on UI thread, and optional strong references (weak by default)

I’ve made a few myself which I may share at some point too. Let me know in the comments what event aggregator you’re using.

Tuesday, 17 June 2014

New Blog–markheath.net

I’m pleased to be able to announce the launch of my new blog today. It’s up at markheath.net, built with ASP.NET and hosted on Azure. The plan is to migrate away from my old blog on blogger, but I’ll probably double-post to both locations for a while. Also, I’ve imported all my old posts from blogger, so everything is available on the new site.

If you want to subscribe, you can use my new feedburner URL: feeds.feedburner.com/soundcode. I’ll also shortly be pointing my old feedburner feed for the blogger blog at markheath.net, so if you subscribed using that, you should still get my new material.

Why MiniBlog?

After writing recently about my dilemma over which blogging platform to choose, I eventually opted to base my site on Mads Kristensen’s MiniBlog. This wasn’t the option most people recommended to me, but it had several things in its favour.

First, it’s built with technologies I want to improve in. This is what set it apart from options like WordPress (PHP and MySQL), Ghost (NodeJS and Postgres), or Jekyll (Ruby). Though it would be cool to get some skills in those technologies at some point in the future, I’ve realised that life’s too short to become an expert in everything. So I’d rather the time I spend customising and troubleshooting my new blog went towards learning something closer to the top of my priority list.

Second, it’s small enough to get my head around. I do know a fair bit of ASP.NET, but I don’t make websites as part of my day job, so it can sometimes be frustratingly slow to make the modifications I want. MiniBlog does a lot with a very small amount of code, and it’s allowed me to quickly learn how it all works and add in several customisations of my own (such as inserting a link to my NAudio Pluralsight course at the end of posts about NAudio).

Third, it uses XML files for posts. This made it very straightforward for me to create a tool to migrate all my blogger posts into the MiniBlog format and check it all worked locally before pushing to Azure. I may move to a database at some point, but I also want to experiment with storing posts as Markdown.

Fourth, it supports Windows Live Writer. WLW is a great blogging tool, but with a few rough edges. I’m really hoping Scott Hanselman succeeds in his quest to get it open sourced. It would be great to see features such as syncing drafts and custom dictionaries to the cloud, blogging in Markdown, and greater control over image and source code markup.

Why Disqus?

MiniBlog actually has fairly good built-in comment support, and I initially intended to use that for comments, since it allows me to be in full control of all my data. But the more I thought about it, the more Disqus made sense. They get to solve the spam problem, and they get to handle notifying discussion participants of new messages. They also seem to be good at letting you export your comments, so the data still belongs to me. I upgraded my Blogger blog comments to use Google+ a few months back, and that was a big mistake. Now I don’t get notified when anyone comments on a post, and those comments don’t appear in the exported blog XML file, unlike the ones made with the old method.

What’s Next?

I’ve still got a few features to add to the new site before it reaches feature parity with the Blogger one. I want to enable searching the archives and seeing the full categories list. I’ve also still got to add code syntax highlighting back in, although actually I think the default bootstrap styling of code isn’t too bad. Let me know if you encounter any issues with my new site.

Friday, 6 June 2014

Running Windows Forms on Linux with Mono

Although WinForms may be “dead”, it does have one trick up its sleeve that WPF doesn’t: you can run WinForms apps on mono. Here’s a simple guide to running a Windows Forms application on Ubuntu.

Step 1 - Install Mono

Open a terminal window, and make sure everything is up to date with the following commands:

sudo apt-get update
sudo apt-get upgrade

Now you can install mono with the following command:

sudo apt-get install mono-complete

Step 2 - Create an Application

Now we need to create our C# source file. You can use any text editor you like, but if, like me, you aren’t familiar with Linux text editors like vi or emacs, gedit is a simple notepad-like application which is easy to use. Launch it with the following command (the ampersand at the end tells the terminal not to wait for gedit to close before letting us continue):

gedit wf.cs &

Now let’s create a very simple application:

using System;
using System.Windows.Forms;

public class Program
{
    [STAThread]
    public static void Main()
    {
        var f = new Form();
        f.Text = "Hello World";
        Application.Run(f);
    }
}

Step 3 - Compile and Run

Now we’re ready to compile. The C# compiler in mono is gmcs. We’ll need to tell it we’re referencing the Windows Forms DLL:

gmcs wf.cs -r:System.Windows.Forms.dll

To run the application, simply call mono, passing in the executable:

mono wf.exe

And that’s all there is to it! We have a WinForms app running on Linux.


Although mono doesn’t support everything in WinForms, you can use most standard controls, so you can easily add further UI elements.
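
For example, here’s a variation with a couple of extra standard controls (a sketch; it also needs a reference to System.Drawing.dll for the Point type):

using System;
using System.Drawing;
using System.Windows.Forms;

public class Program
{
    [STAThread]
    public static void Main()
    {
        var f = new Form();
        f.Text = "Hello World";
        var textBox = new TextBox { Location = new Point(10, 10), Width = 200 };
        var button = new Button { Text = "Greet", Location = new Point(10, 40) };
        // standard event handlers work as you'd expect under mono
        button.Click += (s, e) => textBox.Text = "Hello from mono!";
        f.Controls.Add(textBox);
        f.Controls.Add(button);
        Application.Run(f);
    }
}

Compile it with both references: gmcs wf.cs -r:System.Windows.Forms.dll -r:System.Drawing.dll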


Taking it Further

Obviously writing applications by hand like this is a bit cumbersome, but there is an IDE you can use on Linux called MonoDevelop. You install it like this:

sudo apt-get install monodevelop

This then gives you a nice editing environment, allowing you to debug and manage project references (you’d usually add System.Windows.Forms and System.Drawing). Unfortunately it doesn’t offer a WinForms designer – for desktop apps it prefers you to use GTK#. Nevertheless, it’s a nice free IDE that lets you experiment with getting your existing Windows Forms applications working cross-platform on Linux. (It seems this will also work on OS X with mono installed, but I don’t have a Mac so I haven’t tried it out.)


Thursday, 5 June 2014

Is Windows Forms Dead Yet?

Everyone knows WinForms is dead, right? It’s been dead ever since WPF was launched eight years ago, and now everyone has completely abandoned WinForms and is writing everything in XAML using MVVM.

Except they’re not. For starters, there are still a huge number of existing legacy applications that are built using WinForms, many of which are hundreds of thousands of lines of code in size. The amount of time required to port them to WPF would be prohibitively large. And there might not even be much of a point anyway. A lot of business apps don’t have a pressing need for the fancy animation capabilities that WPF offers. And while MVVM may make the developers happy, the customers don’t really care what UI design pattern you used so long as the thing works.

Even more surprisingly perhaps, I’m still seeing a large number of companies creating brand new applications in WinForms. Why? Well, it’s a programming model they know and understand, it does everything they need, and it runs everywhere they need it to. So why should they invest the time learning XAML, dependency properties and attached behaviours, if what they already know can get the job done?

So is WinForms dead or not? Well, it depends…

Of course it’s dead!

It’s dead in the sense that it is getting no more new features. It is however still picking up bugfixes. Even as recently as .NET 4.5.2, Microsoft made some improvements to WinForms. But don’t expect any new controls or improvements to the programming model (no strongly typed collections coming soon).

It’s also dead in the sense that certain new types of application can’t be built with WinForms. Most notably, you can’t create a Windows Store application using WinForms. You also can’t use it on Windows Phone or the Xbox. Of course, for many line of business applications, these platforms were never being targeted in the first place.

No, it’s still alive (just)!

So why are people still using WinForms? The answer is simple: return on investment. Many companies and developers have invested heavily into Windows Forms over the years, and want to maximise their return on that investment. This investment includes…

  • Time invested learning the WinForms programming model
  • Time invested developing existing WinForms applications and controls
  • Money invested in buying suites of custom WinForms controls

So WinForms stubbornly persists, despite having been superseded by WPF (and possibly both will be eclipsed by HTML 5 in the not too distant future). And despite all its frustrating flaws and limitations, I believe it is still possible to create good quality Windows desktop applications using WinForms.

In fact, I’m working on a new course for Pluralsight which will highlight some of what I consider to be best practices for Windows Forms development. I hope to provide guidance for all developers who are still building stuff with it, whether by choice, or through gritted teeth. And I hope to show in the course how you might write your code in such a way that migration to a newer UI framework isn’t impossible. So watch this space if you’re still a WinForms developer, and do feel free to put in any requests for topics you think should be covered in the course.

Wednesday, 4 June 2014

My Headphone Recommendations


For years my main headphones for mixing and monitoring had been AKG K270s, but a year ago they suffered an untimely death at the hands of my two-year-old and a pair of scissors. So I decided it was about time I got myself a few decent pairs of headphones: I wanted a pair for my audio interface, a pair for my stage piano, and a pair to use at work.

AKG K271

The obvious choice would have been the successor to the K270s, the AKG K271 MKII, which is doubtless a great pair of headphones as well, but I wanted to try out a few different makes. One advantage of the K271 worth mentioning is a clever switch built into the headband that mutes the sound when you take them off. This is actually quite handy at work when someone comes to talk to you - you can just remove your phones without needing to pause your music. These phones can usually be bought for around £100 these days, a bit of a drop from their original price.

They also have very good sound isolation, making them a great choice for monitoring while recording.

KRK KNS 8400

The first pair I got to replace my K270s for mixing and monitoring was the KRK KNS 8400. KRK have only recently entered the headphone market, but they have a great reputation for speakers, so I was willing to take a risk. I was very pleased I did. These headphones are very lightweight and supremely comfortable to wear, as well as sounding great, with surprisingly good bass response. They come with a nice carry pouch, and you can easily rotate an earpiece to listen with one ear without putting the phones on. They also have a detachable cable, which means I won't have to buy a new pair if another unfortunate scissor incident were to occur. These phones are usually available at around £130, making them a little more expensive than some of the other options, although I got mine as part of a bundle with a soundcard and microphone, which reduced the price a bit.

Audio Technica ATH-M50

The pair I went for to use with my stage piano was the Audio Technica ATH-M50. These have an extremely good reputation, and with the updated ATH-M50X being launched this year, the old model was available at a good discount (around £80 instead of the usual £130). I have to say, these really deserve their reputation.

They're heavier than the KNS 8400s, but just as comfortable, and slightly louder. If anything I prefer their sound slightly to the KNS 8400s. They also have a carry pouch, and an ingenious way of collapsing smaller for transit. Sadly the M50 doesn't have a replaceable cable (although the newer M50X does), and its cable is coiled. I thought this might be annoying at first, but actually I find it helps keep it from getting tangled up with all the other wires.

Overall I’m really pleased with these phones, and probably would say they are the nicest pair I’ve owned.


TASCAM TH-02


For my third pair, which was going to be used at work, I decided to take a bit of a gamble. The TASCAM TH-02 headphones are outrageously cheap, and yet seem to have a cult following of enthusiastic supporters who claim their sound rivals that of much more expensive headphones. I decided that for just £20, I could hardly go wrong, so ordered a pair.

They turned out to be a bit of a disappointment. The phones are not particularly comfortable to wear, with the ear pads not sitting nicely over my ears like the ones on the ATH-M50, KNS 8400 or K270. I suppose at this price, lower build quality was to be expected. I find they leave me with an uncomfortably sweaty head after wearing them for an hour or so.

As for sound quality, I was also underwhelmed. I'm not sure what the fuss was about, as when playing my stage piano through them (one of my favourite ways to test out the full tonal range of a pair of headphones), I got some harsh midrange that wasn't present on the ATH-M50s. Don't get me wrong, these are probably superior to the vast majority of £20 headphones, but they don't compare to the phones in the £100 range.

So I've given up on the TH-02, and decided to grab another pair of ATH-M50s while they're still available at a cut price. It seems many shops now only have the white ones left, so stocks may be running out. I'd definitely recommend grabbing a pair while you can if you're in the market for a quality pair of headphones at a cut-down price. There are a few other makes that I've yet to try out. The Sony MDR-7506 and Sennheiser HD 280 Pro both have great reputations too, so maybe next time I'm in the market I'll try one of those, but I have enough pairs for now. Let me know in the comments which headphones you recommend.