Friday, 19 December 2008

Soundwave XAML

I visited the NoiseTrade website this week and noticed a neat-looking soundwave icon they use throughout the site. I doubt it is original to them, but it works nicely and I thought I would have a go at creating it in XAML.

It's very simple to do. You create a series of concentric circles and then mask off the bit you don't want with a Polygon.

Soundwave in XAML

Here’s the XAML:

<Grid Width="100" Height="100">
    <Ellipse Width="10" Height="10" Fill="Orange" />
    <Ellipse Width="30" Height="30" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="50" Height="50" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="70" Height="70" Stroke="Orange" StrokeThickness="5" />
    <Polygon Stretch="Fill" Fill="White" Points="0,0, 1,1 -1,1 -1,-1 1,-1" />
</Grid>

The only really clever thing going on here is that we use the Stretch property of the Polygon to centre our cut-out shape over the centre point of the circles. The Stretch property also means we can use nominal coordinates, making it easier to get the coordinates right. Alternatively we could have used the following:

<Polygon Fill="Red" Points="50,50, 100,100 0,100 0,0 100,0" />

The outer Grid must be sized as a square so that the cut-out's edges sit at 45 degrees when using the Stretch=Fill method (the nominal points are stretched to fill the Grid, so a non-square Grid would distort the angles). Here's the whole thing with the cut-out in red to make it more obvious what is going on:

XAML Soundwave with clipping polygon

I then tried using Expression Blend to subtract the Polygon from each of the Ellipses in turn, but it struggled to do what I wanted, as it closed the new arcs, giving the following effect (I am not sure why the right-hand sides look clipped):

Soundwave closed path

However, I was able to fix this by first removing the final "z" from each newly created Path, which meant that the shapes were no longer closed, and then deleting the extra node from each path using the node editor tool. This gets us almost there, with the exception that Expression Blend has let the central circle drift slightly out of position:

XAML Soundwave in Blend
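In case you're wondering, the trailing "z" in a Path's Data is the close-path command, which draws a straight line back to the figure's start point. Here's a minimal hand-written illustration of the difference (these two paths are my own, not taken from the generated XAML below):

<!-- with the trailing z, the arc is closed back to its start point, giving a pie-slice -->
<Path Stroke="Orange" StrokeThickness="5" Data="M 10,0 A 10,10 0 0 1 10,20 z" />
<!-- without it, the same arc renders as an open stroke -->
<Path Stroke="Orange" StrokeThickness="5" Data="M 10,0 A 10,10 0 0 1 10,20" />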

The resulting XAML isn’t quite so clean as our initial attempt, but it has the advantage of not relying on a white shape covering things up, meaning that it can be placed on top of any background image.

<Grid Width="100" Height="100">
  <Path HorizontalAlignment="Right" Margin="0,46.464,45,46.464" Width="5" Fill="Orange" Stretch="Fill" Data="M3.535533,0 C4.4403558,0.90481973 5,2.1548209 5,3.535533 C5,4.916245 4.4403558,6.1662469 3.535533,7.0710659 L0,3.535533 z"/>
  <Path Margin="0,38.661,34.5,38.339" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M11.338835,2.5 C13.600883,4.7620502 15,7.8870525 15,11.338835 C15,14.790609 13.600889,17.915615 11.338835,20.17767" Width="8.808" HorizontalAlignment="Right"/>
  <Path Margin="0,31.59,24.5,31.41" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M18.409901,2.5 C22.481602,6.5716963 25,12.19669 25,18.409901 C25,24.623108 22.481606,30.248114 18.409901,34.319801" Width="11.737" HorizontalAlignment="Right"/>
  <Path Margin="0,24.519,14.5,24.481" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M25.480968,2.5 C31.362314,8.3813419 35,16.506346 35,25.48097 C35,34.455612 31.362314,42.580608 25.480968,48.461937" Width="14.665" HorizontalAlignment="Right"/>
</Grid>

Interestingly, when this is rendered in Kaxaml, the inner segment appears in the right place:

XAML Soundwave in Kaxaml

Thursday, 18 December 2008

When to Review Code

I am assuming I can take it for granted that most developers acknowledge the value of performing code reviews. All the companies I have worked for have had some kind of policy stating that a code review must take place, but when the code review takes place can vary dramatically. While I was writing this post, I came across a couple of helpful Stack Overflow questions with more insights on when a code review should take place (When to review code - before or after checkin to main and When is the most effective time to do code reviews).

I'll discuss three possible times to perform a code review:

Post Check-in

I think this is the most common time for code reviews to take place. The developer checks in their code, then arranges for it to be reviewed.

Advantages:
  • The developer has actually "finished" coding, meaning that issues found really are issues as opposed to "oh yes I was going to do that later".
  • The developer is not held up waiting around for a reviewer - they can get their code checked in and start work on the next task.
  • Source Control systems typically have tools which make it very easy to highlight just the changes that have been made in a particular check-in. This is very convenient for reviewing bug fixes.
Disadvantages:
  • Unreviewed code can get into the main branch, increasing the chances of an inadvertent show-stopper bug.
  • There is potential for the review to be put on the back burner and forgotten as there is nothing being held up by it.
  • Management may view the feature as "complete" and thus resist allocating further time to it if the review recommends some refactoring.
  • The checked-in code may have already gone through a manual testing cycle, which is a strong disincentive to touch the code even to implement minor enhancement suggestions (as they invalidate the tests).
  • The developer has likely moved on to a new task by the time the code review takes place, meaning the code is no longer fresh in their mind, and (depending on the way source control is used) they can easily end up mixing code review enhancements with new code for an unrelated feature in a single check-in.
It is interesting how the use of branches in source control can lessen the impact of many of the points I have made here (both advantages and disadvantages).

Pre Check-in

One company I worked for had a policy of pre-checkin code reviews. For small changes, another developer came and looked over your shoulder while you talked them through the changes, and for larger features, you emailed the code across and awaited the response.

Advantages:
  • The code is still fresh in the developer's mind as they have not yet moved on.
  • It is clear to the manager that we are not yet "finished" until the code review, the resulting code modifications, and re-testing have taken place.
  • The fact that the code is still checked out means that it is very easy to make changes or add TODOs right in the code during the review, which results in fewer suggestions getting forgotten.
  • The code can be checked in with a comment detailing who reviewed it - share the blame with your reviewer if it contains a show-stopper bug!
Disadvantages:
  • Though the code is "ready" for check-in, it may need to be merged again before the actual check-in, which can sometimes result in significant changes.
  • It can make it harder for the reviewer to view the changes offline.
  • A strict no-check-in-without-code-review policy can result in people making fewer check-ins, and batching too much up into one check-in.
  • Can hold up the developer from starting new work if the reviewer is not available.

Got it working

There is one other potential time for a code review. In Clean Code, Bob Martin talks about the "separation of concerns" developers tend to employ while coding: first we "make it work", and then we "make it clean". We have got the main task complete, but things like error handling, comments, and removing code smells have yet to be done. What if a code review took place before the "make it clean" phase?

Advantages:
  • The developer does not need to feel so defensive about their code, as they have not completely finished.
  • It's not too late to make architectural changes, and maybe even refactor other areas of the codebase to allow the new code to "fit" better.
  • Because there is still time allocated to the task, it can effectively serve as a second design review, with an opportunity to make some changes to the design with the benefit of hindsight.
  • Could be done as a paired programming exercise, experimenting with cleaning up the code structure and writing some unit tests.
Disadvantages:
  • Wasting time looking at issues that were going to be fixed anyway.
  • May not be appropriate for small changes, and with large pieces of development, may depend heavily on whether the coder follows many small iterations of "make it work, make it clean", or a few large iterations.

Final thoughts

There are two factors that greatly affect the choice of when to code review. The first is your use of Source Control. If individual developers and development teams are able to check work into sub-branches, before merging into the main branch, then many of the disadvantages associated with a post-checkin code review go away. In fact, you can get the best of both worlds by code reviewing after checking in to the feature branch, but before merging into the main branch.

Second is the scope of the change. A code review of a minor feature or bugfix can happen late on in the development process as the issues found are not likely to involve major rearchitecture. But if you are developing a whole new major component of a system, failing to review all the code until the developer has "finished" invariably results in a list of suggestions that simply cannot be implemented due to time constraints.

Thursday, 11 December 2008

List Filtering in WPF with M-V-VM

The first Windows Forms application I ever wrote was an anagram generator back in 2002. As part of my ongoing efforts to learn WPF and M-V-VM, I have been porting it to WPF, adding a few new features along the way. Today, I wanted to add a TextBox that would allow you to filter the anagram results to show only those containing a specific sub-string.

The first task is to create a TextBox and a ListBox in XAML and set their binding properties. We want the filter to be applied every time the user types a character, so we use UpdateSourceTrigger=PropertyChanged to specify this.

<TextBox Margin="5" 
     Text="{Binding Path=Filter, UpdateSourceTrigger=PropertyChanged}" 
     Width="150" />
...
<ListBox Margin="5" ItemsSource="{Binding Phrases}" />

Now we need to create the corresponding Filter and Phrases properties in our ViewModel. Then we need to get an ICollectionView based on our ObservableCollection of phrases. Once we have done this, we can attach a delegate to it that will perform our filtering. The final step is to call Refresh on the view whenever the user changes the filter box.

private ICollectionView phrasesView;
private string filter;

public ObservableCollection<string> Phrases { get; private set; }

public AnagramViewModel()
{
   ...
   Phrases = new ObservableCollection<string>();
   // get the default view of the phrases collection so we can filter it
   phrasesView = CollectionViewSource.GetDefaultView(Phrases);
   // accept everything when no filter is set; otherwise only matching phrases
   phrasesView.Filter = o => String.IsNullOrEmpty(Filter) || ((string)o).Contains(Filter);
}

public string Filter 
{
   get
   {
      return filter;
   }
   set
   {
      if (value != filter)
      {
         filter = value;
         phrasesView.Refresh();
         RaisePropertyChanged("Filter");
      }
   }
}

And that's all there is to it. Everything is kept nicely encapsulated within the ViewModel, without the need for any code-behind in the view.

List filtering in WPF
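As an aside, if you wanted the filtering to be case-insensitive (a variation of my own, not something the anagram generator needs), the filter delegate could use IndexOf with a StringComparison instead of Contains:

// hypothetical variation: match the sub-string regardless of case
phrasesView.Filter = o => String.IsNullOrEmpty(Filter)
   || ((string)o).IndexOf(Filter, StringComparison.OrdinalIgnoreCase) >= 0;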

Wednesday, 10 December 2008

Measuring TDD Effectiveness

There is a new video on Channel 9 about the findings of an experimental study of TDD. They compared some projects at Microsoft and IBM with and without TDD, and published a paper with their findings (which can be downloaded here).

The headline results were that the TDD teams had a 40-90% lower pre-release "defect density", at the cost of a 15-35% increase in development time. To put those figures into perspective, imagine a development team working for 100 days and producing a product with 100 defects. What kind of improvement might they experience if they used TDD?

                  Days   Defects
  Non TDD          100       100
  TDD worst case   115        60
  TDD best case    135        10

Here's a few observations I have on this experiment and the result.

Limitations of measuring Time + Defects

From the table above, it seems apparent that a non-TDD team could use the extra time afforded by their quick-and-dirty approach to do some defect fixing, and would therefore be able to comfortably match the defect count of a TDD team given the same amount of time. From a project manager's perspective, if the only measures of success are time and defect count, then TDD will not likely be perceived as a great success.

Limitations of measuring Defect Density

The study actually measures "defect density" (i.e. defects per KLOC) rather than simply "defects". This is obviously to give a normalised defect number, as the comparable projects were not exactly the same size, but there are a couple of implications of this metric.

First, if you write succinct code, your defect density is higher than that of verbose code containing the same number of defects: 50 defects in 10 KLOC is 5 defects/KLOC, but the same 50 defects spread over 20 KLOC is only 2.5. I wonder whether this unfairly penalises TDD, which ought to produce fewer lines of code (due to not writing more than is needed to pass the tests). Second, the number of defects found pre-release often says more about the testing effort put in than the quality of the code. If we freeze the code and spend a month testing, the bug count goes up, and therefore so does the defect density, but the quality of the code has not changed one bit. The TDD team may well have failing tests that reveal more limitations of their system than a non-TDD team would have time to discover.

In short, defect density favours the non-TDD team so the actual benefits may be even more impressive than those shown by the study.

TDD results in better quality code

The lower defect density results of this study vindicate the idea that TDD produces better quality code. I thought it was disappointing that the experiment didn't report any code quality metrics, such as cyclomatic complexity, average class length, class coupling etc. I think these may have revealed an even bigger difference between the two approaches. As it is, the study seems to assume that quality is simply inversely proportional to defect density, which in my view is a mistake. You can have a horrendous code-base that through sheer brute force has had most of its defects fixed, but that does not mean it is high quality.

TDD really needs management buy-in

The fact that TDD takes longer means that you really need management buy-in to introduce it in your workplace. Taking 35% longer does not go down well with most managers unless they really believe it will result in significant quality gains. I should also add that to do TDD effectively, you need a fast computer, which requires management buy-in too. TDD is based on minute-by-minute cycles of writing a failing unit test and then writing the code to make it pass. There are two compile steps in that cycle; if your compile takes several minutes, TDD is not a practical way of working.

TDD is a long-term win

Two of the biggest benefits of TDD relate to what happens next time you revisit your code-base to add more features. First is the comprehensive suite of unit tests that allow you to refactor with confidence. Second is the fact that applications written with TDD tend to be much more loosely coupled, and thus easier to extend and modify. This is another thing the study failed to address at all (perhaps a follow-up could measure this). Even if for the first release of your software the TDD approach was worse than non-TDD, it would still pay for itself many times over when you came round for the second drop.

Tuesday, 9 December 2008

Some thoughts on assemblies versus namespaces

I have noticed a few debates recently, on Stack Overflow and various blogs, on whether components in a large .NET application should each reside in their own assembly, or whether there should be fewer assemblies, with namespaces used instead to separate components (for example, see this article).

Much of the debate revolves around refuting the idea that simply by breaking every component out into its own assembly you have necessarily achieved separation of concerns (e.g. Separate Assemblies != Loose Coupling).

This may be the case, but why not have lots of assemblies? There are two main reasons to favour fewer assemblies:

  • Speed - Visual Studio simply does not cope well with large numbers of projects. If you want to maintain a short cycle of "code a little, compile, test a little", then you want your compile time to be as quick as possible.
  • Simplified Deployment - it's easier to deploy applications consisting of half a dozen files than those with hundreds.

Both are valid concerns (and the speed issue is causing real problems on the project I am working on), but I want to revisit the two main reasons why I think that separate assemblies can still bring benefits to a large project.

Separate assemblies enforce dependency rules. This is the controversial one. Many people have said that NDepend can do this for you, which is true, but not every company has NDepend (or at least not on every developer's PC). It is a shame there is not a simpler light-weight tool available for this task.

While I agree that separate assemblies do not automatically mean loose coupling, I have found they are a great way to help inexperienced developers put code in the right place (or at least make it very difficult to put it in the wrong place). Yes, in an ideal world, there would be training and code reviews (and NDepend) to protect against the design being compromised. And it seems likely that genius developers such as Jeremy Miller are in a position to control these factors in their workplace. But many of us are not.

And in a large project team, separating concerns such as business logic and GUI into separate assemblies does make a very real and noticeable difference in how often coupling is inadvertently introduced between the layers, and thus how testable the application is.

Separate assemblies enable component reuse. What happens when someone creates a new application that needs to reuse a component from an existing one? Well you simply take a dependency on whatever assembly contains that component. But if that assembly contains other components, you not only bring them along for the ride, but all their dependencies too. Now you might say that you could split the assembly up at this point, but this is not a decision to be taken lightly - no one wants to be the one who breaks someone else's build. After all, there are other applications depending on this assembly - you don't want to be checking out their project files and modifying them just because you needed to take a dependency on something.

Conclusion

I guess I'm saying that the namespace versus assembly decision isn't quite as simple as some bloggers make out, and the side you fall on in this debate probably reveals a lot about the size and ability of the team you are working on, the working practices and tools you have at your disposal, and what sort of application you are creating (in particular, is it a single application or a whole suite of inter-related applications, and is there one customer, or many customers each with their own customised flavour of your applications).

It seems to me that much of this problem could be solved if Visual Studio gave us a way of using netmodules. This little-known feature of .NET allows you to compile code into pieces that can be put together into a single assembly. It would be extremely useful if you could compose a solution of many netmodules and decide at build time whether to turn them into one or many assemblies. That way, in one application component A could share a DLL with component B, while in another application, component A could be in a DLL all by itself.
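For example, here is roughly how that workflow looks with the command-line tools today (file names are hypothetical; csc compiles each component to a netmodule, and the assembly linker al.exe then decides the packaging):

rem compile each component to a netmodule
csc /target:module /out:ComponentA.netmodule ComponentA.cs
csc /target:module /out:ComponentB.netmodule ComponentB.cs

rem application 1: components A and B packaged as one (multi-file) assembly
al /target:library /out:Components.dll ComponentA.netmodule ComponentB.netmodule

rem application 2: component A in an assembly by itself
al /target:library /out:ComponentA.dll ComponentA.netmodule

It is just a shame that Visual Studio provides no project type or UI for any of this.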