Technical

Talk Resources – Internet Explorer 9 for Developers

At REMIX10, TechEd AU 2010 and TechEd NZ 2010 I’ve been showing some of what’s new in Internet Explorer 9 for developers.

Here are the slides and code: http://db.tt/JvEUu3o

The recording from TechEd New Zealand (the third and best version!) is available here: http://www.msteched.com/2010/NewZealand/WEB304

The recording from TechEd Australia (version 2 of the talk) is available here: http://www.msteched.com/2010/Australia/WEB204

And finally, here’s a recording from REMIX10 Australia (version 1 of the talk): http://www.microsoft.com/australia/remix/videos/default.aspx

If you’ve attended any of these talks, thank you for your feedback! The session evals at conferences are like crack for speakers. We read every single one, and then we read them again.

– Tats

Scoop! C#’s new #until directive

(Disclaimer: This post is about a C# language feature I’d like to see, not one that actually exists. Once the feature gets added, the title will be accurate and I’ll be the world’s most pro-active blogger. :))

Update 1: Added another approach

I’m trying to evolve a framework here at my current client. There are 30+ solutions and an unknown (to me) number of developers dependent upon this framework. As such, I can’t go making breaking changes without everybody’s CI build dying and me getting escorted from the building.

Even when I do have access to all the code in one solution, I prefer a three-pass approach of:

  1. implementing new functionality and bridging old functionality but marking it obsolete
  2. cleaning up all the build warnings triggered by the [Obsolete] attributes
  3. going back and deleting the obsolete code now that nothing depends on it anymore

Starting with that approach, I have some code like this:

[Obsolete("Use SomeOtherProperty instead.")]
public SomeType SomeProperty { get; set; }

public SomeOtherType SomeOtherProperty { get; set; }

In theory, each team will then drive down their build warnings over time.

In practice, I’ve never seen teams chase these warnings down with much enthusiasm, and I have no hope of driving them all down myself.

What I want to do is offer a fixed grace period something like this:

[Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
public SomeType SomeProperty { get; set; }

public SomeOtherType SomeOtherProperty { get; set; }

This gives each team a known grace period to update their usages, then forces the issue once that period expires. (It’s basically two release cycles.)

The problem is that I want to have these members die automatically once this date arrives. (I may or may not be here, etc.)

Approach #1

I came up with this:

#until 2010-08-01
    [Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
    public SomeType SomeProperty { get; set; }
#enduntil

public SomeOtherType SomeOtherProperty { get; set; }

Basically, that code block is only compiled up until 1st August. As soon as that date ticks around, the member is magically nuked from the build and the lazy downstream users start getting compile errors. The simple syntax of this also makes it easy for me to run some PowerShell + regular expressions over the framework codebase on a regular basis and remove the actual source code.

Unfortunately, C# doesn’t include the #until directive yet and I doubt Anders is going to give me a custom compiler build any time soon. :)

Approach #2

My next idea was to create a numeric version of the date and then use a basic conditional compilation directive:

#if DATESERIAL < 20100801
    [Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
    public SomeType SomeProperty { get; set; }
#endif

public SomeOtherType SomeOtherProperty { get; set; }

I’d then include something in the build script that adds the current date as a symbol (eg. csc.exe /define:DATESERIAL=20100801).

Unfortunately, symbols are just symbols (duh) and thus don’t have values. Also, the pre-processor ‘expressions’ used in #if only support basic boolean operators over defined symbols – there’s no way to compare numeric values.

Approach #3

My next idea was to make the dates less granular and define a series of symbols for the last 3 months or so. For example, a build run today 17th June 2010 would be executed like so – with a symbol for April, May and June:

csc.exe /define:OBSOLETE_201004;OBSOLETE_201005;OBSOLETE_201006

The code could then look like this:

#if OBSOLETE_201005
    [Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
    public SomeType SomeProperty { get; set; }
#endif

public SomeOtherType SomeOtherProperty { get; set; }

As soon as August ticks around, the OBSOLETE_201005 symbol will fall off the list and voila, the member dies.

This approach basically flags the date that we marked something obsolete (in this example, May 2010) and then allows the build process to determine which ones are in and which ones are out.
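The build-script side that works out which symbols to define isn’t shown above. As a rough sketch of that calculation (the class and method names are mine, not part of anything real), it could look something like this:

using System;
using System.Linq;

static class ObsoleteSymbols
{
    // Produces something like "OBSOLETE_201004;OBSOLETE_201005;OBSOLETE_201006"
    // for a build run in June 2010 – the current month plus the two before it.
    public static string ForLastThreeMonths(DateTime today)
    {
        var symbols = Enumerable.Range(0, 3)
            .Select(monthsAgo => today.AddMonths(-monthsAgo))
            .Select(month => "OBSOLETE_" + month.ToString("yyyyMM"))
            .Reverse()
            .ToArray();

        return string.Join(";", symbols);
    }
}

The build script would then splice the result into the /define argument shown above.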

I don’t like the approach for a few reasons:

  • it means that the directive isn’t as clear (it’s the date we indicated the change, not the date it’s going to take effect)
  • the message in the attribute can potentially become wrong (say we decided to include four months’ worth of obsolete changes instead of three; all the messages would now be out by one month)
  • all of the members are now forced on to the same attrition cycle – I can’t spread the ‘easier’ ones on to one cycle and the ‘harder’ ones on to a longer cycle

Approach #4

Let’s go back and evolve the syntax from approach #1:

//#until 2010-08-01
    [Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
    public SomeType SomeProperty { get; set; }
//#enduntil

public SomeOtherType SomeOtherProperty { get; set; }

All I’ve done is add the comment indicator to the start of each of the directives so that they still look like directives but the compiler doesn’t try to process them.

I was already planning to have a PowerShell script that I could use to find the stale code after it had passed its use-by date. Keeping the syntax simple makes it easy to find the blocks via regular expressions, so this would be quite easy to do.

I could run this same script at the start of the build:

  1. Create a workspace for the build
  2. Run the PS script across it to remove any expired code
  3. Run the compiler

The problem with this approach is that it clobbers your code. This works fine on a build server where you’re creating a new workspace for every build. It doesn’t work so well in your local environment, and that’s just yucky. This doesn’t affect the downstream consumers (they only get binaries), but it kind of sucks for the framework team.
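To make the removal pass concrete, here’s a rough sketch of the same idea expressed in C# rather than PowerShell (the //#until markers and the regular expression are assumptions based on the syntax sketched above):

using System;
using System.Globalization;
using System.IO;
using System.Text.RegularExpressions;

static class ExpiredCodeRemover
{
    // Matches a block of the form:
    //   //#until 2010-08-01
    //       ...member declaration...
    //   //#enduntil
    static readonly Regex UntilBlock = new Regex(
        @"//#until\s+(?<date>\d{4}-\d{2}-\d{2})(?<body>.*?)//#enduntil[^\r\n]*",
        RegexOptions.Singleline);

    public static void RemoveExpiredBlocks(string sourceRoot, DateTime today)
    {
        foreach (var file in Directory.GetFiles(sourceRoot, "*.cs", SearchOption.AllDirectories))
        {
            var original = File.ReadAllText(file);

            var cleaned = UntilBlock.Replace(original, match =>
            {
                var expiry = DateTime.ParseExact(
                    match.Groups["date"].Value, "yyyy-MM-dd", CultureInfo.InvariantCulture);

                // Blocks that haven't expired yet are left untouched;
                // expired blocks are removed from the source entirely.
                return expiry > today ? match.Value : string.Empty;
            });

            if (cleaned != original)
                File.WriteAllText(file, cleaned);
        }
    }
}

Run against a freshly created build workspace, this achieves the same effect as the PowerShell version without touching anyone’s working copy.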

Approach #5

(This approach was inspired by Simon’s response.)

Bringing this all back into C#, we can move the onus onto the framework team. First up, let’s add a custom attribute to the member:

[ValidUntil(2010, 08, 01)]
[Obsolete("Use SomeOtherProperty instead. This member will be removed on 1st Aug 2010.")]
public SomeType SomeProperty { get; set; }

public SomeOtherType SomeOtherProperty { get; set; }

Now, the framework build could include a unit test that uses reflection to find all the instances of this attribute and evaluate the dates. If the date is in the past, the unit test fails and the framework build fails. The framework team would then identify the build break and delete the now expired code.
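To make that concrete, here’s a minimal sketch of what the attribute and the guard test could look like. The ValidUntilAttribute itself, the anchoring SomeFrameworkType and the test body are all my own illustration of the idea, not part of any shipped framework:

using System;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical attribute recording the date a member is allowed to live until.
[AttributeUsage(AttributeTargets.All, Inherited = false)]
public sealed class ValidUntilAttribute : Attribute
{
    public DateTime Date { get; private set; }

    public ValidUntilAttribute(int year, int month, int day)
    {
        Date = new DateTime(year, month, day);
    }
}

[TestClass]
public class ObsoleteCodeExpiryTests
{
    [TestMethod]
    public void FrameworkShouldNotContainExpiredMembers()
    {
        // Anchor on any well-known type in the framework assembly.
        var frameworkAssembly = typeof(SomeFrameworkType).Assembly;

        var expiredMembers = frameworkAssembly
            .GetTypes()
            .SelectMany(type => type.GetMembers().Concat(new MemberInfo[] { type }))
            .SelectMany(member => member
                .GetCustomAttributes(typeof(ValidUntilAttribute), false)
                .Cast<ValidUntilAttribute>()
                .Where(attribute => attribute.Date <= DateTime.Today)
                .Select(attribute => member.Name + " (expired " + attribute.Date.ToShortDateString() + ")"))
            .ToArray();

        // As soon as any grace period passes, the framework build breaks
        // and the team knows it's time to delete the member for real.
        Assert.AreEqual(0, expiredMembers.Length,
            "Expired members still present: " + string.Join(", ", expiredMembers));
    }
}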

Approach #6

Feel free to suggest. :)

Talk Resources – Riding the Geolocation Wave

At both the REMIX10 conference in Melbourne, Australia and more recently TechEd New Zealand I presented on geolocation for developers.

This was the abstract:

It’s pretty obvious by now that geolocation is a heavy player in the next wave of applications and APIs. Now is the time to learn how to take advantage of this information and add context to your own applications. In this session we’ll look at geolocation at every layer of the stack – from open protocols to operating system APIs, from the browser to Windows Phone 7. Building a compelling geo-enabled experience takes more than simple coordinates. In this session Tatham will introduce the basics of determining a user’s location and then delve into some of the opportunities and restrictions that are specific to mobile devices and their interfaces.

The talk was filmed at TechEd New Zealand, and is available for download here: http://www.msteched.com/2010/NewZealand/WEB205

(Note: this version has a Windows Phone 7 demo in it too.)

The first version of the talk was also filmed at REMIX10, and is available for download here: http://www.microsoft.com/australia/remix/videos/default.aspx

Here are some links to the code and resources (but you really want to watch the talk first):

(Post last updated 7th Sep 2010 with new links and videos)

Web Forms Model-View-Presenter on Hanselminutes

Over the last few months Damian Edwards and I have been spending quite a bit of time building out a Model-View-Presenter framework for ASP.NET Web Forms.

Until now we’ve been pretty quiet about it all on our blogs because we were busy polishing off v1 and trying to get all the documentation in order. Nevertheless, the word has definitely started to spread as Scott Hanselman interviewed me about the library on this week’s Hanselminutes episode.

Listen to the podcast

Learn more about the library

Custom Code Analysis Rules in VS2010 (and how to make them run in FxCop and VS2008 too)

Back in 2002 Microsoft released FxCop, a static code analysis tool. At the time it was shipped as a separate product and received a bit of buzz. It used .NET reflection and a series of pre-defined rules to detect and report coding issues that wouldn’t normally be picked up by the compiler. Since this initial release, FxCop has undergone an amazing amount of work and become more mainstream with its integration into Visual Studio under the title of ‘Code Analysis’.

Recently I’ve been developing some custom extensions to FxCop – my own code analysis rules. While extremely powerful, this isn’t yet a fully documented or supported scenario. Until it is, this post shows you how to do it all.

Why should we care?

Lately I’ve been working on the ASP.NET Web Forms Model-View-Presenter framework. It’s not quite ready for launch yet, which is why I haven’t been blogging about it, but it is already in use by a number of high traffic websites. As more and more people have started to adopt the project in its relative infancy, the documentation hasn’t kept pace. To try and keep everybody in line I contemplated writing up some ‘best practices’ documentation but then figured that this probably wouldn’t get as much attention as it should and had a high chance of rapidly becoming stale.

Code analysis rules were the perfect solution. They would allow me to define a series of best practices for use of the library in a way that could be applied across multiple projects by the developers themselves. Code analysis rules are also great because they produce a simple task list of things to fix – something that appeals to developers and managers alike.

Over the course of developing these rules I’ve increasingly come to realise that custom rules are something that should be considered in any major project – even if it’s not a framework that will be redistributed. All projects (should) have some level of consistency in their architecture. The details of this are often enforced through good practice and code reviews, but from time to time things slip through. In the same way that we write unit tests to validate our work, I think we should be writing code analysis rules. Think of them like an “architectural validity test” or something.

The Basics

A quick note about versioning: First we’ll create some rules in VS2010, to be executed in VS2010. Later in the post we’ll look at how to compile these same rules in a way that makes them compatible with FxCop 1.36 (and thus VS2008). If you’re only targeting VS2008 then all the same concepts will apply but you’ll be able to skip a few steps.

  1. Start with a new class library project. Make sure you choose to target “.NET Framework 4”, even if the rest of your solution is targeting an earlier framework. Because we’re going to be loading these rules inside VS2010, and it uses .NET 4.0, we need to use it too.

    New Class Library using .NET Framework 4 

  2. Add references to FxCopSdk.dll, Microsoft.Cci.dll and Microsoft.VisualStudio.CodeAnalysis.dll. You’ll usually find these in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop, or an equivalent location. Don’t worry about copying these libraries to a better location or anything – we’ll look at a smarter way of referencing them shortly. (If you’re doing this in VS2008, you’ll need to download and install FxCop 1.36 first and then find these references in that folder. Also, you’ll only need the first two.)
  3. Add a new XML file to your project called Rules.xml. This will be a manifest file that describes each of our individual rules. To get us started, paste in the following content:

    <?xml version="1.0" encoding="utf-8" ?>
    <Rules FriendlyName="My Custom Rules">
      <Rule TypeName="AllTypeNamesShouldEndInFoo" Category="CustomRules.Naming" CheckId="CR1000">
        <Name>All type names should end in 'Foo'</Name>
        <Description>I like all of my types to end in 'Foo' so that I know they're a type.</Description>
        <Url>http://foobar.com</Url>
        <Resolution>The name of type {0} does not end with the suffix 'Foo'. Add the suffix to the type name.</Resolution>
        <MessageLevel Certainty="95">Warning</MessageLevel>
        <FixCategories>Breaking</FixCategories>
        <Email />
        <Owner />
      </Rule>
    </Rules>
    

    This XML file is pretty self-explanatory, but there are a few things I should point out:

    The type name needs to match the name of the class that we define the actual rule in, so make it appropriate (don’t use special characters, use Pascal casing, etc).

    The check id must be unique within the namespace of your rules, but really should be unique across the board. Microsoft uses the letters “CA” followed by a four digit number, and we use a similar scheme for Web Forms MVP.

    The resolution message is stored in the XML here, and not in your own code, but you want it to be as specific as possible so that the developer on the receiving end of it knows exactly what they need to do. Use it like a formatting string – you’ll soon see that it works really nicely.

  4. Go to the properties for the XML file and change the Build Action to EmbeddedResource so that it gets compiled into our DLL.

    Build Action: Embedded Resource

  5. Create a class called BaseRule and paste in the following code:

    using Microsoft.FxCop.Sdk;
    
    public abstract class BaseRule : BaseIntrospectionRule
    {
        protected BaseRule(string name)
            : base(
    
                // The name of the rule (must match exactly to an entry
                // in the manifest XML)
                name,
    
                // The name of the manifest XML file, qualified with the
                // namespace and missing the extension
                typeof(BaseRule).Assembly.GetName().Name + ".Rules",
    
                // The assembly to find the manifest XML in
                typeof(BaseRule).Assembly)
        {
        }
    }
    

    There are three pieces of information we’re passing into the base constructor here. The first is the type name of the rule which the framework will use to find the corresponding entry in the manifest, the second is the namespace qualified resource name of the manifest file itself and the last is the assembly that the manifest is stored in. I like to create this base class because the last two arguments will be the same for all of your rules and it gets ugly repeating them at the top of each rule.

  6. Create a class called AllTypeNamesShouldEndInFoo and paste in the following stub code:

    using Microsoft.FxCop.Sdk;
    using Microsoft.VisualStudio.CodeAnalysis.Extensibility;
    
    public class AllTypeNamesShouldEndInFoo : BaseRule
    {
        public AllTypeNamesShouldEndInFoo()
            : base("AllTypeNamesShouldEndInFoo")
        {
        }
    }
    

That’s all of the boilerplate code in place. Before we start writing the actual rule, let’s take a brief detour to the world of introspection.

Um … ‘introspection’?

The first version of FxCop used basic .NET reflection to weave its magic. This approach is relatively simple, familiar to most developers and was a quick-to-market solution for them. As FxCop grew, though, this approach couldn’t scale. Reflection has two main problems: First and foremost, it only lets you inspect the signatures of types and members – there’s no way to look inside a method and see what other methods it’s calling or to identify bad control flows. Reflection also inherits a major restriction from the underlying framework – once loaded into an app domain, an assembly can’t be unloaded. This restriction wreaks havoc in scenarios where developers want to be able to rapidly rerun the tests; having to restart FxCop every time isn’t the most glamorous of development experiences.

At this point we could fall back to inspecting the original source code, but that comes with a whole bunch of parsing nightmares and ultimately ties us back to a particular language. CIL is where we want to be.

Later versions of FxCop started using an introspection engine. This provided a fundamentally different experience, light-years ahead of what reflection could provide. The introspection engine performs all of its own CIL parsing which means that it can be pointed at any .NET assembly without having to load that assembly into the runtime. Code can be inspected without ever having the chance of being executed. The same assembly can be reloaded as many times as we want. Better yet, we can explore from the assembly level right down to individual opcodes and control structures through a unified API.

Jason Kresowaty has published a nice write up of the introspection engine. Even cooler yet, he has released a tool called Introspector which allows us to visualise the object graph that the introspection engine gives us. I highly recommend that you download it before you get into any serious rules development.

Introspector

Back to our rule…

Now that we know some of the basics of introspection, we’re ready to start coding our own rule. As a reminder, this is what we have so far:

using Microsoft.FxCop.Sdk;
using Microsoft.VisualStudio.CodeAnalysis.Extensibility;

public class AllTypeNamesShouldEndInFoo : BaseRule
{
    public AllTypeNamesShouldEndInFoo()
        : base("AllTypeNamesShouldEndInFoo")
    {
    }
}

The FxCop runtime manages the process of ‘walking’ the assembly for us. It will visit every node that it needs to, but no more, and it’ll do it across multiple threads. All we need to do is tell the runtime which nodes we’re interested in. To do this, we override one of the many Check methods.

As much as possible, use the most specific override that you can as this will give FxCop a better idea of what you’re actually looking at and thus provide better feedback to the end user. For example, if you want to look at method names don’t override Check(TypeNode) and enumerate the methods yourself because any violations you raise will be raised against the overall type. Instead, override Check(Member member).
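As a hypothetical illustration of a member-level check (the naming convention it enforces is made up purely for the example):

// Raised against the individual method, not its declaring type, so the
// source reference FxCop reports is as specific as possible.
public override ProblemCollection Check(Member member)
{
    var method = member as Method;
    if (method == null)
        return Problems;

    if (method.Name.Name.StartsWith("Temp", StringComparison.Ordinal))
    {
        Problems.Add(new Problem(GetResolution(method.Name.Name), method));
    }

    return Problems;
}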

In our scenario, because we want to check type names, we’ll override Check(TypeNode type).

The actual code for this rule is quite simple:

public override ProblemCollection Check(TypeNode type)
{
    if (!type.Name.Name.EndsWith("Foo", StringComparison.Ordinal))
    {
        var resolution = GetResolution(type.Name.Name);
        var problem = new Problem(resolution, type)
                          {
                              Certainty = 100,
                              FixCategory = FixCategories.Breaking,
                              MessageLevel = MessageLevel.Warning
                          };
        Problems.Add(problem);
    }

    return Problems;
}

All we’re doing here is checking the name of the type, and then adding a problem to a collection on the base type. The GetResolution method acts like string.Format and takes an array of parameters then formats them into the resolution text we defined in the XML file.

The second argument that we pass to the Problem constructor is the introspection node that the problem relates to. In this case it’s just the type itself, but if we were doing our own enumeration then we would pass the most specific node possible here so that FxCop could return the most accurate source reference possible to the end user.

Let’s start ‘er up.

At the time of writing, the latest standalone version of FxCop is 1.36 which still targets .NET 2.0 – 3.5. Because we’ve written our rule in .NET 4.0, our only option is to test it within Visual Studio. Luckily, that’s not as hard as it sounds. (If you’re writing your rules in VS2008, jump over this section.)

  1. Create another class library in your solution called TestLibrary. We won’t put any real code in here – we’re just going to use it as the library to execute our rules against.
  2. Add a new Code Analysis Rule Set file to the project:

    New Code Analysis Rule Set

  3. When the file opens in the designer you’ll see a list of all the built-in rules. Because custom rules aren’t really supported yet, there’s no nice way of adding our own rules into this list.

    Default Rules

  4. In Solution Explorer, right click on the .ruleset file, choose Open With and select XML Editor from the options. This will show you the raw contents of the file, which is currently pretty boring. To point Visual Studio in the direction of your custom rules, you then add a series of hint paths.

    This is what my rule set XML looks like:

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="New Rule Set" Description="" ToolsVersion="10.0">
      <RuleHintPaths>
        <Path>C:\Temp\CARules\BlogDemo\BlogDemo.CodeAnalysisRules\bin\Debug</Path>
      </RuleHintPaths>
    </RuleSet>
    

    Hint paths can be absolute, or relative to the location of the rule set file. They should point at the exact folder that your compiled rules sit in. Because Visual Studio fails silently if it can’t load a rule, I prefer to start with an absolute folder path first, then change it to a relative path once everything is working.

  5. Make sure you have compiled your rules project, then go back to Solution Explorer, right click on the .ruleset file, choose Open With and select Code Analysis Rule Set Editor.

    (If you have file locking issues, close Visual Studio, delete all of your bin folders, reopen the solution, build the rules project, then attempt to open the Code Analysis Rule Set Editor again.)

Now, you should see your custom rule loaded into the list:

Custom rules loaded in the rule set editor

Running the rule is now easy. Open the project properties for your test library project, go to the Code Analysis tab, enable Code Analysis and select our new rule set:

Enabling Code Analysis with the custom rule set

Now when we build the project, the output from our new rule will appear in the Error List just like any of the default rules:

Rule violations shown in the Error List

A Bit of Clean-up

Back when we first created the project file for our rules we referenced a couple of DLLs from a system location. This isn’t very maintainable, particularly in a team environment, so let’s clean that up quickly.

  1. Right click on the rules project and select “Unload Project”
  2. Right click on the rules project again and select “Edit .csproj” – this will show you the raw XML definition for the project
  3. Find these three references:

    <Reference Include="FxCopSdk">
      <HintPath>..\..\..\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\FxCopSdk.dll</HintPath>
      <Private>False</Private>
    </Reference>
    <Reference Include="Microsoft.Cci">
      <HintPath>..\..\..\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\Microsoft.Cci.dll</HintPath>
      <Private>False</Private>
    </Reference>
    <Reference Include="Microsoft.VisualStudio.CodeAnalysis, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL">
      <SpecificVersion>False</SpecificVersion>
      <HintPath>..\..\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\Microsoft.VisualStudio.CodeAnalysis.dll</HintPath>
      <Private>True</Private>
    </Reference>
    

    And replace them with this:

    <Reference Include="FxCopSdk">
      <HintPath>$(CodeAnalysisPath)\FxCopSdk.dll</HintPath>
      <Private>False</Private>
    </Reference>
    <Reference Include="Microsoft.Cci">
      <HintPath>$(CodeAnalysisPath)\Microsoft.Cci.dll</HintPath>
      <Private>False</Private>
    </Reference>
    <Reference Include="Microsoft.VisualStudio.CodeAnalysis">
      <HintPath>$(CodeAnalysisPath)\Microsoft.VisualStudio.CodeAnalysis.dll</HintPath>
      <Private>False</Private>
    </Reference>
    

    The build system populates the $(CodeAnalysisPath) variable for us automatically. This way, our references will be valid on every developer’s machine.

  4. Save and close the file, then right click the project and select “Reload Project”

Do the shuffle. The two-step, multi-framework shuffle…

For Web Forms MVP we want to support users on both VS2008 and VS2010. The work we’ve done so far in this post is all exclusively targeted towards VS2010 and not compatible with VS2008 or FxCop 1.36.

To make the compiled rules compatible with both IDEs we’ll need to compile two different versions of it. The VS2008 version will use .NET 3.5 and only two references while the VS2010 version will use .NET 4 and a third reference, Microsoft.VisualStudio.CodeAnalysis.

  1. Right click on the rules project and select “Unload Project”
  2. Right click on the rules project again and select “Edit .csproj” – this will show you the raw XML definition for the project
  3. Find both your Debug and Release property groups and add a DEV10 constant to each:

    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
      ...
      <DefineConstants>TRACE;DEBUG;CODE_ANALYSIS;DEV10</DefineConstants>
      ...
    </PropertyGroup>
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      ...
      <DefineConstants>TRACE;DEV10</DefineConstants>
      ...
    </PropertyGroup>
    
  4. Find the Microsoft.VisualStudio.CodeAnalysis reference and make it conditional based on the framework version being compiled against:

    <Reference Include="Microsoft.VisualStudio.CodeAnalysis" Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">
      <HintPath>$(CodeAnalysisPath)\Microsoft.VisualStudio.CodeAnalysis.dll</HintPath>
      <Private>False</Private>
    </Reference>
    
  5. Save and close the file, then right click the project and select “Reload Project”
  6. Go to AllTypeNamesShouldEndInFoo.cs and wrap the using statement for Microsoft.VisualStudio.CodeAnalysis.Extensibility in an #if construct like so:

    using System;
    using Microsoft.FxCop.Sdk;
    #if DEV10
        using Microsoft.VisualStudio.CodeAnalysis.Extensibility;
    #endif
    
  7. Make sure that your project still compiles with VS2010

At this point our project is still only building for VS2010 but it now contains all of the hook points we need to perform a second build for VS2008. The reference to Microsoft.VisualStudio.CodeAnalysis.dll will only be included if we’re building against .NET 4 and the using statements will only be compiled if the DEV10 compilation constant is present.

Normally, we would build the project using a simple call to MSBuild (which is exactly what VS2010 does under the covers):

MSBuild "BlogDemo.CodeAnalysisRules.csproj" /p:Configuration=Release /maxcpucount

To compile the FxCop 1.36 version, we just pass some extra arguments:

MSBuild "BlogDemo.CodeAnalysisRules.csproj" /p:Configuration=Release /maxcpucount /p:CodeAnalysisPath="..\Dependencies\FxCop136" /p:DefineConstants="" /p:TargetFrameworkVersion="v3.5"

The CodeAnalysisPath parameter is normally supplied by MSBuild, but we are now overriding it with the location of the FxCop 1.36 SDK. We’re also clearing DefineConstants (so the DEV10 symbol disappears) and overriding TargetFrameworkVersion.

Of course, there are nicer ways to script the build process using technologies like PowerShell. The ZIP file below contains a nice little Build-CARules.ps1 which you can use as a template.

The Resources

Download all of the sample code from this blog post and a PowerShell build script here:

Download CodeAnalysisRulesBlogDemo.zip

Video: Building Fast, Public Websites

Following up from my last post about the ASP.NET MVC vs ASP.NET WebForms debate, we’ve had a second TechTalk posted, also from TechEd Australia. In this video, Michael Kordahi, Damian Edwards and I sat down to discuss building fast, public websites. It was a bit of a teaser for our breakout session at the conference, which will be available online as a screencast in the next week or two.

If you’re interested in learning more about building large public websites on ASP.NET, remember that the full video from our recent REMIX session is still available online too.

Building Fast, Public Websites

Watch Online or Download

Video: ASP.NET MVC vs ASP.NET WebForms – Will WebForms be replaced by MVC?

At the recent TechEd Australia conference, Paul Glavich, Damian Edwards and I sat down to discuss what we thought about the current MVC vs WebForms debate. Our TechTalk has now been posted on the TechEd Online site and is available for anyone to watch.

Check it out, and feel free to continue the debate with any of us. :)

ASP.NET MVC vs ASP.NET WebForms – Will WebForms be replaced by MVC? 

Watch Online or Download

Testing the world (and writing better quality code along the way)

Working with the awesome team on my current project, I’ve come to the realisation that I never really did understand automated testing all that well. Sure, I’d throw around words like “unit test”, then write a method with a [TestMethod] attribute on it and voila, I was done; right? Hell no I wasn’t!

Recently, I challenged myself to write an asynchronous TCP listener, complete with proper tests. This felt like a suitable challenge because it combined the inherent complexities of networking with the fun of multi-threading. This article is about what I learnt. I trust you’ll learn something too.

What type of test is that?

The first key thing to understand is exactly what type of test you are writing. Having fallen into the craze of unit testing as NUnit hit the scene, I naturally thought of everything as a unit test and dismissed the inherent differences of integration tests.

  • A unit test should cover the single, smallest chunk of your logic possible. It must never touch external systems like databases or web services. It should test one scenario, and test it well. Trying to cover too many scenarios in a single test introduces fragility into the test suite, such that one breaking change to your logic could cascade through and cause tens or even hundreds of tests to fail in one go.
  • An integration test tests the boundaries and interactions between your logic and its external systems. It depends on those external systems and is responsible for establishing the required test data, running the test, then cleaning up the target environment. This good citizenship on the test’s part allows it to be rerun reliably as many times as you want – a key part of what makes a test valuable.

I always dismissed the differences as being subtle elements of language and left it for the TDD hippies to care about. Unfortunately, they were right – it does matter. Now, let’s spend the rest of the article building a TCP listener that is testable without having to use an integration test. Yes, you heard me right.

The Problem Space

As a quick introduction to networking in .NET, this is how you’d accept a connection on port 25 and write a message back:

var listener = new TcpListener(IPAddress.Any, 25);
listener.Start(); // the listener must be started before it can accept connections

using (var client = listener.AcceptTcpClient())
using (var stream = client.GetStream())
using (var streamWriter = new StreamWriter(stream))
{
   streamWriter.Write("Hello there!");
}

The first two lines attach to the port and start listening, then AcceptTcpClient() blocks until we have a client to talk to.

In our challenge, we want to be able to talk to two clients at once so we need to take it up a notch and accept the connection asynchronously:

static void Main(string[] args)
{
  var listener = new TcpListener(IPAddress.Any, 25);
  listener.Start();
  listener.BeginAcceptTcpClient(new AsyncCallback(AcceptClient), listener);

  Console.ReadLine();
} 

static void AcceptClient(IAsyncResult asyncResult)
{
  var listener = (TcpListener)asyncResult.AsyncState; 

  using (var client = listener.EndAcceptTcpClient(asyncResult))
  using (var stream = client.GetStream())
  using (var streamWriter = new StreamWriter(stream))
  {
    streamWriter.Write("Hello there!");
  }
}

If you’ve looked at asynchronous delegates in .NET before, this should all be familiar to you. We’re using a combination of calls to BeginAcceptTcpClient and EndAcceptTcpClient to capture the client asynchronously. The AcceptClient method is passed to the BeginAcceptTcpClient method as our callback delegate, along with an instance of the listener so that we can use it later. When a connection becomes available, the AcceptClient method will be called. It will extract the listener from the async state, then call EndAcceptTcpClient to get the actual client instance.

Already, we’re starting to introduce some relatively complex logic into the process by which we accept new connections. This complexity is exactly why I want to test the logic – so that I can be sure it still works as I continue to add complexity to it over the life of the application.

Split ‘Em Down The Middle

To start cleaning this up, I really need to get my connection logic out of my core application. Keeping the logic separate from the hosting application will allow us to rehost it in other places, like our test harness.

Some basic separation can be introduced using a simple wrapper class:

class Program
{
  static void Main(string[] args)
  {
    var listener = new TcpListener(IPAddress.Any, 25);
    listener.Start();

    var smtpServer = new SmtpServer(listener);
    smtpServer.Start(); 

    Console.ReadLine();
  }
}

class SmtpServer
{
  readonly TcpListener listener; 

  public SmtpServer(TcpListener listener)
  {
    this.listener = listener;
  } 

  public void Start()
  {
    listener.BeginAcceptTcpClient(new AsyncCallback(AcceptClient), listener);
  } 

  static void AcceptClient(IAsyncResult asyncResult)
  {
    var listener = (TcpListener)asyncResult.AsyncState; 

    using (var client = listener.EndAcceptTcpClient(asyncResult))
    using (var stream = client.GetStream())
    using (var streamWriter = new StreamWriter(stream))
    {
      streamWriter.Write("Hello there!");
    }
  }
}

Now that we’ve separated the logic, it’s time to start writing a test!

Faking It ‘Till You Make It

The scenario we need to test is that our logic accepts a connection, and does so asynchronously. For this to happen, we need to make a client connection available that our logic can connect to.

Initially this sounds a bit complex. Maybe we could start an instance of the listener on a known port, then have our test connect to that port? The problem with this approach is that we’ve ended up at an integration test and the test is already feeling rather shaky. What happens if that port is in use? How do we know that we’re actually connecting to our app? How do we know that it accepted the connection asynchronously? We don’t.

By faking the scenario we can pretend to have a client available and then watch how our logic reacts. This is called ‘mocking’ and is typically achieved using a ‘mocking framework’. For this article, I’ll be using the wonderful Rhino Mocks framework.

This is how we could mock a data provider that normally calls out to SQL:

var testProducts = new List<Product>
{
  new Product { Title = "Test Product 123" },
  new Product { Title = "Test Product 456" },
  new Product { Title = "Test Product 789" }
}; 

var mockDataProvider = MockRepository.GenerateMock<IDataProvider>();
mockDataProvider.Expect(a => a.LoadAllProducts()).Return(testProducts); 

var products = mockDataProvider.LoadAllProducts();
Assert.AreEqual(3, products.Count());

mockDataProvider.VerifyAllExpectations();

This code doesn’t give any actual test value, but it does demonstrate how a mock works. Using the interface of IDataProvider, we ask the mock repository to produce a concrete class on the fly. Defining an expectation tells the mock how it should react when we call LoadAllProducts. Finally, on the last line of the code we verify that all of our expectations held true.

In this case, we are dynamically creating a class that implements IDataProvider and returns a list of three products when LoadAllProducts is called. On the last line of the code we are verifying that LoadAllProducts has been called as we expected it to be.

Artificial Evolution

Now, this approach is all well and good when you have an interface to work with, but how do we apply that to System.Net.Sockets.TcpListener? We need to modify the structure of the instance such that it implements a known interface; this is exactly what the adapter pattern is for.

First up, we need to define our own interface. Because we need to mock both the listener and the client, we’ll actually define two:

public interface ITcpListener
{
  IAsyncResult BeginAcceptTcpClient(AsyncCallback callback, object state);
  ITcpClient EndAcceptTcpClient(IAsyncResult asyncResult);
} 

public interface ITcpClient
{
  NetworkStream GetStream();
  IPEndPoint RemoteIPEndPoint { get; }
}

To apply these interfaces to the existing .NET Framework implementations, we write some simple adapter classes like so:

public class TcpListenerAdapter : ITcpListener
{
  private TcpListener Target { get; set; } 

  public TcpListenerAdapter(TcpListener target)
  {
    Target = target;
  } 

  public IAsyncResult BeginAcceptTcpClient(AsyncCallback callback, object state)
  {
    return Target.BeginAcceptTcpClient(callback, state);
  } 

  public ITcpClient EndAcceptTcpClient(IAsyncResult asyncResult)
  {
    return new TcpClientAdapter(Target.EndAcceptTcpClient(asyncResult));
  }
}

public class TcpClientAdapter : ITcpClient
{
  private TcpClient Target { get; set; } 

  public TcpClientAdapter(TcpClient target)
  {
    Target = target;
  } 

  public NetworkStream GetStream()
  {
    return Target.GetStream();
  } 

  public IPEndPoint RemoteIPEndPoint
  {
    get { return Target.Client.RemoteEndPoint as IPEndPoint; }
  }
}

These classes are solely responsible for implementing our custom interface and passing the actual work down to an original target instance which we pass in through the constructor. You might notice that the listener adapter’s EndAcceptTcpClient method uses an adapter itself, wrapping the returned TcpClient in a TcpClientAdapter before handing it back.

With some simple tweaks to our SmtpServer class, and how we call it, our application will continue to run as before. This is how I’m now calling the SmtpServer:

static void Main(string[] args)
{
  var listener = new TcpListener(IPAddress.Any, 25);
  listener.Start();
  var listenerAdapter = new TcpListenerAdapter(listener);

  var smtpServer = new SmtpServer(listenerAdapter);
  smtpServer.Start();

  Console.ReadLine();
}

The key point to note is that once we have created the real listener, we wrap it in an adapter before passing it down to the SmtpServer constructor. This satisfies the SmtpServer, which now expects an ITcpListener instead of the concrete TcpListener it took before.

Talking The Talk

At this point in the process we have:

  1. Separated the connection acceptance logic into its own class, outside of the hosting application
  2. Defined an interface for how a TCP listener and client should look, without requiring concrete implementations of either
  3. Learnt how to generate mock instances from an interface

The only part left is the actual test:

[TestMethod]
public void ShouldAcceptConnectionAsynchronously()
{
  var client = MockRepository.GenerateMock<ITcpClient>();
  var listener = MockRepository.GenerateMock<ITcpListener>();
  var asyncResult = MockRepository.GenerateMock<IAsyncResult>();

  listener.Expect(a => a.BeginAcceptTcpClient(null, null)).IgnoreArguments().Return(asyncResult);
  listener.Expect(a => a.EndAcceptTcpClient(asyncResult)).Return(client); 

  var smtpServer = new SmtpServer(listener);
  smtpServer.Start();

  var arguments = listener.GetArgumentsForCallsMadeOn(a => a.BeginAcceptTcpClient(null, null));
  var callback = arguments[0][0] as AsyncCallback;
  var asyncState = arguments[0][1];
  asyncResult.Expect(a => a.AsyncState).Return(asyncState);

  callback(asyncResult);

  client.VerifyAllExpectations();
  listener.VerifyAllExpectations();
  asyncResult.VerifyAllExpectations();
}

Ok, let’s break that one down a step at a time, yeah?

The first three lines just generate mocked instances for each of the objects we’re going to need along the way:

var client = MockRepository.GenerateMock<ITcpClient>();
var listener = MockRepository.GenerateMock<ITcpListener>();
var asyncResult = MockRepository.GenerateMock<IAsyncResult>();

Next up, we define how we expect the listener to work. When the BeginAcceptTcpClient method is called, we want to return the mocked async result. Similarly, when EndAcceptTcpClient is called, we want to return the mocked client instance.

listener.Expect(a => a.BeginAcceptTcpClient(null, null)).IgnoreArguments().Return(asyncResult);
listener.Expect(a => a.EndAcceptTcpClient(asyncResult)).Return(client);

Now that we’ve done our setup work, we run our usual logic just like we do in the hosting application:

var smtpServer = new SmtpServer(listener);
smtpServer.Start();

At this point, our logic will have spun up and called the BeginAcceptTcpClient method. Because it is asynchronous, it will be patiently waiting until a client becomes available before it does any more work. To kick it along we need to fire the async callback delegate that is associated with the async action. Being internal to the implementation, we can’t (and shouldn’t!) just grab a reference to it ourselves, but we can ask the mocking framework:

var methodCalls = listener.GetArgumentsForCallsMadeOn(a => a.BeginAcceptTcpClient(null, null));
var firstMethodCallArguments = methodCalls.Single();
var callback = firstMethodCallArguments[0] as AsyncCallback;
var asyncState = firstMethodCallArguments[1];
asyncResult.Expect(a => a.AsyncState).Return(asyncState);

The RhinoMocks framework has kept a recording of all the arguments that have been passed in along the way, and we’re just querying this list to find the first (and only) method call. While we have the chance, we also push our async state from the second argument into the async result instance.

Armed with a reference to the callback, we can fire away and simulate a client becoming available:

callback(asyncResult);

Finally, we ask RhinoMocks to verify that everything happened under the covers just like we expected. For example, if we had defined any expectations that never ended up getting used, RhinoMocks would throw an exception for us during the verification.

client.VerifyAllExpectations();
listener.VerifyAllExpectations();
asyncResult.VerifyAllExpectations();

Are We There Yet?

We are!

Taking a quick score check, we have:

  1. Separated the connection acceptance logic into its own class, outside of the hosting application
  2. Defined an interface for how a TCP listener and client should look, without requiring concrete implementations of either
  3. Used mocking to write a unit test to validate that our logic correctly accepts a new client asynchronously

Having done so, you should now:

  1. Understand the difference between a unit test and an integration test
  2. Understand the importance of separation of concerns and interfaces when it comes to writing testable (and maintainable!) code
  3. Understand how the adapter pattern works, and why it is useful
  4. Understand the role of a mocking framework when writing tests

Was this article useful? Did you learn something? Tell me about it!

Solution: IIS7 WebDAV Module Tweaks

I blogged this morning about how I think WebDAV deserves to see some more love.

I found it somewhat surprising that doing a search for iis7 webdav “invalid parameter” only surfaces 6 results, of which none are relevant. I found this particularly surprising considering “invalid parameter” is the generic message you get for most failures in the Windows WebDAV client.

I was searching for this last night after one of my subfolders stopped being returned over WebDAV, but was still browsable over HTTP. After a quick visit to Fiddler, it turned out that someone had created a file on the server with an ampersand in the name and the IIS7 WebDAV module wasn’t encoding this character properly.

It turns out that this issue, along with some other edge cases, has already been fixed. If you’re using the IIS7 WebDAV module, make sure to grab the update:

Update for WebDAV Extension for IIS 7.0 (KB955137) (x86)
Update for WebDAV Extension for IIS 7.0 (KB955137) (x64)

Because the WebDAV module is not a shipping part of Windows, you won’t get this update through Windows Update. I hope they’ll be able to start publishing auto-updates for components like this soon.

Shout Out: WebDAV – a protocol that deserves more love.

I’m a massive fan of WebDAV.

At Fuel Advance (the parent company behind projects like Tixi), we operate a small but highly mobile work force. We don’t have an office, and we need 24/7 access to our business systems from any Internet connection. VPN links are not an option for us – they suck over 3G and don’t work through most public networks.

Enter WebDAV. It’s a set of HTTP verbs which give you read/write access to a remote folder and its files, all over standard HTTP. The best part is that Windows has native support for connecting to these shares. Now, we all have drive letter access to our corporate data over the public Internet. It’s slim and fast without all the management overheads that something like Sharepoint would have dealt us. It’s also cross platform, allowing us to open the same fileshares from our machines running Mac OS X.

IIS6 had reasonable support for WebDAV, but for various (and good!) reasons, this was dropped from the version that shipped as IIS7. In March this year, the team published a brand new WebDAV module as a separate download. This module is built using the new integrated pipeline in IIS7 and is much more nicely integrated into the management tool.

Kudos to Keith Moore, Robert McMurray and Marchel Cohn (no blog) for delivering this high quality release!