Tips + Tricks

Peer Code Reviews in a Mercurial World

Mercurial, or Hg, is a brilliant DVCS (Distributed Version Control System). Personally I think it’s much better than Git, but that’s a whole religious war in itself. If you’re not familiar with at least one of these systems, do yourself a favour and have a read of hginit.com.

Massive disclaimer: This worked for our team. Your team is different. Be intelligent. Use what works for you.

The Need for Peer Code Reviews

I’ve previously worked in a number of environments which required peer code reviews before check-ins. For those not familiar, the principle is simple – get somebody else on the team to come and validate your work before you hit the check-in button. Now, before anybody jumps up and says this is too controlling, let me highlight that this was back in the unenlightened days of centralised VCSs like Subversion and TFS.

This technique is just another tool in the toolbox for finding problems early. The earlier you find them, the quicker and cheaper they are to fix.

  • If you’ve completely misinterpreted the task, the reviewer is likely to pick up on this. If they’ve completely misinterpreted the task, it spurs a discussion between the two of you that’s likely to help them down the track.
  • Smaller issues like typos can be found and fixed immediately, rather than being relegated to P4 bug status and left festering for months on end.
  • Even if there aren’t any issues, it’s still useful as a way of sharing knowledge around the team about how various features have been implemented.

On my current project we’d started to encounter all three of these issues – we were reworking code to fit the originally intended task, letting small issues creep into our codebase and using techniques that not everybody understood. We called these problems out in our sprint retrospective and identified peer code reviews as one of the techniques we’d use to counter them.

Peer Code Reviews in a DVCS World

One of the most frequently touted benefits of DVCSs is that you can check in anywhere, anytime, irrespective of network access. Whilst you definitely can, and this is pretty cool, it’s less applicable for collocated teams.

Instead, the biggest benefit I perceive is how a frictionless commit process enables smaller but more frequent commits. Smaller commits provide a clearer history trail, easier merging, easier reviews, and countless other benefits. That’s a story for a whole other post though. If you don’t already agree, just believe me that smaller commits are a good idea.

Introducing a requirement for peer review before each check-in would counteract these benefits by introducing friction back into the check-in process. This was definitely not an idea we were going to entertain.

The solution? We now perform peer reviews prior to pushing. Developers still experience frictionless commits, and can pull and merge as often as possible (also a good thing), yet we’ve been able to bring in the benefits of peer reviews. This approach has been working well for us for 3 weeks so far (1.5 sprints).

It’s a DVCS. Why Not Forks?

We’ve modelled our source control as a hub and spoke pattern. BitBucket has been nominated as our ‘central’ repository that is the source of truth. Generally, we all push and pull from this one central repository. Because our team is collocated, it’s easy enough to just grab the person next to you to perform the review before you push to the central repository.

Forks do have their place though. One team member worked from home this week to avoid infecting us all. He quickly spun up a private fork on BitBucket and started pushing to there instead. At regular intervals he’d ask one of us in the office for a review via Skype. Even just using the BitBucket website, it was trivial to review his pending changesets.

The forking approach could also be applied in the office. On the surface it looks like a nice idea because it means you’re not blocked waiting on a review. In practice though, it just becomes another queue of work which the other developer is unlikely to get to in as timely a manner. “Sure, I’ll take a look just after I finish this.” Two hours later, the code still hasn’t hit the central repository. The original developer has moved on to other tasks. By the time a CI build picks up any issues, ownership and focus have long since moved on. An out-of-band review also misses the ‘let’s sit and have a chat’ mentality and knowledge sharing we were looking for.

What We Tried

To kick things off, we started with hg out. The outgoing command lists all of the changesets that would be pushed if you ran hg push right now. By default it only lists the header detail of each changeset, so we’d then run through hg exp 1234, hg exp 1235, hg exp 1236, etc. to review each one. The downsides to this approach were that we didn’t get coloured diff output, we had to review the changesets one at a time, and it didn’t exclude things like merge changesets.

Next we tried hg out -p. This lists all of the outgoing changesets, in order, with their patches and full colouring. This was good progress, but we still wanted to filter out merges.

One of the cooler things about Mercurial is revsets. If you’re not familiar with them, it’d pay to take a look at hg help revsets. Revsets let us use the hg log command but pass in a query that describes exactly which changesets we want listed: hg log -pr "outgoing() and not merge()".

Finally, we added a cls to the start of the command so that it was easy to scroll back and see exactly what was in the review. This took the full command to cls && hg log -pr "outgoing() and not merge()". It’d be nice to be able to do hg log -pr "outgoing() and not merge()" | more, but the more command drops the ANSI escape codes used for colouring.

What We Do Now

To save everybody from having to remember and type this command, we added a file called review.cmd to the root of our repository. It just contains this one command.
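For reference, that means the entire contents of review.cmd is just the single line from earlier:

```bat
cls && hg log -pr "outgoing() and not merge()"
```

Because the file lives in the root of the repository, it’s version controlled and automatically on hand for everyone who clones it.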

Whenever we want a review we just type review and press enter. Too easy!

One Final Tweak

When dealing with multiple repositories, you need to specify which path outgoing() applies to in the revset. We updated the contents of review.cmd to cls && hg log -pr "outgoing(%1) and not merge()". If review.cmd is called with an argument, %1 carries it through to the revset. That way we can run review or review myfork as required.
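Putting it together, the final review.cmd looks like this:

```bat
cls && hg log -pr "outgoing(%1) and not merge()"
```

When called as plain review, %1 expands to nothing and outgoing() falls back to the default push path; review myfork substitutes the named path into the revset.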

Accessing ASP.NET Page Controls During PreInit

If you’ve read my previous post explaining a common pitfall with view state, I’d hope you’re preparing all your controls in the Init event of the page/control lifecycle.

Even if I’m not reusing them much throughout my application, I like to factor elements like a drop down list of countries into their own control. This centralizes their logic and allows us to write clear, succinct markup like this:

<tat:CountriesDropDownList ID="AddressCountry" runat="server" />

The code for a control like this is quite simple:

[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    protected override void OnInit(EventArgs e)
    {
        DataSource = Countries;
        DataBind();

        base.OnInit(e);
    }
}

The Problem

Once you start using this encapsulation technique, it won’t be long until you want to pass in a parameter that affects the data you load. Before we do, we need to be aware that Init events fire in reverse order. That is, the child controls have their Init event fired before it is fired at the parent. As such, the Page.Init event is too late for us to set any properties on the controls.
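As a sketch of where this leads (the Region property and GetCountries helper here are hypothetical, not part of the control above), a parameterised control might look like this – and its property would need to be set before Init runs:

```csharp
[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    // Hypothetical parameter that affects the data we load.
    // It must be set before OnInit runs – hence Page.PreInit.
    public string Region { get; set; }

    protected override void OnInit(EventArgs e)
    {
        // GetCountries is an assumed data-access helper, filtered by Region.
        DataSource = GetCountries(Region);
        DataBind();

        base.OnInit(e);
    }
}
```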

The natural solution is to try the Page.PreInit event; however, when you do, you’ll often find that your control references are all null. This happens when your page uses a master page, and it relates to how master pages work internally. The <asp:ContentPlaceHolder /> controls in a master page use the ITemplate interface to build their contents. This content (the child controls) is not usually prepared until the Init event is called, which means the control references are not yet available. For us, this represents a problem.

The Solution

The fix is remarkably simple; all we need to do is touch the Master property on our Page and it will cause the controls to become available. If we are using nested master pages, we need to touch each master page in the chain.

I often create a file called PageExtensions.cs in my web project and add this code:

public static class PageExtensions
{
    /// <summary>
    /// Can be called during the Page.PreInit stage to make child controls available.
    /// Needed when a master page is applied.
    /// </summary>
    /// <remarks>
    /// This is needed to fire the getter on the top level Master property, which in turn
    /// causes the ITemplates to be instantiated for the content placeholders, which
    /// in turn makes our controls accessible so that we can make the calls below.
    /// </remarks>
    public static void PrepareChildControlsDuringPreInit(this Page page)
    {
        // Walk up the master page chain and tickle the getter on each one
        MasterPage master = page.Master;
        while (master != null) master = master.Master;
    }
}

This adds an extension method to the Page class, which then allows us to write code like the following:

protected override void OnPreInit(EventArgs e)
{
    this.PrepareChildControlsDuringPreInit();

    MyCustomDropDown.MyProperty = "my value";

    base.OnPreInit(e);
}

Without the call to the extension method, we would have received a NullReferenceException when trying to set the property value on the MyCustomDropDown control.

You now have one less excuse for preparing your controls during the Load event. :)

How I Learned to Stop Worrying and Love the View State

(If you don’t get the title reference, Wikipedia can explain. A more direct title could be: Understanding and Respecting the ASP.NET Page Lifecycle.)


Page lifecycle in ASP.NET is a finicky and rarely understood beast. Unfortunately, it’s something that we all need to get a handle on.

A common mishap that I see is code like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        AddressCountryDropDown.DataSource = CountriesList;
        AddressCountryDropDown.DataBind();
    }
}

The problem here is that we’re clogging our page’s view state. Think of view state as one of a page’s core arteries, then think of data like cholesterol. A little bit is all right, but too much is crippling.

To understand the problem, let’s investigate the lifecycle that’s in play here:

  1. The Page.Init event fires, but we are not subscribed to it.
  2. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  3. The Page.Load event fires, in which we set the contents of the drop down list. Because we are doing this after view state has started tracking, every single entry in the drop down is written to both the HTML and the view state.

There’s another problem here as well. By the time the Page.Load event is fired, all of the post back data has already been loaded and processed.

To see the second problem, let’s walk through the lifecycle that’s in play during a post back of this same page:

  1. The user triggers the post back from their browser, and all of the post back data and view state is uploaded to the server.
  2. The Page.Init event fires, but we are not subscribed to it.
  3. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  4. The view state data is loaded for all controls. For our drop down list example, this means the Items collection is refilled using the view state that was uploaded from the browser.
  5. Post back data is processed. In our example, this means the selected item is set on the drop down list.
  6. The Page.Load event fires, however nothing happens because the developer is checking the Page.IsPostBack property. This check is usually described as a “performance improvement”, but in this scenario it’s also required – otherwise we would lose the selected item when we rebound the list.
  7. The contents of the drop down list are once again written to both the HTML and the view state.

How do we do this better? Removing the IsPostBack check and placing the binding code into the Init event is all we need to do:

protected override void OnInit(EventArgs e)
{
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();

    base.OnInit(e);
}

What does this achieve?

  • We are filling the contents of the drop down during Init, before view state starts tracking; therefore a redundant copy of its contents is not written to the view state.
  • We are filling the contents of the drop down before the postback data is processed, so our item selection is successfully loaded without it being overridden later.
  • We have significantly reduced the size of the page’s view state.

This simple change is something that all ASP.NET developers need to be aware of. Unfortunately, so many developers (myself included) jumped in and wrote their first ASP.NET page using the Page_Load event. I think this is largely because it’s the one and only event handler presented to us when we create a new ASPX page in Visual Studio. While this makes the platform appear to work straight away, it produces appalling results.

Solution: IIS7 WebDAV Module Tweaks

I blogged this morning about how I think WebDAV deserves to see some more love.

I found it somewhat surprising that doing a search for iis7 webdav “invalid parameter” only surfaces six results, none of which are relevant. I found this particularly surprising considering “invalid parameter” is the generic message you get for most failures in the Windows WebDAV client.

I was searching for this last night after one of my subfolders stopped being returned over WebDAV, but was still browsable over HTTP. After a quick visit to Fiddler, it turned out that someone had created a file on the server with an ampersand in the name and the IIS7 WebDAV module wasn’t encoding this character properly.

It turns out that this issue, along with some other edge cases, has already been fixed. If you’re using the IIS7 WebDAV module, make sure to grab the update:

Update for WebDAV Extension for IIS 7.0 (KB955137) (x86)
Update for WebDAV Extension for IIS 7.0 (KB955137) (x64)

Because the WebDAV module is not a shipping part of Windows, you won’t get this update through Windows Update. I hope they’ll be able to start publishing auto-updates for components like this soon.

Shout Out: WebDAV – a protocol that deserves more love.

I’m a massive fan of WebDAV.

At Fuel Advance (the parent company behind projects like Tixi), we operate a small but highly mobile work force. We don’t have an office, and we need 24/7 access to our business systems from any Internet connection. VPN links are not an option for us – they suck over 3G and don’t work through most public networks.

Enter WebDAV. It’s a set of HTTP verbs which give you read/write access to a remote folder and its files, all over standard HTTP. The best part is that Windows has native support for connecting to these shares. Now, we all have drive letter access to our corporate data over the public Internet. It’s slim and fast without all the management overheads that something like Sharepoint would have dealt us. It’s also cross platform, allowing us to open the same fileshares from our machines running Mac OS X.
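By way of illustration (the server address and username here are made up), connecting from a Windows command prompt is a one-liner – Explorer’s ‘Map Network Drive’ dialog accepts the same URL:

```bat
net use W: https://files.example.com/shared * /user:alice
```

The * prompts for the password rather than embedding it in the command.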

IIS6 had reasonable support for WebDAV, but for various (and good!) reasons, this was dropped from the version that shipped as IIS7. In March this year, the team published a brand new WebDAV module as a separate download. This module is built using the new integrated pipeline in IIS7 and is much more nicely integrated into the management tool.

Kudos to Keith Moore, Robert McMurray and Marchel Cohn (no blog) for delivering this high quality release!