Tips + Tricks

Peer Code Reviews in a Mercurial World

Mercurial, or Hg, is a brilliant DVCS (Distributed Version Control System). Personally I think it’s much better than Git, but that’s a whole religious war in itself. If you’re not familiar with at least one of these systems, do yourself a favour and take a read of hginit.com.

Massive disclaimer: This worked for our team. Your team is different. Be intelligent. Use what works for you.

The Need for Peer Code Reviews

I’ve previously worked in a number of environments which required peer code reviews before check-ins. For those not familiar, the principle is simple – get somebody else on the team to come and validate your work before you hit the check-in button. Now, before anybody jumps up and says this is too controlling, let me highlight that this was back in the unenlightened days of centralised VCSs like Subversion and TFS.

This technique is just another tool in the toolbox for finding problems early. The earlier you find them, the quicker and cheaper they are to fix.

  • If you’ve completely misinterpreted the task, the reviewer is likely to pick up on this. If they’ve completely misinterpreted the task, it spurs a discussion between the two of you that’s likely to help them down the track.
  • Smaller issues like typos can be found and fixed immediately, rather than being relegated to P4 bug status and left festering for months on end.
  • Even if there aren’t any issues, it’s still useful as a way of sharing knowledge around the team about how various features have been implemented.

On my current project we’d started to encounter all three of these issues – we were reworking code to fit the originally intended task, letting small issues creep into our codebase, and using techniques that not everybody understood. We called these out in our sprint retrospective and identified peer code reviews as one of the techniques we’d use to counter them.

Peer Code Reviews in a DVCS World

One of the most frequently touted benefits of a DVCS is that you can check in anywhere, anytime, irrespective of network access. Whilst you definitely can, and this is pretty cool, it’s less applicable for collocated teams.

Instead, the biggest benefit I see is how frictionless committing enables smaller but more frequent commits. Smaller commits provide a clearer history trail, easier merging, easier reviews, and countless other benefits. That’s a story for a whole other post though. If you don’t already agree, just believe me that smaller commits are a good idea.

Introducing a requirement for peer review before each check-in would counteract these benefits by introducing friction back into the check-in process. This was definitely not an idea we were going to entertain.

The solution? We now perform peer reviews prior to pushing. Developers still experience frictionless commits, and can pull and merge as often as possible (also a good thing), yet we’ve been able to bring in the benefits of peer reviews. This approach has been working well for us for 3 weeks so far (1.5 sprints).

It’s a DVCS. Why Not Forks?

We’ve modelled our source control as a hub and spoke pattern. BitBucket has been nominated as our ‘central’ repository that is the source of truth. Generally, we all push and pull from this one central repository. Because our team is collocated, it’s easy enough to just grab the person next to you to perform the review before you push to the central repository.

Forks do have their place though. One team member worked from home this week to avoid infecting us all. He quickly spun up a private fork on BitBucket and started pushing there instead. At regular intervals he’d ask one of us in the office for a review via Skype. Even just using the BitBucket website, it was trivial to review his pending changesets.

The forking approach could also be applied in the office. On the surface it looks like a nice idea because it means you’re not blocked waiting on a review. In practice though, it just becomes another queue of work which the other developer is unlikely to get to in as timely a manner. “Sure, I’ll take a look just after I finish this.” Two hours later, the code still hasn’t hit the central repository. The original developer has moved on to other tasks. By the time a CI build picks up any issues, ownership and focus have long since moved on. An out-of-band review also misses the ‘let’s sit and have a chat’ mentality and knowledge sharing we were looking for.

What We Tried

To kick things off, we started with hg out. The outgoing command lists all of the changesets that would be pushed if you ran hg push right now. By default it only lists the header detail of each changeset, so we’d then run through hg exp 1234, hg exp 1235, hg exp 1236, etc. to review each one. The downsides to this approach were that we didn’t get coloured diff output, we had to review the changesets one at a time, and it didn’t exclude things like merge changesets.
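
In practice, that first pass at the review loop looked something like this (the revision numbers are just whatever hg out happened to list):

hg out
hg exp 1234
hg exp 1235
hg exp 1236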

Next we tried hg out -p. This lists all of the outgoing changesets, in order, with their patches and full colouring. That was good progress, but we still wanted to filter out merges.

One of the cooler things about Mercurial is revsets. If you’re not familiar with them, it’d pay to take a look at hg help revsets. Revsets allow us to use the hg log command, but pass in a query that describes which changesets we want to see listed: hg log -pr "outgoing() and not merge()".

Finally, we added a cls to the start of the command so that it was easy to just scroll back and see exactly what was in the review. This took the full command to cls && hg log -pr "outgoing() and not merge()". It’d be nice to be able to do hg log -pr "outgoing() and not merge()" | more but the more command drops the ANSI escape codes used for colouring.

What We Do Now

To save everybody from having to remember and type this command, we added a file called review.cmd to the root of our repository. It just contains this one command.
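
For reference, the entire file is just the command from above:

cls && hg log -pr "outgoing() and not merge()"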

Whenever we want a review we just type review and press enter. Too easy!

One Final Tweak

When dealing with multiple repositories, you need to specify which path outgoing() applies to in the revset. We updated the contents of review.cmd to cls && hg log -pr "outgoing(%1) and not merge()". If review.cmd is called with an argument, %1 carries it through to the revset. That way we can run review or review myfork as required.
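
So the final contents of review.cmd look like this:

cls && hg log -pr "outgoing(%1) and not merge()"

When no argument is supplied, %1 expands to nothing and outgoing() compares against the default path, just as before.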

Accessing ASP.NET Page Controls During PreInit

If you’ve read my previous post explaining a common pitfall with view state, I’d hope you’re preparing all your controls in the Init event of the page/control lifecycle.

Even if I’m not reusing them through my application much, I like to factor elements like a drop down list of countries into their own control. This centralizes their logic and allows us to write clear, succinct markup like this:

<tat:CountriesDropDownList ID="AddressCountry" runat="server" />

The code for a control like this is quite simple:

[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    protected override void OnInit(EventArgs e)
    {
        DataSource = Countries;
        DataBind();

        base.OnInit(e);
    }
}

The Problem

Once you start using this encapsulation technique, it won’t be long until you want to pass in a parameter that affects the data you load. Before we do, we need to be aware that Init events fire in reverse order: child controls have their Init event fired before the parent does. As such, the Page.Init event is too late for us to set any properties on the controls.
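
As a hypothetical sketch (I’ve reused the MyCustomDropDown and MyProperty names from the example further down; the data loading is purely illustrative), such a parameterised control might look like this:

[ToolboxData("<{0}:MyCustomDropDown runat=\"server\" />")]
public class MyCustomDropDown : DropDownList
{
    // Must be set before Init fires, i.e. during Page.PreInit.
    public string MyProperty { get; set; }

    protected override void OnInit(EventArgs e)
    {
        // Illustrative only: use the property value to decide what data to load.
        DataSource = LoadItemsFor(MyProperty);
        DataBind();

        base.OnInit(e);
    }

    private static string[] LoadItemsFor(string filter)
    {
        // Hypothetical data access; substitute whatever loads your real data.
        return new[] { "First item for " + filter, "Second item for " + filter };
    }
}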

The natural solution is to try the Page.PreInit event; however, when you do, you’ll often find that your control references are all null. This happens when your page uses a master page, and it comes down to how master pages are implemented. The <asp:ContentPlaceHolder /> controls in a master page use the ITemplate interface to build their contents. This content (the child controls) is not usually prepared until the Init event is called, which means the control references are not yet available. For us, this represents a problem.

The Solution

The fix is remarkably simple; all we need to do is touch the Master property on our Page and it will cause the controls to become available. If we are using nested master pages, we need to touch each master page in the chain.

I often create a file called PageExtensions.cs in my web project and add this code:

public static class PageExtensions
{
    /// <summary>
    /// Can be called during the Page.PreInit stage to make child controls available.
    /// Needed when a master page is applied.
    /// </summary>
    /// <remarks>
    /// This is needed to fire the getter on the top level Master property, which in turn
    /// causes the ITemplates to be instantiated for the content placeholders, which
    /// in turn makes our controls accessible so that we can make the calls below.
    /// </remarks>
    public static void PrepareChildControlsDuringPreInit(this Page page)
    {
        // Walk up the master page chain and tickle the getter on each one
        MasterPage master = page.Master;
        while (master != null) master = master.Master;
    }
}

This adds an extension method to the Page class, which then allows us to write code like the following:

protected override void OnPreInit(EventArgs e)
{
    this.PrepareChildControlsDuringPreInit();

    MyCustomDropDown.MyProperty = "my value";

    base.OnPreInit(e);
}

Without the call to the extension method, we would have received a NullReferenceException when trying to set the property value on the MyCustomDropDown control.

You now have one less excuse for preparing your controls during the Load event. :)

How I Learned to Stop Worrying and Love the View State

(If you don’t get the title reference, Wikipedia can explain. A more direct title could be: Understanding and Respecting the ASP.NET Page Lifecycle.)

This whole article needs a technical review. Parts of it are misleading. I’ll get back to you Barry.

Page lifecycle in ASP.NET is a finicky and rarely understood beast. Unfortunately, it’s something that we all need to get a handle on.

A common mishap that I see is code like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        AddressCountryDropDown.DataSource = CountriesList;
        AddressCountryDropDown.DataBind();
    }
}

The problem here is that we’re clogging our page’s view state. Think of view state as one of a page’s core arteries, then think of data like cholesterol. A little bit is all right, but too much is crippling.

To understand the problem, let’s investigate the lifecycle that’s in play here:

  1. The Page.Init event is fired, but we are not subscribed to it.
  2. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  3. The Page.Load event is fired, in which we set the contents of the drop down list. Because we are doing this after view state has started tracking, every single entry in the drop down is written to both the HTML and the view state.

There’s another problem here as well. By the time the Page.Load event is fired, all of the post back data has already been loaded and processed.

To understand this second problem, let’s look at the lifecycle that’s in play during a post back of the same page:

  1. The user triggers the post back from their browser, and all of the post back data and view state is uploaded to the server.
  2. The Page.Init event is fired, but we are not subscribed to it.
  3. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  4. The view state data is loaded for all controls. For our drop down list example, this means the Items collection is refilled using the view state that was uploaded from the browser.
  5. Post back data is processed. In our example, this means the selected item is set on the drop down list.
  6. The Page.Load event is fired, but nothing happens because we are checking the Page.IsPostBack property. This check is usually described as a “performance improvement”, but in this scenario it is also required: without it, rebinding the list would wipe out the selected item.
  7. The contents of the drop down list are once again written to both the HTML and the view state.

How do we do this better? Removing the IsPostBack check and placing the binding code into the Init event is all we need to do:

protected override void OnInit(EventArgs e)
{
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();

    base.OnInit(e);
}

What does this achieve?

  • We are filling the contents of the drop down during Init, before view state starts tracking; therefore a redundant copy of its contents is not written to the view state.
  • We are filling the contents of the drop down before the postback data is processed, so our item selection is successfully loaded without it being overridden later.
  • We have significantly reduced the size of the page’s view state.

This simple change is something that all ASP.NET developers need to be aware of. Unfortunately, so many developers (myself included) jumped in and wrote their first ASP.NET page using the Page_Load event. I think this is largely because it’s the one and only event handler presented to us when we create a new ASPX page in Visual Studio. While this makes the platform appear to work straight away, it produces appalling results.

Solution: IIS7 WebDAV Module Tweaks

I blogged this morning about how I think WebDAV deserves to see some more love.

I was somewhat surprised that a search for iis7 webdav “invalid parameter” surfaces only 6 results, none of which are relevant. That’s particularly surprising considering “invalid parameter” is the generic message you get for most failures in the Windows WebDAV client.

I was searching for this last night after one of my subfolders stopped being returned over WebDAV, but was still browsable over HTTP. A quick visit to Fiddler revealed that someone had created a file on the server with an ampersand in the name, and the IIS7 WebDAV module wasn’t encoding this character properly.

It turns out that this issue, along with some other edge cases, has already been fixed. If you’re using the IIS7 WebDAV module, make sure to grab the update:

Update for WebDAV Extension for IIS 7.0 (KB955137) (x86)
Update for WebDAV Extension for IIS 7.0 (KB955137) (x64)

Because the WebDAV module is not a shipping part of Windows, you won’t get this update through Windows Update. I hope they’ll be able to start publishing auto-updates for components like this soon.

Shout Out: WebDAV – a protocol that deserves more love.

I’m a massive fan of WebDAV.

At Fuel Advance (the parent company behind projects like Tixi), we operate a small but highly mobile work force. We don’t have an office, and we need 24/7 access to our business systems from any Internet connection. VPN links are not an option for us – they suck over 3G and don’t work through most public networks.

Enter WebDAV. It’s a set of HTTP verbs which give you read/write access to a remote folder and its files, all over standard HTTP. The best part is that Windows has native support for connecting to these shares. Now, we all have drive letter access to our corporate data over the public Internet. It’s slim and fast without all the management overheads that something like SharePoint would have dealt us. It’s also cross platform, allowing us to open the same file shares from our machines running Mac OS X.
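
As a quick illustration (the drive letter and URL here are made up, and this assumes the Windows WebClient service is running), mapping one of these shares to a drive letter is a one-liner from the command prompt:

net use W: https://files.example.com/team

From then on, W: shows up like any other network drive.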

IIS6 had reasonable support for WebDAV, but for various (and good!) reasons, this was dropped from the version that shipped as IIS7. In March this year, the team published a brand new WebDAV module as a separate download. This module is built on the new integrated pipeline in IIS7 and slots much more neatly into the management tool.

Kudos to Keith Moore, Robert McMurray and Marchel Cohn (no blog) for delivering this high quality release!

Tip: Learn through sharing

Finula sent out an email yesterday asking a group of us to supply some tips for being a successful developer. The short versions will get included in this afternoon’s MSDN Flash newsletter, and we’re each blogging our full response.

My personal advice is to recognize the power of learning through sharing. Coatesy touched on the idea at the end of his own tip.

Attending user groups is one thing, but getting up and contributing your own knowledge is what really drives these groups. You’ll be pleasantly surprised at how much you’ll actually learn about a technology if you step back and prepare a presentation for your peers which explains what that technology is, and how it works.

One of my work mates, Steve Godbold, recently delivered a presentation about LINQ. In prepping for the talk he knew he’d need to explain expression trees, and he guessed he’d get some questions about LINQ to SQL vs LINQ to Entities. Pleasantly shocked by how much more there was to know about LINQ, he’s now turned this prep into a series of blog posts too.

Presenting isn’t for everyone though, and this is where blogging steps in. It might seem a bit egotistical at first to think that people want to read what you have to say, but the reality is people genuinely do! Think about the number of times you’ve ended up reading somebody’s blog post to help you solve a problem. Posts don’t need to be technical wizardry to warrant publishing either – it’s often the simple little tricks that people find real value in. One of my more popular posts describes how to do a hover effect in CSS. Blogging also opens up your ideas for others to comment on, which might prompt something you’d never thought of before.

Hopefully this will give you a bit of inspiration to get up and really participate in the vibrant technical community around you.

What now? If you haven’t already, subscribe to MSDN Flash, submit your 10 minute demo for the chance to win big prizes, contribute to your local user group, and start a blog.

What’s your tip for being a great developer?

iiNet, in the tale of "Yet another company which has failed to scale and is now neglecting its customers"

Update 1, 1:20pm: I’m moving to Internode. After consulting with Twitter, @corneliu and @DavidBurela both suggested Internode. @snagy suggested a T1 complaint with the communications ombudsman. iiNet – you will be hearing from me soon to start processing my cancellations. Don’t even dare charge me for the last 6 weeks of anti-service.

Update 2, 3:55pm: After emailing this link to the iiNet MD, I rather quickly got a phone call. Let’s see how this gets handled … I’ll keep you all posted.

Update 3, 6:05pm: Telstra tech booked to investigate the exchange problem. Target resolution date is tomorrow, although apparently there’s also a major disruption at the moment that could delay my resolution.

Update 4, 9:24am next morning: Internet resolved. :) Turns out the MD had passed my request to the “Business Improvement Team”. They said that they’ve read the points below and will take them on board in their next monthly review cycle. I genuinely tried all the proper channels, but when that failed I changed tack. Never underestimate the power of Twitter, a blog and the odd email to an MD.

I’ve been a passionate iiNet customer for 2.5 years now. I’ve paid almost $12,000 across my multiple contracts, and recommended them to countless customers. (I signed another one up just last fortnight.)

Unfortunately, they have quickly become just another character in the tale of organisations that failed to scale. Nobody cares anymore. Nobody listens. This used to be what they were great at.

In the six weeks since I moved into my apartment, I’ve had 1 week of internet.

I acknowledge that ADSL is a flaky technology at the best of times, but the one-month rigmarole to get connected could definitely have happened faster. That was a one-off pain though, so I accepted it.

A week after getting up and running I lost line sync at 10:50am on the 17th May. That’s 10 days ago now.

We have agreed that it’s not my modem – I’ve tried two. We have agreed that it’s not a problem between my wall socket and my MDF – that was inspected during the first internet hiatus. We have agreed that it’s not a problem between my MDF and the exchange – through a combination of Vision Stream and Telstra we also resolved this during the first internet hiatus. Finally, we agreed that the problem is with my DSLAM port at the exchange.

You’d think with all this information they could have fixed this by now.

Their engineers did go out and investigate the problem, and fixed something. They then closed the fault. Did anybody call me to check if it was actually resolved? No. Was it actually resolved? No.

My fault has now been lodged four unique times. I have four different fault reference numbers. I have had to explain my problem eight times now to eight different CSRs (Customer Service Reps).

From an organisational perspective, this could be solved relatively easily:

  1. Introduce a new state of “Complete – Awaiting Verification”. This does not mean “Closed”.
  2. Send customers an SMS when their fault enters the “Complete – Awaiting Verification” state. A message like, “Your fault, 44564172, is now complete but awaiting verification. Please call 13 22 58 within 24 hours if your problem continues.”
  3. Introduce a new state of “Reopened – Resolution Denied”.
  4. If a customer responds to the SMS to say that they are still experiencing the problem, move the fault to “Reopened – Resolution Denied” instead of opening a brand new fault.
  5. If customers do not reply within 24 hours, allow the fault to automatically progress from “Complete – Awaiting Verification” to “Closed”.
  6. Prioritise the fault resolution queue by lodgment date. This means that if a fault is reopened 5 days after it was lodged, it is still considered to be 5 days old. The current processes consider it as a new fault and prioritise it accordingly (ie. not at all).

If my fourth fault is once again closed without actually being fixed, it will be time for me to say goodbye to iiNet. So long, and thanks for all the fish.