Technical

Released: ReliabilityPatterns – a circuit breaker implementation for .NET

How to get it

Library: Install-Package ReliabilityPatterns (if you’re not using NuGet already, start today)

Source code: hg.tath.am/reliability-patterns

What it solves

In our homes, we use circuit breakers to quickly isolate an electrical circuit when there’s a known fault.

Michael T. Nygard introduces this concept as a programming pattern in his book Release It!: Design and Deploy Production-Ready Software.

The essence of the pattern is that when one of your dependencies stops responding, you need to stop calling it for a little while. A file system that has exhausted its operation queue is not going to recover while you keep hammering it with new requests. A remote web service is not going to come back any faster if you keep opening new TCP connections and mindlessly waiting for the 30 second timeout. Worse yet, if your application normally expects that web service to respond in 100ms, suddenly starting to block for 30s is likely to degrade the performance of your own application and trigger a cascading failure.

Electrical circuit breakers ‘trip’ when a high current condition occurs. They then need to be manually ‘reset’ to close the circuit again.

Our programmatic circuit breaker will trip after an operation has more consecutive failures than a predetermined threshold. While the circuit breaker is open, operations will fail immediately without even attempting to be executed. After a reset timeout has elapsed, the circuit breaker will enter a half-open state. In this state, only the next call will be allowed to execute. If it fails, the circuit breaker will go straight back to the open state and the reset timer will be restarted. If it succeeds, the circuit breaker will close again and calls will start flowing normally, on the basis that the service has recovered.
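
Conceptually, that state machine looks something like this (a rough, single-threaded sketch for illustration only, not the library's actual implementation):

using System;

public class NaiveCircuitBreaker
{
    enum State { Closed, Open, HalfOpen }

    readonly int failureThreshold;
    readonly TimeSpan resetTimeout;
    State state = State.Closed;
    int consecutiveFailures;
    DateTime openedAtUtc;

    public NaiveCircuitBreaker(int failureThreshold, TimeSpan resetTimeout)
    {
        this.failureThreshold = failureThreshold;
        this.resetTimeout = resetTimeout;
    }

    public void Execute(Action operation)
    {
        if (state == State.Open)
        {
            // Fail fast until the reset timeout has elapsed
            if (DateTime.UtcNow - openedAtUtc < resetTimeout)
                throw new InvalidOperationException("Circuit breaker is open.");

            // Reset timeout elapsed: let a single trial call through
            state = State.HalfOpen;
        }

        try
        {
            operation();

            // Success: clear the failure count and close the circuit
            consecutiveFailures = 0;
            state = State.Closed;
        }
        catch
        {
            consecutiveFailures++;

            // A failed trial call, or too many consecutive failures, opens the circuit
            if (state == State.HalfOpen || consecutiveFailures > failureThreshold)
            {
                state = State.Open;
                openedAtUtc = DateTime.UtcNow;
            }

            throw;
        }
    }
}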

Writing all this extra management code would be painful. This library manages it for you instead.

How to use it

Taking advantage of the library is as simple as wrapping your outgoing service call with breaker.Execute:

// Note: you'll need to keep this instance around
var breaker = new CircuitBreaker();

var client = new SmtpClient();
var message = new MailMessage();
breaker.Execute(() => client.Send(message));

The only caveat is that you need to manage the lifetime of the circuit breaker(s). You should create one instance for each distinct dependency, then keep this instance around for the life of your application. Do not create different instances for different operations that occur on the same system.
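
A minimal way to do that without a container might look like this (the names here are illustrative only):

// Illustrative only: one long-lived breaker per downstream dependency,
// shared across the whole application.
static class CircuitBreakers
{
    public static readonly CircuitBreaker Smtp = new CircuitBreaker();
    public static readonly CircuitBreaker PaymentGateway = new CircuitBreaker();
}

// Later, wherever you call that dependency:
CircuitBreakers.Smtp.Execute(() => client.Send(message));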

(Managing multiple circuit breakers via a container can be a bit tricky. I’ve published a separate example for how to do it with Autofac.)

It’s generally safe to add this pattern to existing code because it will only throw an exception in a scenario where your existing code would anyway.

You can also take advantage of built-in retry logic:

breaker.ExecuteWithRetries(() => client.Send(message), 10, TimeSpan.FromSeconds(20));

Why is the package named ReliabilityPatterns instead of CircuitBreaker?

Because I hope to add more useful patterns in the future.

This blog post in picture form

[Image: sequence diagram of the circuit breaker]

.NET Rocks! #687: ‘Tatham Oddie Makes HTML 5 and Silverlight Play Nice Together’

I spoke to Carl + Richard on .NET Rocks! last week about using HTML5 and Silverlight together. We also covered a bit of Azure toward the end.

The episode is now live here:

http://www.dotnetrocks.com/default.aspx?showNum=687

JT – Sorry, I referred to you as “the other guy I presented with” and never intro-ed you. 😦

Everyone else – JT is awesome.

Peer Code Reviews in a Mercurial World

Mercurial, or Hg, is a brilliant DVCS (Distributed Version Control System). Personally I think it’s much better than Git, but that’s a whole religious war in itself. If you’re not familiar with at least one of these systems, do yourself a favour and take a read of hginit.com.

Massive disclaimer: This worked for our team. Your team is different. Be intelligent. Use what works for you.

The Need for Peer Code Reviews

I’ve previously worked in a number of environments which required peer code reviews before check-ins. For those not familiar, the principle is simple – get somebody else on the team to come and validate your work before you hit the check-in button. Now, before anybody jumps up and says this is too controlling, let me highlight that this was back in the unenlightened days of centralised VCSs like Subversion and TFS.

This technique is just another tool in the toolbox for finding problems early. The earlier you find them, the quicker and cheaper they are to fix.

  • If you’ve completely misinterpreted the task, the reviewer is likely to pick up on this. If they’ve completely misinterpreted the task, it spurs a discussion between the two of you that’s likely to help them down the track.
  • Smaller issues like typos can be found and fixed immediately, rather than being relegated to P4 bug status and left festering for months on end.
  • Even if there aren’t any issues, it’s still useful as a way of sharing knowledge around the team about how various features have been implemented.

On my current project we’d started to encounter all three of these issues – we were reworking code to fit the originally intended task, letting small issues creep into our codebase and using techniques that not everybody understood. We called these out in our sprint retrospective and settled on peer code reviews as one of the techniques we’d use to counter them.

Peer Code Reviews in a DVCS World

One of the most frequently touted benefits of a DVCS is that you can check in anywhere, anytime, irrespective of network access. Whilst you definitely can, and this is pretty cool, it’s less applicable for collocated teams.

Instead, the biggest benefit I perceive is that frictionless committing enables smaller but more frequent commits. Smaller commits provide a clearer history trail, easier merging, easier reviews, and countless other benefits. That’s a story for a whole other post though. If you don’t already agree, just believe me that smaller commits are a good idea.

Introducing a requirement for peer review before each check-in would counteract these benefits by introducing friction back into the check-in process. This was definitely not an idea we were going to entertain.

The solution? We now perform peer reviews prior to pushing. Developers still experience frictionless commits, and can pull and merge as often as possible (also a good thing), yet we’ve been able to bring in the benefits of peer reviews. This approach has been working well for us for 3 weeks so far (1.5 sprints).

It’s a DVCS. Why Not Forks?

We’ve modelled our source control as a hub and spoke pattern. BitBucket has been nominated as our ‘central’ repository that is the source of truth. Generally, we all push and pull from this one central repository. Because our team is collocated, it’s easy enough to just grab the person next to you to perform the review before you push to the central repository.

Forks do have their place though. One team member worked from home this week to avoid infecting us all. He quickly spun up a private fork on BitBucket and started pushing there instead. At regular intervals he’d ask one of us in the office for a review via Skype. Even just using the BitBucket website, it was trivial to review his pending changesets.

The forking approach could also be applied in the office. On the surface it looks like a nice idea because it means you’re not blocked waiting on a review. In practice though, it just becomes another queue of work that the other developer is unlikely to get to in a timely manner. “Sure, I’ll take a look just after I finish this.” Two hours later, the code still hasn’t hit the central repository. The original developer has moved on to other tasks. By the time a CI build picks up any issues, ownership and focus have long since moved on. An out-of-band review also misses the ‘let’s sit and have a chat’ mentality and knowledge sharing we were looking for.

What We Tried

To kick things off, we started with hg out. The outgoing command lists all of the changesets that would be pushed if you ran hg push right now. By default it only lists the header detail of each changeset, so we’d then run through hg exp 1234, hg exp 1235, hg exp 1236, etc. to review each one. The downsides to this approach were that we didn’t get coloured diff output, we had to review changesets one at a time, and it didn’t exclude things like merge changesets.

Next we tried hg out -p. This lists all of the outgoing changesets, in order, with their patches and full colouring. This is good progress, but we still wanted to filter out merges.

One of the cooler things about Mercurial is revsets. If you’re not familiar with them, it’d pay to take a look at hg help revsets. This allows us to use the hg log command, but pass in a query that describes which changesets we want to see listed: hg log -pr "outgoing() and not merge()".

Finally, we added a cls to the start of the command so that it was easy to just scroll back and see exactly what was in the review. This took the full command to cls && hg log -pr "outgoing() and not merge()". It’d be nice to be able to do hg log -pr "outgoing() and not merge()" | more but the more command drops the ANSI escape codes used for colouring.

What We Do Now

To save everybody from having to remember and type this command, we added a file called review.cmd to the root of our repository. It just contains this one command:
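
cls && hg log -pr "outgoing() and not merge()"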

Whenever we want a review we just type review and press enter. Too easy!

One Final Tweak

When dealing with multiple repositories, you need to specify which path outgoing() applies to in the revset. We updated the contents of review.cmd to cls && hg log -pr "outgoing(%1) and not merge()". If review.cmd is called with an argument, %1 carries it through to the revset. That way we can run review or review myfork as required.

Released: FormsAuthenticationExtensions

What it does

Think about a common user table. You probably have a GUID for each user, but you want to show their full name and maybe their email address in the header of each page. This commonly ends up being an extra DB hit (albeit hopefully cached).

There is a better way though! A little-known gem of the forms authentication infrastructure in .NET is that it lets you embed your own arbitrary data in the ticket. Unfortunately, setting this is quite hard – upwards of 15 lines of rather undiscoverable code.
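
For comparison, the manual version looks roughly like this (a sketch only; real code also has to respect the configured timeout, cookie path and domain, and encode the values properly):

var ticket = new FormsAuthenticationTicket(
    2,                                      // version
    user.UserId.ToString(),                 // name
    DateTime.Now,                           // issue time
    DateTime.Now.AddMinutes(30),            // expiry (should really come from config)
    true,                                   // persistent
    "name=" + user.FullName + "&emailAddress=" + user.EmailAddress); // serialised by hand
var encryptedTicket = FormsAuthentication.Encrypt(ticket);
Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket)
{
    HttpOnly = true,
    Expires = ticket.Expiration
});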

Sounds like a perfect opportunity for another NuGet package.

How to get it

Library: Install-Package FormsAuthenticationExtensions

(if you’re not using NuGet already, start today)

Source code: formsauthext.codeplex.com

How to use it

Using this library, all you need to do is add:

 using FormsAuthenticationExtensions; 

then change:

 FormsAuthentication.SetAuthCookie(user.UserId, true); 

to:

var ticketData = new NameValueCollection {
    { "name", user.FullName },
    { "emailAddress", user.EmailAddress }
};
new FormsAuthentication().SetAuthCookie(user.UserId, true, ticketData);

Those values will now be encoded and persisted into the authentication ticket itself. There’s no need to store them in any form of session state or custom cookies, and no extra DB calls to load them back.

To read the data out at a later time:

var ticketData = ((FormsIdentity) User.Identity).Ticket.GetStructuredUserData();
var name = ticketData["name"];
var emailAddress = ticketData["emailAddress"];

If you want something even simpler, you can also just pass a string in:

 new FormsAuthentication().SetAuthCookie(user.UserId, true, "arbitrary string here"); 

and read it back via:

 var userData = ((FormsIdentity) User.Identity).Ticket.UserData; 

Things to Consider

Any information you store this way will live for as long as the ticket.

That can be quite a while if users are active on your application for long periods of time, or if you give out long-term persistent sessions.

Whenever one of the values stored in the ticket needs to change, all you need to do is call SetAuthCookie again with the new data and the cookie will be updated accordingly. In our user name / email address example, this is actually quite advantageous. If the user were to update their display name or email address, we’d just update the ticket with new values. This updated ticket would then be supplied on future requests. In web farm environments this is about as perfect as it gets – we don’t need to go back to the DB to load this information for each request, yet we don’t need to worry about invalidating the cache across machines. (Any form of shared, invalidatable cache in a web farm is generally bad.)
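
For example (a hypothetical ‘change email address’ action, reusing the same call shown earlier):

// Hypothetical example: after saving the user's new email address,
// reissue the ticket so future requests carry the updated value.
user.EmailAddress = newEmailAddress;
userRepository.Save(user); // userRepository is illustrative only

var ticketData = new NameValueCollection {
    { "name", user.FullName },
    { "emailAddress", user.EmailAddress }
};
new FormsAuthentication().SetAuthCookie(user.UserId, true, ticketData);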

Size always matters.

The information you store this way is embedded in the forms ticket, which is then encrypted and sent back to the user’s browser. On every single request after this, that entire cookie gets sent back up the wire and decrypted. Storing any significant amount of data here is obviously going to be an issue. Keep it to absolutely no more than a few simple values.

Twavatar – coming to a NuGet server near you

Yet another little micro-library designed to do one thing, and do it well:

twavatar.codeplex.com

Install-Package twavatar

I’ve recently been working on a personal project that lets me bookmark physical places.

To avoid having to build any of the authentication infrastructure, I decided to build on top of Twitter’s identity ecosystem. Any user on my system has a one-to-one mapping back to a Twitter account. Twitter get to deal with all the infrastructure around sign ups, forgotten passwords and so forth. I get to focus on features.

The other benefit I get is being able to easily grab an avatar image and display it on the ‘mark’ page like this:

[Image: the avatar displayed on the ‘mark’ page]

(Sidenote: You might also notice why I recently built relativetime and crockford-base32.)

Well, it turns out that grabbing somebody’s Twitter avatar isn’t actually as easy as one might hope. The images are stored on Amazon S3 under a URL structure that requires you to know the user’s Twitter Id (the numeric one) and the original file name of the image they uploaded. To throw another spanner in the works, if the user uploads a new profile image, the URL changes and the old one stops working.

For most Twitter clients this isn’t an issue because the image URL is returned as part of the JSON blob for each status. In our case, it’s a bit annoying though.

Joe Stump set out to solve this problem by launching tweetimag.es. This service lets you use a nice URL like http://img.tweetimag.es/i/tathamoddie_n and let them worry about all the plumbing to make it work. Thanks Joe!

There’s a risk though … This is a free service, with no guarantees about its longevity. As such, I didn’t want to hardcode too many dependencies on it into my website.

This is where we introduce Twavatar. Here’s what my MVC view looks like:

 @Html.TwitterAvatar(Model.OwnerHandle) 

Ain’t that pretty?

We can also ask for a specific size:

 @Html.TwitterAvatar(Model.OwnerHandle, Twavatar.Size.Bigger) 

The big advantage here is that if / when tweetimag.es disappears, I can just push an updated version of Twavatar to NuGet and everybody’s site can keep working. We’ve cleanly isolated the current implementation into its own library.

It’s scenarios like this where NuGet really shines.

Update 1: Paul Jenkins pointed out a reasonably sane API endpoint offered by Twitter in the form of http://api.twitter.com/1/users/profile_image/tathamoddie?size=bigger. There are two problems with this API. First up, it issues a 302 redirect to the image resource rather than returning the data itself. This adds an extra DNS resolution and HTTP round trip to the page load. Second, the documentation for it states that it “must not be used as the image source URL presented to users of your application” (complete with the bold). To meet this requirement you’d need to call it from your application server-side, implement your own caching and so forth.
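
If you did want to go down that path, it would look roughly like this (a sketch only, with no error handling or cache expiry):

using System.Collections.Concurrent;
using System.Net;

static class TwitterAvatarResolver
{
    static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string ResolveAvatarUrl(string handle)
    {
        return Cache.GetOrAdd(handle, h =>
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "http://api.twitter.com/1/users/profile_image/" + h + "?size=bigger");
            request.AllowAutoRedirect = false; // we only want the Location header, not the image
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.Headers["Location"]; // the real image URL
            }
        });
    }
}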

The tweetimag.es service most likely uses this API under the covers, but they do a good job of abstracting all the mess away from us. If the tweetimag.es service was ever to be discontinued, I imagine I’d update Twavatar to use this API directly.

Released: RelativeTime

Ruby has a nifty little function called time_ago_in_words. You pass it an arbitrary number of seconds and it gives you back something friendly like “about 2 weeks ago”.

Today, I implemented a similar routine for .NET.

relativetime.codeplex.com

nuget.org/List/Packages/relativetime

To use it, just include the namespace, then call ToHumanTime() on a TimeSpan object.
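
For example (the namespace and the exact output wording here are assumptions on my part; check the package and its tests for the specifics):

using RelativeTime; // assumed namespace

var text = TimeSpan.FromDays(14).ToHumanTime();
// returns something along the lines of "2 weeks"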

If you want more of an idea of what it generates, take a look at the test suite.