nik codes

Squeezing the Most Into the New W3C Beacon API

The Setup

Many websites build a signaling mechanism that, without user action, sends analytics or diagnostics information back to a server for further analysis. I’ve created one at least half a dozen times to capture all sorts of information: JavaScript errors, browser and device capabilities, client-side click paths, the list goes on and on. In fact, the list is actually getting longer with the W3C’s Web Performance Working Group cranking out lots of great Real User Metrics (RUM) specifications for in-browser performance diagnostics like Navigation Timing, Resource Timing, User Timing and the forthcoming Navigation Error Logging and Frame Timing.

The signaling code, often called a beacon, has traditionally been implemented in many different ways:

  • A JavaScript-based timer which regularly and repeatedly fires AJAX requests to submit the latest data gathered.
  • Writing special cookies that attach to the next “natural” request the browser makes, paired with special server-side processing code.
  • Synchronous requests made during unload. (Browsers usually ignore asynchronous requests made during unload, so they can’t be trusted.)
  • Tracking “pixels”: small non-AJAX requests with information encoded into URL parameters.
  • Third-party solutions, like Google Analytics, which internally leverage one of the options listed above.

Unfortunately, each of these techniques has downsides. Either the amount of data that can be transferred is severely limited, or the act of sending it has negative effects on performance. We need a better way, and that’s where the W3C’s new Beacon API comes into play.

The Solution

With the new Beacon API, data can be posted to the server during the browser’s unload event, without blocking the browser, in a performant manner. The code is rather simple and works as expected:

window.addEventListener('unload', function () {
    var rum = {
        navigation: performance.timing,
        resources: performance.getEntriesByType('resource'),
        marks: performance.getEntriesByType('mark'),
        measures: performance.getEntriesByType('measure')
    };
    rum = reduce(rum); // shrink the payload (techniques covered below)
    navigator.sendBeacon('/rum/submit', JSON.stringify(rum));
}, false);

The Catch

Unfortunately, as of this writing, the Beacon API is not as widely supported as you’d hope. Chrome 39+, Firefox 31+ and Opera 26+ currently support the API. It isn’t supported in Safari and the Internet Explorer team has it listed as “Under Consideration”.

The other catch, and this is the biggie to me, stems from this note about navigator.sendBeacon() in the spec:

If the User Agent limits the amount of data that can be queued to be sent using this API and the size of data causes that limit to be exceeded, this method returns false.

The specification allows the browser to refuse to send the beacon data (thus returning false) if it deems you’re trying to send too much. At this point, Chrome is the only browser that limits the amount of data that can be sent. Its limit is set at right around 16KB.
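Because sendBeacon() can refuse the payload, it’s worth checking its return value and falling back to another delivery mechanism. Here’s a minimal sketch of that idea; the submitBeacon helper is hypothetical, and the send parameter exists only so the logic can be exercised outside a browser:

```javascript
// Hypothetical helper: try sendBeacon first; if the browser refuses the
// payload (returns false) or the API is unavailable, fall back to a
// synchronous XHR so the data still gets out before unload completes.
function submitBeacon(url, payload, send) {
  // `send` defaults to the real API in browsers; it is injectable here
  // purely for illustration/testing
  if (send === undefined && typeof navigator !== 'undefined' && navigator.sendBeacon) {
    send = navigator.sendBeacon.bind(navigator);
  }
  if (send && send(url, payload)) {
    return 'beacon'; // queued successfully, browser handles delivery
  }
  // Fallback: a synchronous XHR briefly blocks unload, but guarantees
  // the request is issued even when the beacon was rejected.
  var xhr = new XMLHttpRequest();
  xhr.open('POST', url, false); // false = synchronous
  xhr.send(payload);
  return 'xhr';
}
```

In practice you might also try trimming the payload and retrying sendBeacon() before resorting to the blocking fallback.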

A Workaround?

To be fair, 16KB sure seems like a lot of data, and it is, but I’ve found myself in the situation where I was unable to beacon back diagnostics information on heavy pages because they just had too much Resource Timing data to send. Being unable to send diagnostics data on the worst performing pages really misses the point of the working group’s charter. Further, this problem will only get worse as more diagnostics information becomes available via all the RUM specifications I mentioned at the top of this post. To that end, I’ve implemented several ways to reduce a beacon’s payload size without actually losing any data:

1. Use DOMString over FormData

The Beacon API allows you to submit four data types: ArrayBufferView, Blob, DOMString or FormData. Given that we want to submit RUM data, FormData and DOMString are the only two we can use. (ArrayBufferView and Blob are for working with arrays of typed numeric data and raw file-like objects.)

FormData seems like a natural way to go, particularly because model binding engines in frameworks like ASP.NET MVC and Rails work directly with them. However, you’ll save a few bytes by using a DOMString and accessing the request body directly on the server.

For simplicity in both encoding and parsing, I encode the data via JSON. (Though you could try a more exotic format for larger gains.) On the server, with JSON.NET you can parse the request body directly like this:

var serializer = new JsonSerializer();
Rum rum;
using (var sr = new StreamReader(Request.InputStream))
using (var tr = new JsonTextReader(sr))
{
     rum = serializer.Deserialize<Rum>(tr);
}

2. Make Fewer HTTP Requests

My beacon payload size issues arose on pages that had lots of resources (images, scripts, stylesheets, etc.) to download, which yielded very large arrays of Resource Timing objects. Reducing the number of HTTP requests that the page was making (by combining scripts and stylesheets and using image sprites) not only helps with page performance, but also reduces the amount of data provided by the Resource Timing API, which in turn reduces beacon payload sizes.

3. Use Positional Values

As mentioned above, the Resource Timing API yields an array of objects. The User Timing API does the same thing. The problem with JSON encoding arrays of objects is that the keys for each key/value pair are repeated over and over again for each array item. This repetition adds up quite quickly.

Instead, I use a simpler array of arrays structure in which individual values are referenced by position. Here’s the JavaScript to convert from a User Timing API array of objects to an array of arrays:

// convert to [name, duration, startTime]
rum.marks = rum.marks.map(function (e) { 
     return [e.name, e.duration, e.startTime]; 
});
 
// convert to [name, duration, startTime] 
rum.measures = rum.measures.map(function (e) { 
     return [e.name, e.duration, e.startTime]; 
});
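To see what the repeated keys actually cost, compare the encoded sizes of the two shapes. The sample mark entries below are fabricated for illustration:

```javascript
// Two made-up User Timing entries encoded as objects vs. as
// positional arrays, to compare JSON payload sizes.
var marks = [
  { name: 'head-flushed', duration: 0, startTime: 120.5 },
  { name: 'hero-visible', duration: 0, startTime: 340.2 }
];

var asObjects = JSON.stringify(marks);
var asArrays = JSON.stringify(marks.map(function (e) {
  return [e.name, e.duration, e.startTime];
}));

// asArrays is considerably shorter than asObjects, and the gap grows
// with every additional entry since the keys are never repeated.
```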

On the server I use a custom JSON.NET converter to parse the positional values:

public class UserTimingConverter : JsonConverter
{
     public override object ReadJson(JsonReader reader, 
                                     Type objectType, 
                                     object existingValue, 
                                     JsonSerializer serializer)
     {
         var array = JArray.Load(reader);
         return new UserTiming
         {
             Name = array[0].ToString(),
             Duration = array[1].ToObject<double>(),
             StartTime = array[2].ToObject<double>()
         };
     }
     // ...
}

4. Derive Data on Client

Depending on the requirements, it may be feasible to send fewer values by making some simple derivations on the client. Why send both domainLookupEnd and domainLookupStart if all that’s required is subtracting one from the other to get the domainLookupTime? The more that’s derived on the client, the less raw data there is to send across the wire.
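As a sketch, a derivation step like this could run on the client just before beaconing. The derived field names here are of my own choosing:

```javascript
// Collapse pairs of raw Navigation Timing timestamps into single
// durations so only the derived values cross the wire.
function deriveTimings(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    connect: t.connectEnd - t.connectStart,
    ttfb: t.responseStart - t.requestStart, // time to first byte
    response: t.responseEnd - t.responseStart
  };
}

// In the browser: deriveTimings(performance.timing)
```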

5. Shorten URLs

Resource Timing data, in particular, contains a lot of often redundant URL strings. There are many strategies to reduce URL redundancy:

  1. If all the data is being served from the same host, strip the scheme and domain from the URL entirely. (Basically, make it a relative URL.)
    E.g., http://domain.com/content/images/logo.png becomes /content/images/logo.png
  2. Shorten common segments into “macros” of a few characters that can be re-expanded later.
    E.g., /content/images/logo.png becomes /{ci}/logo.png
  3. The folks at Akamai, who gather tons of Resource Timing data, leverage a tree-like structure to reduce redundancy even more. E.g.:
    {
         "http://": {
             "domain.com/": {
                 "content/style.css": [ /* array of values */ ],
                 "content/images/": {
                     "logo.png": [ /* array of values */ ],
                     "sprite.png": [ /* array of values */ ]
                 }
             }
         }
    }
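Strategies 1 and 2 can be sketched in a few lines of JavaScript. The macro table here is hypothetical; you’d derive one from your own asset paths and re-expand the macros on the server:

```javascript
// Hypothetical macro table mapping common path segments to short
// placeholders that the server knows how to re-expand.
var MACROS = {
  '/content/images/': '/{ci}/',
  '/content/scripts/': '/{cs}/'
};

function shortenUrl(url, origin) {
  // Strategy 1: same-origin URLs become relative
  if (url.indexOf(origin) === 0) {
    url = url.slice(origin.length);
  }
  // Strategy 2: replace known segments with macros (longest first, so
  // more specific segments win)
  Object.keys(MACROS)
    .sort(function (a, b) { return b.length - a.length; })
    .forEach(function (segment) {
      url = url.split(segment).join(MACROS[segment]);
    });
  return url;
}
```

For example, shortenUrl('http://domain.com/content/images/logo.png', 'http://domain.com') yields '/{ci}/logo.png'.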

6. Leverage HTTP Headers

Not all data needs to be included in the beacon payload itself. The server can still gather some diagnostics information from the standard HTTP headers on the beacon’s request. These include things like:

  • Referer
  • User-Agent for browser and device information
  • Application-specific user data from cookies
  • Environment-specific data via X-Forwarded-For and other similar headers
  • IP address, and thus approximate geographical location (not technically a header)
  • Date and time (also not a header, but easily calculated on the server)

With this collection of techniques, you should be able to squeeze a little more out of the Beacon API. If you’ve found another way to shave off a few bytes, let me know in the comments.

Developer Arcade

During the holiday season I find myself reflecting over the previous year and celebrating various traditions with family and friends alike.

One of my fondest Christmas memories was back in ‘96 when my brother and I got a Nintendo 64. I remember the two of us playing Wave Race 64 together in his room all night long.

Because of this memory I tend to think about video games, which I don’t typically play, in the lead up to Christmas. I often have a desire to unwind playing them, just a little bit, in an attempt to relive those simpler days.


So this year, I began looking for games that I could play without any guilt. I looked for games that would make me a better software developer, and I found a ton! So if you find yourself bored for a few hours over the holiday break, why not try one of these games out and improve your skills?

Text-Based Games

  • Typing.io: Improve your touch typing skills with this simple little game made specifically for programmers who have to type a lot more curly braces and angle brackets than the standard Mavis Beacon player.
  • VIM Adventures: Keep your fingers moving in this Zelda-like overhead scroller. You’ll need to learn VIM commands, motions and operators to control your character to victory.
  • Terminus: Learn to navigate your way around the Unix shell with common commands in this Zork-like text adventure.

Design Eye

  • What The Color?: How well do you understand the #hex, rgb() and hsl() color spaces used in CSS? This game from Lea Verou times how long it takes you to guess a particular color. Hint: HSL is much easier to navigate once you get used to it!
  • What px?: Very similar to What The Color, What px? challenges you with CSS length units by guessing the width of various elements.
  • The Bezier Game: Learn how to use the Pen tool, common in SVG/vector editing software, in this advanced connect-the-dots style game.

Web Development

  • HTML5 Element Quiz: Quite possibly my favorite of the games, this quiz challenges you to name as many HTML5 elements as you can in five minutes. As a lifelong web dev, I was surprised how many I wasn’t able to remember – but even better, I learned about many new ones I’ve never used before.
  • CSS Diner: I’m also a big fan of this game. CSS Diner teaches you CSS3 query selector syntax by graphically having you select items from a bento box that have been arranged on a dining room table.
  • XSS: Learn to think like a hacker with this security game from Google. In each level you are presented with an increasingly “secure” web page, as well as tips to figure out how to exploit the page with some cross-site scripting. This game is really eye opening!

Languages

  • CodeCombat: A very complete video game, with music, sound effects, animations and graphical styling, CodeCombat will teach you Python, JavaScript, CoffeeScript, Clojure or Lua by getting you to write simple little programs that lead your character through each level. Think of it as Logo with modern-day bells and whistles. This one seems perfect for the kids in your life.
  • Regex Golf: Presented with both a whitelist and a blacklist of strings, can you write an expression that matches all of the whitelist, and none of the blacklist? Each level teaches you a little bit more about Regex operators.
  • Learn Some SQL: Created by a few co-workers of mine at Red Gate, Learn Some SQL shows you a table of data and you have to write the SQL statement that would select the same dataset.

Miscellaneous

  • Learn Git Branching: Very similar in style to Learn Some SQL, this game shows you a diagram of a Git repository, and you have to branch, merge, rebase and commit your repository into the matching shape. I’ve shown this game to people learning Git and it really helped them.

I hope you enjoy one or two of these games as well as your holiday time. Post your high scores in the comments below – along with any games that are missing from the list.

Happy Holidays!

PerfMatters.Flush now CourtesyFlush

It’s been nearly a month since I released PerfMatters.Flush on NuGet. The library has been getting people talking about performance and thinking about how to improve their web applications.

Unfortunately for me, I regretted releasing the library almost instantly, or at least as soon as Mike O’Brien suggested a much better name for it on Twitter:

For the uninitiated, a courtesy flush refers to the first, early flush you do in the restroom to avoid stinking the place up.


My library is basically the same thing, except in a much more hygienic environment. Flushing your HTTP response early benefits everyone too.

All that to say I’ve decided to rename PerfMatters.Flush to CourtesyFlush. I’ve “redirected” the old package to use the new one, so you don’t need to worry if you were using the old library. In addition, I’ve also added support for .NET 4.0 in this release.

PerfMatters.Flush Goes 1.0!

In my previous two performance related posts I’ve gone on and on about the benefits of flushing an HTTP response early and how to do it in ASP.NET MVC. If you haven’t read those yet, I recommend you take a quick moment to at least read Flushing in ASP.NET MVC, and if you have a little extra time go through More Flushing in ASP.NET MVC as well.

I think those posts did a decent job of explaining why you’d want to flush early. In this post I’m going to dig into the details of how to flush early using my library, PerfMatters.Flush.

Three Usage Patterns

I’ve tried to make PerfMatters.Flush easy to use by providing a few different usage models. Pick the one that works best for your scenario, and feel free to mix and match across your application.

1. Attribute Based

The easiest way to use PerfMatters.Flush is via the [FlushHead] action filter attribute, like this:

[FlushHead(Title = "Index")]
public ActionResult Index()
{
      // Do expensive work here
      return View();
}

The attribute can be used alone for HTML documents with a static <head> section. Optionally, a Title property can be set for specifying a dynamic <title> element, which is very common.

2. Code Based

For more complex scenarios, extension methods are provided which allow you to set ViewData or pass along a view model:

public ActionResult About()
{
     ViewBag.Title = "Dynamic title generated at " + DateTime.Now.ToLocalTime();
     ViewBag.Description = "A meta tag description";
     ViewBag.DnsPrefetchDomain = ConfigurationManager.AppSettings["cdnDomain"];

     this.FlushHead();

     // Do expensive work here
     ViewBag.Message = "Your application description page.";
     return View();
}

As you can see, this mechanism allows for very dynamic <head> sections. In this example you could imagine a <title> element, <meta name="description" content="…"> attribute (for SEO purposes) and <link rel="dns-prefetch" href="…"> (for performance optimization) all being set.

3. Global Lambda

Finally, PerfMatters.Flush offers a model to flush early across all your application’s action methods – which simply leverages the same global action filters that have been in ASP.NET MVC for years now:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
     filters.Add(new HandleErrorAttribute());
     filters.Add(new FlushHeadAttribute(actionDescriptor =>
         new ViewDataDictionary<CustomObject>(new CustomObject())
         {
             {"Title", "Global"},
             {"Description", "This is the meta description."}
         }));
}

In this case we pass a Func<ActionDescriptor, ViewDataDictionary> to the FlushHeadAttribute constructor. That function is executed for each action method. This example is pretty contrived since the result is deterministic, but you can see both a custom model (CustomObject) and ViewData in use at the same time.

In real-world usage, the actionDescriptor parameter would be analyzed and leveraged to get data from some data store (hopefully in memory!) or from an IoC container.

Installation & Setup

Getting up and running with PerfMatters.Flush is as easy as installing the NuGet package.

From there, you’ll want to move everything you’d like to flush out of _Layout.cshtml to a new file called _Head.cshtml (which sits in the same directory). Here’s an example of _Head.cshtml:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     @if (ViewBag.Description != null)
     {
         <meta name="description" content="@ViewBag.Description">
     }
     <title>@ViewBag.Title - My ASP.NET Application</title>
     @Styles.Render("~/Content/css")
     @Scripts.Render("~/bundles/modernizr")
</head>

Here’s its corresponding _Layout.cshtml file:

@Html.FlushHead()
<body>
     <!-- Lots of lovely HTML -->
     <div class="container body-content">
         @RenderBody()
         <hr />
         <footer>
             <p>&copy; @DateTime.Now.Year - My Application</p>
         </footer>
     </div>
     @Scripts.Render("~/bundles/jquery")
     @Scripts.Render("~/bundles/bootstrap")
     @RenderSection("scripts", required: false)
</body>
</html>

Notice the @Html.FlushHead() method on the first line? That’s the magic that stitches this all together. It allows you to use MVC the way you’re used to on action methods that you don’t want to flush early, and opt in (via one of the usage models described above) when you do.

Wrapping Up

PerfMatters.Flush has been fun for me to work on. It’s currently being used in production on getGlimpse.com and has nicely boosted perceived performance on the pages that do any database queries.

To be honest, I wish that PerfMatters.Flush didn’t have to exist. I’ve asked the ASP.NET team to look into baking first-class support for flushing early into the framework, but I don’t foresee that happening for quite a while. Either way, there are tons of applications built in MVC 3, 4 and 5 that can leverage this now.

The project is open source, and the code is hosted on GitHub. I’d love to hear your feedback on ways to make it better and easier to use.

Conference Session Videos Online

The past few weeks I was honored to be accepted at two European developer conferences: Techorama in Belgium and NDC in Norway. Both conferences were amazing, and I’m really hoping the organizers have me back again next year.

Over the span of both conferences I gave four presentations. I received lots of positive feedback about them, which I was really happy about. Most of them were recorded, and their videos are now available online. Here are their titles, abstracts, and links to slides, demo code and videos:


Introducing Nonacat (Guerilla Hacking an Extra Arm onto GitHub) (Techorama)

GitHub, as instrumental as it is, knows that they cannot possibly offer a one-size-fits-all service that meets the needs of every OSS project. With that in mind, come join Nik Molnar, co-founder of Glimpse, for a session on how to extend GitHub by leveraging their APIs, cutting-edge web technologies and free/open source tools to provide users, contributors and project maintainers with a better overall experience.
This session is not about Git itself and is suitable for OSS project maintainers and all users of GitHub.

Techorama did not record their sessions, but there is a slightly outdated recording of this session online from MonkeySpace last year.


Azure Web Sites Secrets, Exposed! (NDC)

Microsoft’s premier cloud solution for custom web applications, Windows Azure Web Sites, has brought the DevOps movement to millions of developers and revolutionized the way that servers are provisioned and applications deployed.
Included with all the headline functionality are many smaller, less-known or undocumented features that serve to greatly improve developer productivity. Join Microsoft MVP and veteran web developer Nik Molnar for a whirlwind tour of these secret features and enhance your cloud development experience.
This beginner session is suitable for developers both using and curious about WAWS.

Watch the session on Vimeo.


Full Stack Web Performance (Both)

Modern users expect more than ever from web applications. Unfortunately, they are also consuming applications more frequently from low bandwidth and low power devices – which strains developers not only to nail the user experience, but also the application’s performance.
Join Nik Molnar, co-founder of the open source debugging and diagnostics tool Glimpse, for an example-driven look at strategies and techniques for improving the performance of your web application all the way from the browser to the server.
We’ll cover how to use client and server side profiling tools to pinpoint opportunities for improvement, solutions to the most common performance problems, and some suggestions for getting ahead of the curve and actually surpassing users’ expectations.
This session covers a wide array of topics, most of which would be classified within the 200 level.

Watch the session on Vimeo.

If you have any thoughts or feedback on any of the sessions – please leave a comment! I’m always trying to make my sessions better.

More HTTP Flushing in ASP.NET MVC

My last post about HTTP flushing in ASP.NET MVC generated quite a bit of buzz and several really good questions. If you haven’t yet, I recommend you read that post before continuing on here. With this post I’ll answer some of the questions that arose about flushing early and offer a few handy tips.

When to Flush?

Steve Souders does a great job of explaining this technique in his book, Even Faster Web Sites, and on his blog, but there still seems to be some confusion about flushing. For example, Khalid Abuhakmeh mentioned that it “seems arbitrary where you would place the flush” and asked:

[embedded tweet]

While one could certainly flush lots of small chunks of content to the client, and there may be circumstances that call for doing just that, this recommendation is specifically aimed at (1) improving the user’s perceived performance and (2) giving the browser’s parser and network stack a head start to download the assets required to render a page.

Given that, the more content that can be flushed early, before any expensive server side processing (e.g. database queries or service calls), the better. Typically this means flushing just after the </head> tag, or just before the first @RenderSection()/@RenderBody() Razor directive.

Here’s an example of the difference that flushing makes on a waterfall chart displaying network activity:

[Waterfall charts: before and after flushing]

Notice that in the “Before” image, the two style sheets and one JavaScript file that are referenced in the <head> section of the page aren’t downloaded until the original HTML file has finished downloading. When flushing the <head> section, however, the browser is able to start downloading these secondary resources and rendering any available HTML immediately – providing the user with visual feedback even sooner.

What’s more, if the style sheets contain rules that reference tertiary resources (e.g. background images, sprites or fonts) and the flushed HTML matches one of those rules, the tertiary resources will be downloaded early as well. Ilya Grigorik, a developer advocate who works at Google on the Chrome web performance team, recently wrote a post about font performance with the tip to optimize the critical rendering path – which flushing directly helps with.

So basically, it’s best to flush as much of the beginning of an HTML response as you can to improve perceived performance and give the browser a head start on downloading not only secondary assets, but often times tertiary ones as well.

How can I See the Improvement?

The effects of flushing early are most easily seen on a waterfall chart. (New to waterfall charts? Check out Tammy Everts’ excellent Waterfalls 101 post.) But how does that correlate to the user’s experience? That’s where Speed Index comes in. Speed Index is currently the best metric we have to measure the perceived performance of a page load. WebPageTest.org measures Speed Index for any given page by doing a series of screen captures of the loading process, analyzing the images over time, and producing the metric.


[Screen captures of a loading page]

[Speed Index comparison of two loads]

The WebPageTest documentation covers the algorithm for calculating the metric in depth, and offers suggested targets based on the Alexa Top 300K. The lower a site’s Speed Index, the better.

Personally I’ve never seen flushing early increase a page’s Speed Index. In general, I can’t think of a way that it would hurt performance, but you may not need to use the technique if you don’t have anything in the head to download or any server side processing. As always, your mileage may vary and you should test the results on your own site.

What About Compression?

You should still compress your responses, flushing doesn’t change that recommendation! Compression affects the content that is sent to the client (and adds a Content-Encoding: gzip header).

Flushing, on the other hand, affects how the content is sent to the client (and adds a Transfer-Encoding: chunked header.)

[HTTP response with compression and flushing in Fiddler]

The two options are completely compatible with each other. Souders reported some configuration issues with Apache flushing small chunks of content – but, based on my testing, IIS doesn’t seem to have these problems. Using the default configuration, IIS compresses each chunk of flushed content and sends it to the client immediately.

How do I Debug?

Flushed HTTP responses are really no different than any other HTTP response. That means the in-browser F12 development tools and HTTP proxies like Fiddler work as expected.

One caveat worth mentioning though is that Fiddler, by default, buffers text/html responses, which means that when Fiddler is running the benefits of flushing early won’t be observable in the browser. It’s easy enough to fix this though: simply click the “Stream” icon in the Fiddler toolbar, as covered in the Fiddler documentation.

[In Stream mode, Fiddler immediately releases response chunks to the client]

Are There Any Other Considerations?

Since flushing early is done in the name of giving the browser a head start, be sure to provide the browser’s parser everything it needs to efficiently read the HTML document as soon as possible.

Eric Lawrence, ex-product manager for Internet Explorer, has a post detailing the best practices to reduce the amount of work that the parser has to do to understand a web page. Essentially, begin HTML documents like this and the parser will not have to wait or backtrack:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8">
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <base /><!-- Optional -->
     <title>...</title>
     <!-- Everything else -->

What’s the Easiest Way to do This?

I’ve created a simple open source project called PerfMatters.Flush that attempts to make flushing early in ASP.NET MVC as simple as possible. It’s still very alpha-y, but it’s already live on the getGlimpse.com homepage. You can play with the bits now, or wait for my follow up post that details how to use it.

Flushing in ASP.NET MVC

I’ve written a follow-up to this post that answers many of the common questions about flushing early. Be sure to check it out.

The Setting

Before the beginning of this decade, Steve Souders released two seminal books on the topic of web performance: High Performance Web Sites and Even Faster Web Sites. The findings and subsequent suggestions that came out of those books changed the face of web development and have been codified into several performance analysis tools including Yahoo YSlow and Google PageSpeed.


Most professional web developers that I’ve met over the past five years are familiar with Souders’ recommendations and how to implement them in ASP.NET MVC. To be fair, they aren’t that difficult:

  • HTTP Caching and Content Compression can both be enabled simply via a few settings in web.config.
  • Layout pages make it easy to put stylesheets at the top of a page and scripts at the bottom in a consistent manner.
  • The Microsoft.AspNet.Web.Optimization NuGet package simplifies the ability to combine and minify assets.
  • And so on, and so forth…

The recommendation to “Flush the Buffer Early” (covered in depth in chapter 12 of Even Faster Web Sites), however, is not so easy to implement in ASP.NET MVC. Here’s an explanation of the recommendation from Steve’s 2009 blog post:

Flushing is when the server sends the initial part of the HTML document to the client before the entire response is ready. All major browsers start parsing the partial response. When done correctly, flushing results in a page that loads and feels faster. The key is choosing the right point at which to flush the partial HTML document response. The flush should occur before the expensive parts of the back end work, such as database queries and web service calls. But the flush should occur after the initial response has enough content to keep the browser busy. The part of the HTML document that is flushed should contain some resources as well as some visible content. If resources (e.g., stylesheets, external scripts, and images) are included, the browser gets an early start on its download work. If some visible content is included, the user receives feedback sooner that the page is loading.

A few months ago Steve revisited this guidance and provided examples of the difference that flushing can make to an application’s performance. His post inspired me to try the same on an ASP.NET MVC project that I’m working on, which led to this post.

The Conflict

Since .NET 1.1, ASP.NET has provided a mechanism to flush a response stream to the client with a simple call to HttpResponse.Flush(). This works quite well when you are incrementally building up a response, but the architecture of MVC, with its use of the command pattern, doesn’t really allow for this. (At least not in a clean manner.) Adding a call to Flush() inside a view doesn’t do much good.

@{
     Response.Flush();
}

This is because MVC doesn’t execute the code in the view until all the other work in the controller has completed – essentially the opposite of what the czar of web performance recommends.

The Climax

Because I’m a believer that #PerfMatters, I decided to take matters into my own hands to see if I could do anything better.

First, I realized that I could get around a few issues by leveraging partial results, manually executing and flushing them, like so:

public ActionResult FlushDemo()
{
       PartialView("Head").ExecuteResult(ControllerContext);
       Response.Flush();

       // do other work
       return PartialView("RemainderOfPage");
}

I think that looks pretty ugly, so I’ve taken things a step further and removed the cruft around executing the result by creating a base controller with its own Flush() method:

public class BaseController : Controller
{
     public void Flush(ActionResult result)
     {
         result.ExecuteResult(ControllerContext);
         Response.Flush();
     }
}

I think my new Flush() method clarifies the intent in the action method:

public ActionResult FlushDemo()
{
     Flush(PartialView("Head"));

     // do other work
     return PartialView("RemainderOfPage");
}

What I’d really like to be able to do is leverage the yield keyword. Yield seems like the natural language and syntax to express this. I was able to cobble together this example:

public IEnumerable<ActionResult> Test()
{
      yield return PartialView("Header");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Lorem");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Vestibulum");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Sed");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Footer");
}

I got that working with a pretty ugly hack, but leveraging the yield keyword and IEnumerable<ActionResult> like this should theoretically be possible by making a few changes to MVC’s default action invoker, no hack necessary. Unfortunately, C# throws a few curve balls at this, since you can’t combine the use of yield with async/await – which I think would be a pretty common usage scenario.
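For the curious, here’s roughly what that action invoker change might look like – an untested sketch against MVC 5’s ControllerActionInvoker, not the hack mentioned above:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class FlushingActionInvoker : ControllerActionInvoker
{
    protected override ActionResult CreateActionResult(ControllerContext controllerContext,
        ActionDescriptor actionDescriptor, object actionReturnValue)
    {
        var results = actionReturnValue as IEnumerable<ActionResult>;
        if (results == null)
            return base.CreateActionResult(controllerContext, actionDescriptor, actionReturnValue);

        // Iterators are lazily evaluated: each iteration runs the action's code
        // up to the next yield, so every partial view is executed and flushed
        // as soon as it is produced.
        foreach (var result in results)
        {
            result.ExecuteResult(controllerContext);
            controllerContext.HttpContext.Response.Flush();
        }
        return new EmptyResult();
    }
}
```

A controller would opt in by assigning `ActionInvoker = new FlushingActionInvoker();` in its constructor. The async/await limitation still applies, since the action itself is an iterator.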

The Resolution?

It looks like splitting up a layout file into multiple partial views and using my Flush() helper method is the best that we can do right now.

Unfortunately,

  • Yielding multiple views isn’t currently supported by MVC, and even if it were it would be incompatible with the current C# compiler.
  • Partial views are the best we can get to break up a response into multiple segments – which is painful. It’d be nice if I could ask Razor to render just a section of a layout page, which would reduce the need for partial views.

I’m hoping that the ASP.NET team, or my readers, can come up with a better way to handle flushing in the future, but I wanted to at least walk you through what your options are today.

For those interested, I might pull together some of this thinking into a NuGet package that leverages action filter attributes to simplify the syntax even further. If you like the sound of that, encourage me on Twitter: @nikmd23

Packaging Source Code With NuGet

.NET developers depend on binaries a lot. Assemblies and executables are our typical unit of deployment and consumption, and NuGet itself is basically a sophisticated binary distribution system.

I don’t think there is anything necessarily wrong with that. Assemblies offer a lot of power and have plenty of benefits, but a few times over the last year or so I’ve wondered if our assembly usage follows the law of the instrument.

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” - Abraham Maslow

Plenty of other development communities very happily exist in a world without assemblies or binaries. The Ruby gems and npm repositories are absolutely filled to the brim with packages that contain no binary files whatsoever. What do they contain? Source code.

Recently I’ve been wanting to create a NuGet package that would contain common AutoFixture configuration for intrinsic ASP.NET types that I find myself repeatedly using. Unfortunately, every other user of the package would want to slightly tweak the configuration for their project, so I figured I’d simply ship the package with only the source code (no pre-compiled assembly) so that they could easily do that.

When I told Louis DeJardin about this idea earlier in the year at MonkeySpace, his eyes lit up and he got excited. It turns out he’s been thinking about source only packages quite a bit and has even released several OWIN packages as source only.

Louis told me about a few best practices and conventions that he has found useful when creating source only packages. I’ve noticed that only Damian Hickey has picked up on these, so with Louis’ permission, I’ve decided to publish them here for greater exposure:

NuGet Source-Only Package Conventions and Practices

  1. Package IDs should end in .Sources, which aligns well with and builds upon the .Sample suffix recommended in the NuGet Package Conventions documentation.
  2. In the .nupkg itself, source code files should be placed in a directory with the path content/App_Packages/{package-id}.{package-version}. Similar to how the .Sources suffix builds upon the prior art of .Samples, the App_Packages directory follows the App_Start folder nomenclature used by David Ebbo’s popular WebActivator package.

    Here’s an example of a package following this convention, as seen in NuGet Package Explorer.
    This convention also allows for a very convenient upgrade path which I’ll cover later on.

  3. Source-Only packages can depend on any other package, including other Source-Only packages.
  4. Source-Only packages may leverage NuGet’s Source Code Transformations (*.pp files) to inject project properties into the source. This is most often seen with the use of the $rootNamespace$ property, but any project property can be used.
  5. In some situations, it may be useful to also ship a standard “binary” package in addition to the Source-Only package.
  6. Types in Source-Only packages should be marked as internal by default in order to not pollute the target code’s public API.
  7. Consider using conditional compilation symbols or partial classes in Source-Only packages to provide flexibility and give users the ability to customize the source.

    Examples of this technique include allowing for multiple platform targeting options and changing the accessibility of types to public when desired. SimpleJson has a few good examples of this in their source code.
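To make conventions 4, 6 and 7 concrete, a single .pp file can combine a source transformation with a compilation symbol. This is a hypothetical sketch – the package, class, and symbol names are all invented for illustration:

```csharp
// StringUtils.cs.pp — NuGet substitutes $rootNamespace$ with the target
// project's root namespace when the package is installed (convention 4).
namespace $rootNamespace$.App_Packages.MyUtils.Sources
{
#if MYUTILS_PUBLIC
    public static partial class StringUtils   // opt-in public surface (convention 7)
#else
    internal static partial class StringUtils // internal by default (convention 6)
#endif
    {
        internal static string Truncate(string value, int length)
        {
            // Returns the input unchanged when it is already short enough.
            if (value == null || value.Length <= length) return value;
            return value.Substring(0, length);
        }
    }
}
```

A consumer who wants the types public simply defines MYUTILS_PUBLIC in their project’s build settings; everyone else keeps their public API unpolluted.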

When to Create Source-Only Packages

There are plenty of situations where a Source-Only package does not make sense. Here’s a few things to consider:

  • DO consider creating a Source-Only package for “utility” libraries that feature heavy usage of static and/or extension methods. Examples of these types of utility libraries include unit test assertion libraries (see note below!) and the popular DataAnnotationsExtensions package.
  • DO consider creating a Source-Only package for small single purpose libraries. SimpleJson is already doing this (though not following these conventions) but you can imagine any code appropriate for a blog post or Gist would fit the definition well. (Aside: a Gist to Source-Only NuGet Package service would be pretty useful!)
  • DO consider creating a Source-Only package for common configuration and setup code or any code which will require tweaking by the user.
  • DO NOT consider creating a Source-Only package as a means to simply make step-debugging easier. Instead leverage a symbols package.

Source-Only Package Update Process

One of the nice things about assemblies is that the process of versioning them is well understood. How do we version and update source-only NuGet packages? Luckily, NuGet’s native handling of content works in our favor. Let’s explore with an example:

I’ve created a source-only package called Reverse.Sources that contains an extension method for reversing a string. Let’s install it:

Install-Package Reverse.Sources -Version 1.0.0


Great, now our project contains that StringExtensions.cs file with an implementation to reverse strings. It compiles right along with our application and is relatively out of our way.
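The post doesn’t show the file’s contents, but a version 1.0.0 StringExtensions.cs might plausibly look like this (a hypothetical sketch):

```csharp
using System;

public static class StringExtensions
{
    public static string Reverse(this string value)
    {
        // 1.0.0: no guard clause, so a null input throws a NullReferenceException.
        var chars = value.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}
```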

Unfortunately, version 1.0.0 of my package had a bug and blows up if a null string is passed into the Reverse method. I’ve added a simple guard clause to fix the problem and released it in version 1.1.0. Let’s update:

Update-Package Reverse.Sources


Notice Solution Explorer looks nearly identical – Reverse.Sources.1.0.0 was simply replaced, along with all of its files, by Reverse.Sources.1.1.0. I’ve updated my project without any troubles and I have that nice bug fix now.

But what if we had made changes to StringExtensions.cs? NuGet would have simply left behind the files you’ve edited.


We’d know that there was a problem too because the compiler would complain with a “The namespace ‘Sources_Example’ already contains a definition for ‘StringExtensions’” error.

To fix that error we can use a text diff/merge tool to move the changes over and delete the old 1.0.0 folder.

To me this is a pretty clean upgrade process. Sure we could sometimes get into a situation where we have to deal with merging, but we also get the benefits of working directly with the code.

Are Source-Only Packages A Bad Idea?

Perhaps yes, perhaps no. I don’t think they are that crazy of an idea though. Large numbers of developers outside the .NET ecosystem already work almost exclusively with source only packages. Further, I’d propose that you do as well if you’ve ever included jQuery, Modernizr or any other JavaScript NuGet package in your project.

I for one probably wouldn’t want all packages to be source-only (we certainly won’t be shipping Glimpse like this!), but there are situations where I think it could be extremely useful – particularly in scenarios involving extensibility/tweaking and reducing dependency overhead when I might have reached for ILMerge otherwise.

I’m hoping that this post can start a bit of a conversation about the idea of source only packages and increase the community’s comfort with them. I’m interested to hear the thoughts of the NuGet team and will be submitting these conventions to the NuGet documentation shortly.


Note: It looks like this is on Brad Wilson’s mind too!


Updated Oct 25th with feedback from Phil Haack, Louis DeJardin, Prabir Shrestha and Andrew Nurse. Thanks for the feedback guys!

Enjoy the Perks of Open Source Development

I’ve been working on Glimpse for a little over two and a half years now. Helping the community and collaborating with brilliant developers worldwide is really rewarding. It’s certainly the biggest benefit of working on open source software.

There are plenty of other smaller perks though. One of my favorites is the availability of all types of free world class tooling and services that assist in the efforts of any given project. I use several of them, for instance:

When I think about all the sponsorships I’ve received, I start to feel a bit like this guy who is completely hooked up by companies who are willing to help him do what he does best:


I’m very thankful for all of the support I’ve received. Neither my projects nor I would be in the position we are in without the help. But here’s the secret: I’m not special. Licenses and freebies are available to most open source projects; you just have to know where to ask for them. Unfortunately, figuring that out can be intimidating, which is why I decided to launch ossPerks.com.


OSS Perks is a crowd sourced listing of all the products and services that are made freely available to open source projects. It started in public beta a few months ago with just a listing of the perks that I knew about. Since then the list has more than tripled via contributions from several different people and a Twitter account has been created so that developers can follow along as perks are added to the list.

The site is open source in and of itself, so you can easily add a new perk that you’ve found. I’m leveraging GitHub Pages and Jekyll to maintain the list so it’s very easy to contribute to directly from your browser by just editing our YAML file. (YAML is a plain text data format popular in the Ruby community, think of it as the offspring between INI file syntax and JSON.)
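As a rough idea of what contributing looks like, an entry in that YAML file might resemble the following. The field names here are invented for illustration – check the actual file in the repository for the real schema:

```yaml
- name: Example CI Service
  url: http://example.com/opensource
  description: Free build minutes for public open source repositories
```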

If you’d like to contribute but this all sounds a bit foreign to you, I’ve got a quick little tutorial right on the repository’s readme file that should be able to get you up and running.

So go take a look at the available perks and take full advantage of the support offered to you for your open source projects. Trust me, it will make you feel like a world class athlete (or race car driver)!

PS – Fellow Microsoft MVP Xavier Decoster has forked the code for OSSPerks and created MVPPerks, a similar listing of the perks available to MVPs!

Testing MVC View Names with [CallerMemberName]

We’ve all written ASP.NET MVC controller actions that simply return a view. They tend to look just like this:

public ActionResult Index()
{
    return View();
}

It’s also fairly common to test those actions to ensure that the proper view is being returned:

public void Should_Return_Index_View(HomeController sut)
{
    var actionResult = sut.Index();
    var viewResult = actionResult as ViewResult;

    Assert.Equal("Index", viewResult.ViewName);
}

If you’ve tried this before you know that the above test will fail. Why? Because viewResult.ViewName actually has a value of string.Empty, not Index. MVC doesn’t mind this, because if a view’s name is empty, it simply defaults to use the value of {action} from the route data collection. That unfortunately makes it difficult to ensure the proper view is being returned in tests.

We could re-work our action method to explicitly specify the name of the view returned (return View(“Index”)), but I really like the simplicity of having a default convention for view names. Instead I’ve been leveraging C# 5.0’s new caller information attributes to retain the simplicity of the convention, but still have the view name explicitly set for testing purposes.
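If caller information attributes are new to you, the mechanics boil down to this minimal, standalone illustration:

```csharp
using System.Runtime.CompilerServices;

static class CallerDemo
{
    // The compiler substitutes the default value with the name of the
    // calling member — unless the caller supplies a value explicitly.
    public static string WhoCalledMe([CallerMemberName] string name = null)
    {
        return name;
    }

    public static string Index()
    {
        return WhoCalledMe(); // compiled as WhoCalledMe("Index")
    }
}
```

Calling `CallerDemo.Index()` returns "Index", while an explicit argument like `WhoCalledMe("Custom")` wins over the compiler-supplied name.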

To do this I’ve added the following two methods to the base controller that all of my project’s controllers inherit from:

public new ActionResult View([CallerMemberName] string actionMethod = null)
{
    return base.View(actionMethod);
}

public ActionResult View(object model, [CallerMemberName] string actionMethod = null)
{
    return base.View(actionMethod, model);
}

Now, when I re-execute the test, it passes. That’s because the actionMethod parameter is automatically set by the compiler to be the name of the calling method, which in this case is Index. I can still pass in an overriding view name whenever I want, and the compiler will just get out of the way. Nifty!

NOTE: I will point out that you may want to consider if you even need to test the name of a ViewResult to begin with. Assuming you do though, I find this technique to be much cleaner than trying to mock out a whole request just to see what the {action} is.
