nik codes

301 Moved Permanently, Location: Microsoft


A few weeks ago, I began a new gig as a Program Manager on the Cross Platform and Open Tooling team at Microsoft!


Needless to say, I’m very excited about this opportunity. Let me tell you why:

  1. It’s a very exciting time to join Microsoft. The entire web stack is being redone: there’s a new browser in Edge, a new cross-platform web framework in ASP.NET 5, and Azure, which is technically the “oldest hat” of the three, is still under heavy development and revs every six weeks. As a lover of the web, I can’t think of a better company to be working for right now than Microsoft.
  2. I’m working with a cutting-edge group that embodies, perhaps more than any other team, what the “new Microsoft” is all about. Case in point: after 18+ years of successfully avoiding Apple and Mac computers, I’ve been issued a MacBook Pro as my work machine. I’m still getting used to the OS X way of doing things, but I’m very excited to see how the other side lives. Need more proof that this team is doing something different? Well, they released Visual Studio Code, a cross-platform text editor, just a few weeks ago – and that’s pretty cool.
  3. I’ll be working with Anthony, the co-founder of Glimpse. We’ve worked together for nearly five years now, across three different companies. If the past is any indication, we’ll be able to do great things together at Microsoft.
  4. My family and I get to stay put in New York City. Remote work is nothing new to me, and I’m happy that I don’t have to disrupt my family’s life to take this opportunity.

I’d also like to take a moment to thank Red Gate for the past three years. Working for them has been a joy. Not only did I get to work directly with the community to bring Glimpse to new heights, but I also got to see the world speaking at tons of conferences and made many great new friends. I’ll certainly be staying in touch with everyone there and I wish them the best of luck as our paths take us in separate directions.

I’ll continue to periodically post on this blog as well as tweet @nikmd23.

What Military Strategy Teaches Us About Web Performance


In December of 1974, Communications of the ACM magazine published Computer Programming as an Art by Donald Knuth. In it, Knuth wrote these now famous words:
Communications of the ACM, December 1974

The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil in programming.

A couple of decades earlier, at the conclusion of the Korean War, US Air Force fighter pilot Colonel John Boyd graduated at the top of his class from the prestigious Fighter Weapons School in Nevada. Shortly thereafter he became an instructor at the school, revolutionizing aerial tactics and developing the concept of the OODA loop.


The OODA loop is a recurring decision-making cycle that, while originally intended for use by fighter pilots, has now been adopted in many fields including litigation and business. Boyd broke the loop into four phases:

  1. Observe: Collect data from the surrounding environment
  2. Orient: Analyze and synthesize the data to form a mental model
  3. Decide: Determine a course of action based on the mental model
  4. Act: Physically act on or implement the decision

The OODA loop begins with Observation and ends in Action, but at any point in the cycle, if the situation changes or you do not have a clear picture, you can reset and go directly back to Observation. The more salient point, though, is that Action is never reached without first Observing, Orienting and Deciding.

When considering web performance, following the OODA loop ensures that developers avoid the evil, premature optimizations that Knuth warns about. It is very difficult to optimize in the wrong place or at the wrong time if you’ve properly observed, oriented and decided first. Let’s consider how the OODA loop applies to web performance in practice:

  1. Observe
    Leverage web analytics and real user monitoring (RUM) techniques to gather information about page usage and performance (see the sketch after this list). There are plenty of web analytics libraries available that are easy to set up and use. I’ve had great success with Google Analytics in the past. For RUM data, the W3C has authored several specifications (Navigation Timing, Resource Timing, User Timing, etc.) that have been well adopted by browser vendors and provide most of the data required.
  2. Orient
    With usage and performance data gathered, leverage simple statistics and charting tools to build up a mental model that’s easy to reason about. Median values, time series charts and histograms are commonly used and easy to generate, even using software as common as Excel.
  3. Decide
    Identify popular, yet underperforming functionality within the application and decide how to improve it with carefully calibrated synthetic tests. Synthetic testing systems, like the perennial favorite WebPageTest, use highly instrumented browsers to provide deep insight into the loading and performance of web apps. This insight allows developers to confidently decide how to best improve the performance of their targeted functionality.
  4. Act
    Update the code, commit it, deploy it, then take the most important step in the cycle.
  5. Repeat
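
To make the Observe phase concrete, here’s a minimal sketch of the kind of RUM collection step 1 describes. It assumes Google Analytics’ analytics.js is already loaded on the page; any analytics or logging endpoint would do just as well:

window.addEventListener('load', function () {
    // loadEventEnd isn't populated until just after the load event fires
    setTimeout(function () {
        var t = performance.timing;
        var pageLoad = t.loadEventEnd - t.navigationStart;
        // Hand the measurement off for the Orient phase;
        // ga() here assumes Google Analytics is on the page
        ga('send', 'timing', 'RUM', 'pageLoad', pageLoad);
    }, 0);
}, false);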

Are you worried about where to focus your performance optimization efforts? Do you know if right now is the right time and the right place? Build confidence and assurance, and keep yourself on the rails with the OODA loop.


If some of the terms I used in this post, like real user monitoring, synthetic testing and histograms, throw you for a loop, be sure to check out my Tracking Real World Web Performance Pluralsight course where I cover these concepts in detail, with examples and demos of how to automate the steps of the OODA loop.

Something Razor Should Learn From JSX

Alright, here’s the thing: I often find the interaction model between Razor Views and ViewModels in ASP.NET MVC to be clumsy and annoying. Recently I’ve searched for ways to make it better, and there don’t seem to be any. This post highlights how I came to the aforementioned conclusion, lists what I’ve tried to make things better, and proposes a way to improve Razor.

Champion readers of nikcodes.com, please take a gander at what I’m thinking, I’d love to hear your thoughts on all this.

Declare & Proceed

As I’ve learned more about React, Facebook’s JavaScript templating library, my thoughts on coupling Views and ViewModels have been challenged. (For the unfamiliar, React solves many of the same problems as Knockout, but is more similar in style to Razor than Knockout.)


To create Views, React (optionally) uses a somewhat contentious superset of JavaScript they call JSX. JSX mixes declarative XML and HTML literals directly into procedural JavaScript code. Take this simple JSX “Hello World” example for instance:

var HelloMessage = React.createClass({
    render: function() {
        return <div>Hello {this.props.name}</div>;

    }
});
 
React.render(<HelloMessage name="Nik" />, mountNode);

Notice the <div> and <HelloMessage> tags sprinkled into that JavaScript? The argument is that it’s easier to compose Views in a declarative language like XML, so why not provide that power in the same place as the UI logic? Of course, JSX isn’t something a browser natively understands so this all gets “compiled” into something “native”, in this case, standard, run-of-the-mill JavaScript:

var HelloMessage = React.createClass({
    displayName: "HelloMessage",
    render: function () {
        return React.createElement("div", null, "Hello ", this.props.name);
    }
});
 
React.render(React.createElement(HelloMessage, { name: "Nik" }), mountNode);

The ability to mix declarative and imperative statements together in a single language is nothing new. VB.NET has a very similar feature called XML Literals, as well as LINQ for querying data structures (which C# also has). Heck, the next version of JavaScript, ES6, has support for Template Literals baked right in as well!
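
For instance, an ES6 template literal embeds imperative expressions directly inside an otherwise declarative string:

// ES6 template literal: declarative markup with ${...} expressions mixed in
var name = 'Nik';
var markup = `<div>Hello ${name}</div>`;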

What About the Server Side?

React, and front-end JavaScript developers in general, aren’t the only web developers that leverage Views, ViewModels and an MV* architecture though. ASP.NET MVC developers have been using Razor to mix HTML with C# for years now. To be fair, the balance in Razor Views tends to lean more towards the declarative with little “nuggets” of C# logic sprinkled throughout. For example, see this little HelloMessage.cshtml Razor template:

@model Person
 
<div>Hello @Model.Name</div>

Like the JSX file, this Razor template gets “compiled” into something “native” – C# in this (abridged) case:

[PageVirtualPathAttribute("~/Views/Home/HelloMessage.cshtml")]
public partial class HelloMessage : WebViewPage<Person>
{
    public override void Execute()
    {
        WriteLiteral("<div>Hello ");
        Write(Model.Name);
        WriteLiteral("</div>");
 
    }
}

So let’s stop for a moment and compare JSX to Razor:

JSX | Razor
Front-End (Mostly) | Back-End (Mostly)
Imperative code (JavaScript) with declarative XML/HTML mixed in. | Declarative code (HTML) with imperative C# or VB mixed in.
UI composition, logic and ViewModel contained in the same file. | UI composition and logic contained in the same file; ViewModel defined in a separate class file.
Compiles to native format: JavaScript | Compiles to native format: C# or VB

You’ll note that for the most part, JSX is pretty similar to Razor, with a glaring difference about where the ViewModel is defined. I never perceived this to be an issue, until I saw JSX do it a better way.

Pulling Everything Together

Managing ViewModels in Razor and ASP.NET MVC isn’t necessarily difficult, but it certainly isn’t as easy as it is in the React/JSX world. Over time, ASP.NET MVC developers have established well-trodden best practices to help manage this disconnect, including:

  1. ViewModels exist as separate classes (as opposed to using the ViewData dictionary).
  2. Each View gets its own explicit ViewModel.
  3. Naming conventions are used to help locate a ViewModel for any given View.

Following these practices means that, for the most part, ViewModels and Views have a one-to-one relationship and are logically paired together. I tend to believe that if two things go together logically, then it sure would be nice if they went together physically as well.

Unfortunately, Razor does not allow us to declare a nested class inside the View to use as the ViewModel. In fact, it doesn’t allow us to declare a class at all. I know this because this weekend I tried several different ways to make it work. I even tried to leverage the powerful Razor Generator to get something that would pass as co-located Views and ViewModels.

If any reader can come up with a better way to couple the physical location of a View and its ViewModel together, I’d love to hear it in the comments. To me, the nicest solution would be if ViewModels could be declared inline with Razor’s @model syntax. Here’s HelloMessage.cshtml again, updated with my proposal:

@model 
{
    public string Name { get; set; }
    public int OtherProperty { get; set; }
}

<div>Hello @Model.Name</div>

Note, I’m no longer using the @model syntax as it was intended. Typically @model is followed with a type, like “@model Person“. In this proposal, I go directly into declaring what might be thought of as an anonymous class. I could then use that class inside my controller like this:

public ActionResult HelloMessage() 
{
    var model = new HelloMessage.Model { Name = "Nik" };
    return View(model);
}

I think there would be a lot less ceremony in Razor and ASP.NET MVC if it could physically couple Views and ViewModels together, at least in the common use case where that makes sense. If you prefer to keep them separate, or you have a reusable ViewModel, that’s fine too; just stick with the familiar “@model {Type}” syntax.

Do you agree? Would you like to see something like this added to Razor? Do you see a way to improve this design, or are there any problems with it? Please let me know!

Squeezing the Most Into the New W3C Beacon API


The Setup

It’s common for many websites to build a signaling mechanism that, without user action, sends analytics or diagnostics information back to a server for further analysis. I’ve created one at least half a dozen times to capture all sorts of information: JavaScript errors, browser and device capabilities, client side click paths, the list goes on and on. In fact, the list is actually getting longer with the W3C’s Web Performance Working Group cranking out lots of great Real User Metrics (RUM) specifications for in-browser performance diagnostics like Navigation Timing, Resource Timing, User Timing and the forthcoming Navigation Error Logging and Frame Timing.

The signaling code, often called a beacon, has traditionally been implemented in many different ways:

  • A JavaScript based timer which regularly and repeatedly fires AJAX requests to submit the latest data gathered.
  • Writing special cookies that get attached to the next “natural” request the browser makes, paired with special server side processing code.
  • Synchronous requests made during unload, sketched just below. (Browsers usually ignore asynchronous requests made during unload, so they can’t be trusted.)
  • Tracking “pixels”: small non-AJAX requests with information encoded into URL parameters.
  • 3rd party solutions, like Google Analytics, which internally leverage one of the options listed above.
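
For a sense of what that synchronous-unload option looks like in practice, here’s a rough sketch (posting to the same /rum/submit endpoint used in the example below). The synchronous flag is both why it works and why it hurts: the browser can’t finish navigating away until the request completes.

window.addEventListener('unload', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/rum/submit', false); // false = synchronous, blocks navigation
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({ navigation: performance.timing }));
}, false);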

Unfortunately, each of these techniques has downsides. Either the amount of data that can be transferred is severely limited, or the act of sending it has negative effects on performance. We need a better way, and that’s where the W3C’s new Beacon API comes into play.

The Solution

With the new Beacon API, data can be posted to the server during the browser’s unload event, without blocking the browser, in a performant manner. The code is rather simple and works as expected:

window.addEventListener('unload', function () {
      var rum = {
              navigation: performance.timing,
              resources: performance.getEntriesByType('resource'),
              marks: performance.getEntriesByType('mark'),
              measures: performance.getEntriesByType('measure')
          };
      rum = reduce(rum); // shrink the payload – see “A Workaround?” below
      navigator.sendBeacon('/rum/submit', JSON.stringify(rum));
}, false);

The Catch

Unfortunately, as of this writing, the Beacon API is not as widely supported as you’d hope. Chrome 39+, Firefox 31+ and Opera 26+ currently support the API. It isn’t supported in Safari and the Internet Explorer team has it listed as “Under Consideration”.

The other catch, and this is the biggie to me, stems from this note about navigator.sendBeacon() in the spec:

If the User Agent limits the amount of data that can be queued to be sent using this API and the size of data causes that limit to be exceeded, this method returns false.

The specification allows the browser to refuse to send the beacon data (thus returning false) if it deems you’re trying to send too much. At this point, Chrome is the only browser that limits the amount of data that can be sent. Its limit is set at right around 64 KB (65,536 bytes exactly).
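
At least the limit is detectable at runtime. Here’s a minimal sketch of a guard around that return value; dropping the bulky resource entries is just one possible fallback policy, and the techniques below aim to avoid ever hitting that branch:

var rum = {
    navigation: performance.timing,
    resources: performance.getEntriesByType('resource')
};
if (!navigator.sendBeacon('/rum/submit', JSON.stringify(rum))) {
    // The beacon was refused for size: retry with a slimmer payload
    rum.resources = [];
    navigator.sendBeacon('/rum/submit', JSON.stringify(rum));
}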

A Workaround?

To be fair, 64 KB sure seems like a lot of data, and it is, but I’ve found myself in the situation where I was unable to beacon back diagnostics information on heavy pages because they had just too much Resource Timing data to send. Being unable to send diagnostics data on the worst performing pages really misses the point of the working group’s charter. Further, this problem will only get worse as more diagnostics information becomes available via all the RUM specifications I mentioned at the top of this post. That said, I’ve implemented several ways to reduce a beacon’s payload size without actually losing or giving up any data:

1. Use DOMString over FormData

The Beacon API allows you to submit four data types: ArrayBufferView, Blob, DOMString or FormData. Given that we want to submit RUM data, FormData and DOMString are the only two we can use. (ArrayBufferView and Blob are for working with arrays of typed numeric data and raw file-like objects.)

FormData seems like a natural way to go, particularly because model binding engines in frameworks like ASP.NET MVC and Rails work directly with them. However, you’ll save a few bytes by using a DOMString and accessing the request body directly on the server.
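
For comparison, here’s roughly what the FormData variant looks like. Each appended field gets wrapped in a multipart boundary plus a Content-Disposition header, which is exactly the per-field overhead a bare DOMString avoids:

var form = new FormData();
// Every append() adds multipart boundary/header bytes on the wire
form.append('rum', JSON.stringify({ marks: performance.getEntriesByType('mark') }));
navigator.sendBeacon('/rum/submit', form);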

For simplicity in both encoding and parsing, I encode the data via JSON. (Though you could try a more exotic format for larger gains.) On the server, with JSON.NET you can parse the request body directly like this:

var serializer = new JsonSerializer();
Rum rum;
using (var sr = new StreamReader(Request.InputStream))
using (var tr = new JsonTextReader(sr))
{
     rum = serializer.Deserialize<Rum>(tr);
}

2. Make Fewer HTTP Requests

My beacon payload size issues arose on pages that had lots of resources (images, scripts, stylesheets, etc.) to download, which yielded very large arrays of Resource Timing objects. Reducing the number of HTTP requests that the page was making (by combining scripts and stylesheets and using image sprites) not only helps with page performance, but also reduces the amount of data provided by the Resource Timing API, which in turn reduces beacon payload sizes.

3. Use Positional Values

As mentioned above, the Resource Timing API yields an array of objects. The User Timing API does the same thing. The problem with JSON encoding arrays of objects is that the keys for each key/value pair are repeated over and over again for each array item. This repetition adds up quite quickly.

Instead, I use a simpler array of arrays structure in which individual values are referenced by position. Here’s the JavaScript to convert from a User Timing API array of objects to an array of arrays:

// convert to [name, duration, startTime]
rum.marks = rum.marks.map(function (e) { 
     return [e.name, e.duration, e.startTime]; 
});
 
// convert to [name, duration, startTime] 
rum.measures = rum.measures.map(function (e) { 
     return [e.name, e.duration, e.startTime]; 
});

On the server I use a custom JSON.NET converter to parse the positional values:

public class UserTimingConverter : JsonConverter
{
     public override object ReadJson(JsonReader reader, 
                                     Type objectType, 
                                     object existingValue, 
                                     JsonSerializer serializer)
     {
         var array = JArray.Load(reader);
         return new UserTiming
         {
             Name = array[0].ToString(),
             Duration = array[1].ToObject<double>(),
             StartTime = array[2].ToObject<double>()
         };
     }
     // ...
}

4. Derive Data on Client

Depending on the requirements, it may be feasible to send fewer values by making some simple derivations on the client. Why send both domainLookupEnd and domainLookupStart if all that’s required is subtracting one from the other to get the domainLookupTime? The more that’s derived on the client, the less raw data there is to send across the wire.
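
As a sketch, each derived value below replaces a pair of raw timestamps from performance.timing:

// Send four durations instead of eight raw timestamps
var t = performance.timing;
var derived = {
    dns: t.domainLookupEnd - t.domainLookupStart,
    connect: t.connectEnd - t.connectStart,
    ttfb: t.responseStart - t.requestStart,
    download: t.responseEnd - t.responseStart
};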

5. Shorten URLs

Resource Timing data, in particular, contains a lot of often redundant URL strings. There are many strategies to reduce URL redundancy (a sketch combining the first two follows the list):

  1. If all the data is being served from the same host, strip the domain and scheme from the URL entirely. (Basically make it a relative URL.) For example: http://domain.com/content/images/logo.png becomes /content/images/logo.png
  2. Shorten common segments into “macros” of limited characters that can be re-expanded later. e.g.: /content/images/logo.png becomes /{ci}/logo.png
  3. The folks at Akamai, who gather tons of Resource Timing data, leverage a tree-like structure to reduce redundancy even more. They structure their payload like this:
    {
         "http://": {
             "domain.com/": {
                 "content/style.css": [ /* array of values */ ],
                 "content/images/": {
                     "logo.png": [ /* array of values */ ],
                     "sprite.png": [ /* array of values */ ]
                 }
             }
         }
    }
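
Here’s a minimal sketch combining the first two strategies. The macro table is made up for illustration and would need to be shared with the server so URLs can be re-expanded:

// Hypothetical macro table; the server needs the same one to expand
var MACROS = { '/content/images/': '/{ci}/', '/content/': '/{c}/' };

function shortenUrl(url) {
    // Strategy 1: strip scheme and domain from same-origin URLs
    url = url.replace(location.origin, '');
    // Strategy 2: collapse common segments into short macros
    for (var segment in MACROS) {
        url = url.replace(segment, MACROS[segment]);
    }
    return url;
}
// On http://domain.com: shortenUrl('http://domain.com/content/images/logo.png')
// returns '/{ci}/logo.png'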

6. Leverage HTTP Headers

Not all data needs to be included in the beacon payload itself. The server can still gather some diagnostics information from the standard HTTP headers on the beacon’s request. These include things like:

  • Referrer
  • UserAgent for browser and device information
  • Application specific user data from cookies
  • Environment specific data via X-Forwarded-For and other similar headers
  • IP Address, and thus approximate geographical location (not technically a header)
  • Date and Time (also not a header, but calculated easily on server)

With this collection of techniques, you should be able to squeeze a little more out of the Beacon API. If you’ve found another way to shave off a few bytes, let me know in the comments.

Developer Arcade

During the holiday season I find myself reflecting over the previous year and celebrating various traditions with family and friends alike.

One of my fondest Christmas memories was back in ‘96 when my brother and I got a Nintendo 64. I remember the two of us playing Wave Race 64 together in his room all night long.

Because of this memory I tend to think about video games, which I don’t typically play, in the lead up to Christmas. I often have a desire to unwind playing them, just a little bit, in an attempt to relive those simpler days.


So this year, I began looking for games that I could play without any guilt. I looked for games that would make me a better software developer, and I found a ton! So if you find yourself bored for a few hours over the holiday break, why not try one of these games out and improve your skills?

Text Based Games

  • Typing.io: Improve your touch typing skills with this simple little game made specifically for programmers who have to type a lot more curly braces and angle brackets than the standard Mavis Beacon player.
  • VIM Adventures: Keep your fingers moving in this Zelda-like overhead scroller. You’ll need to learn VIM commands, motions and operators to control your character to victory.
  • Terminus: Learn to navigate your way around the Unix shell with common commands in this Zork-like text adventure.

Design Eye

  • What The Color?: How well do you understand the #hex, rgb() and hsl() color spaces used in CSS? This game from Lea Verou times how long it takes for you to guess a particular color. Hint: HSL is much easier to navigate once you get used to it!
  • RGB Challenge: Basically What The Color in reverse. You are given an RGB value and three colors to choose from. Selecting the proper color is more difficult than it should be. My high score is 7.
  • What px?: Very similar to What The Color, What px? challenges you with CSS length units by guessing the width of various elements.
  • Pixact.ly: Another CSS lengths game, Pixact.ly asks you to draw boxes of various sizes.
  • The Bezier Game: Learn how to use the Pen tool, common in SVG/vector editing software, in this advanced connect-the-dots style game.
  • Hex Invaders: (Added June 11, 15) A “more fun” version of RGB Challenge with gameplay mechanics similar to Space Invaders.

Web Development

  • HTML5 Element Quiz: Quite possibly my favorite of the games, this quiz challenges you to name as many HTML5 elements as you can in five minutes. As a lifelong web dev, I was surprised how many I wasn’t able to remember – but even better, I learned about many new ones I’ve never used before.
  • CSS Diner: I’m also a big fan of this game. CSS Diner teaches you CSS3 query selector syntax by graphically having you select items from a bento box that have been arranged on a dining room table.
  • XSS: Learn to think like a hacker with this security game from Google. In each level you are presented with an increasingly “secure” web page, as well as tips to figure out how to exploit the page with some cross site scripting. This game is really eye opening!

Languages

  • CodeCombat: A very complete video game, with music, sound effects, animations and graphical stylings, CodeCombat will teach you Python, JavaScript, CoffeeScript, Clojure or Lua by getting you to write simple little programs that lead your character through each level. Think of it as Logo with modern day bells and whistles. This one seems perfect for the kids in your life.
  • Regex Golf: Presented with both a whitelist and a blacklist of strings, can you write an expression that matches all of the whitelist, and none of the blacklist? Each level teaches you a little bit more about Regex operators.
  • Learn Some SQL: Created by a few co-workers of mine at Red Gate, Learn Some SQL shows you a table of data and you have to write the SQL statement that would select the same dataset.

Miscellaneous

  • Learn Git Branching: Very similar in style to Learn Some SQL, this game shows you a diagram of a Git repository, and you have to branch, merge, rebase and commit your repository into the matching shape. I’ve shown this game to people learning Git and it really helped them.

I hope you enjoy one or two of these games as well as your holiday time. Post your high scores in the comments below – along with any games that are missing from the list.

Happy Holidays!

PerfMatters.Flush now CourtesyFlush


It’s been nearly a month since I released PerfMatters.Flush on NuGet. The library has been getting people talking about performance and thinking about how to improve their web applications.

Unfortunately for me, I regretted releasing the library almost instantly, or at least as soon as Mike O’Brien suggested a much better name for it on Twitter.

For the uninitiated, a courtesy flush refers to the first, early flush you do in the restroom to avoid stinking the place up.


My library is basically the same thing, except in a much more hygienic environment. Flushing your HTTP response early also provides everyone a benefit.

All that to say I’ve decided to rename PerfMatters.Flush to CourtesyFlush. I’ve “redirected” the old package to use the new one, so you don’t need to worry if you were using the old library. In addition, I’ve also added support for .NET 4.0 in this release.

PerfMatters.Flush Goes 1.0!


In my previous two performance related posts I’ve gone on and on about the benefits of flushing an HTTP response early and how to do it in ASP.NET MVC. If you haven’t read those yet, I recommend you take a quick moment to at least read Flushing in ASP.NET MVC, and if you have a little extra time go through More Flushing in ASP.NET MVC as well.

I think those posts did a decent job of explaining why you’d want to flush early. In this post I’m going to dig into the details of how to flush early using my library, PerfMatters.Flush.

Three Usage Patterns

The core of what you need to know to use PerfMatters.Flush is that I’ve tried to make it easy to use by providing a few different usage models. Pick the one that works best in your scenario, and feel free to mix and match across your application.

1. Attribute Based

The easiest way to use PerfMatters.Flush is via the [FlushHead] action filter attribute, like this:

[FlushHead(Title = "Index")]
public ActionResult Index()
{
      // Do expensive work here
      return View();
}

The attribute can be used alone for HTML documents with a static <head> section. Optionally, a Title property can be set for specifying a dynamic <title> element, which is very common.

2. Code Based

For more complex scenarios, extension methods are provided which allow you to set ViewData or pass along a view model:

public ActionResult About()
{
     ViewBag.Title = "Dynamic title generated at " + DateTime.Now.ToLocalTime();
     ViewBag.Description = "A meta tag description";
     ViewBag.DnsPrefetchDomain = ConfigurationManager.AppSettings["cdnDomain"];

     this.FlushHead();

     // Do expensive work here
     ViewBag.Message = "Your application description page.";
     return View();
}

As you can see, this mechanism allows for very dynamic <head> sections. In this example you could imagine a <title> element, <meta name="description" content="…"> attribute (for SEO purposes) and <link rel="dns-prefetch" href="…"> (for performance optimization) all being set.

3. Global Lambda

Finally, PerfMatters.Flush offers a model to flush early across all your application’s action methods – which simply leverages the same global action filters that have been in ASP.NET MVC for years now:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
     filters.Add(new HandleErrorAttribute());
     filters.Add(new FlushHeadAttribute(actionDescriptor =>
         new ViewDataDictionary<CustomObject>(new CustomObject())
         {
             {"Title", "Global"},
             {"Description", "This is the meta description."}
         }));
}

In this case we pass a Func<ActionDescriptor, ViewDataDictionary> to the FlushHeadAttribute constructor. That function is executed for each action method. This example is pretty contrived since the result is deterministic, but you can see both a custom model (CustomObject) and ViewData in use at the same time.

In real world usage the actionDescriptor parameter would be analyzed and leveraged to get data from some data store (hopefully in memory!) or from an IoC container.

Installation & Setup

Getting up and running with PerfMatters.Flush is as easy as installing the NuGet package.

From there, you’ll want to move everything you’d like to flush out of _Layout.cshtml to a new file called _Head.cshtml (which sits in the same directory). Here’s an example of _Head.cshtml:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     @if (ViewBag.Description != null)
     {
         <meta name="description" content="@ViewBag.Description">
     }
     <title>@ViewBag.Title - My ASP.NET Application</title>
     @Styles.Render("~/Content/css")
     @Scripts.Render("~/bundles/modernizr")
</head>

Here’s its corresponding _Layout.cshtml file:

@Html.FlushHead()
<body>
     <!-- Lots of lovely HTML -->
     <div class="container body-content">
         @RenderBody()
         <hr />
         <footer>
             <p>&copy; @DateTime.Now.Year - My Application</p>
         </footer>
     </div>
     @Scripts.Render("~/bundles/jquery")
     @Scripts.Render("~/bundles/bootstrap")
     @RenderSection("scripts", required: false)
</body>
</html>

Notice the @Html.FlushHead() method on the first line? That’s the magic that stitches this all together. It allows you to use MVC the way you’re used to on action methods that you don’t want to flush early, and opt in (via one of the usage models described above) when you do.

Wrapping Up

PerfMatters.Flush has been fun for me to work on. It’s currently being used in production on getGlimpse.com and has nicely boosted perceived performance on the pages that do any database queries.

To be honest, I wish that PerfMatters.Flush didn’t have to exist. I’ve asked the ASP.NET team to look into baking first class support for flushing early into the framework, but I don’t foresee that happening for quite a while. Either way, there are tons of applications built in MVC 3, 4 and 5 that can leverage this now.

The project is open source, and the code is hosted on GitHub. I’d love to hear your feedback on ways to make it better and easier to use.

Conference Session Videos Online

The past few weeks I was honored to be accepted at two European developer conferences: Techorama in Belgium and NDC in Norway. Both conferences were amazing, and I’m really hoping the organizers have me back again next year.

Over the span of both conferences I gave four presentations. I received lots of positive feedback about them, which I was really happy about. Most of them were recorded, and their videos are now available online. Here are their titles, abstracts, and links to slides, demo code and videos:


Introducing Nonacat (Guerilla Hacking an Extra Arm onto GitHub) (Techorama)

GitHub, as instrumental as it is, knows that they cannot possibly offer a one-size-fits-all service that meets the needs of every OSS project. With that in mind, come join Nik Molnar, co-founder of Glimpse, for a session on how to extend GitHub by leveraging their APIs, cutting edge web technologies and free/open source tools to provide users, contributors and project maintainers with a better overall experience.
This session is not about Git itself and is suitable for OSS project maintainers and all users of GitHub.

Techorama did not record their sessions, but there is a slightly outdated recording of this session online from MonkeySpace last year.


Azure Web Sites Secrets, Exposed! (NDC)

Microsoft’s premier cloud solution for custom web applications, Windows Azure Web Sites, has brought the DevOps movement to millions of developers and revolutionized the way that servers are provisioned and applications deployed.
Included with all the headline functionality are many smaller, less-known or undocumented features that serve to greatly improve developer productivity. Join Microsoft MVP and veteran web developer Nik Molnar for a whirlwind tour of these secret features and enhance your cloud development experience.
This beginner session is suitable for developers both using and curious about WAWS.

Watch the session on Vimeo.


Full Stack Web Performance (Both)

Modern users expect more than ever from web applications. Unfortunately, they are also consuming applications more frequently from low bandwidth and low power devices – which strains developers not only to nail the user experience, but also the application’s performance.
Join Nik Molnar, co-founder of the open source debugging and diagnostics tool Glimpse, for an example-driven look at strategies and techniques for improving the performance of your web application all the way from the browser to the server.
We’ll cover how to use client and server side profiling tools to pinpoint opportunities for improvement, solutions to the most common performance problems, and some suggestions for getting ahead of the curve and actually surpassing users’ expectations.
This session covers a wide array of topics, most of which would be classified within the 200 level.

Watch the session on Vimeo.

If you have any thoughts or feedback on any of the sessions – please leave a comment! I’m always trying to make my sessions better.

More HTTP Flushing in ASP.NET MVC


My last post about HTTP flushing in ASP.NET MVC generated quite a bit of buzz and several really good questions. If you haven’t yet, I recommend you read that post before continuing on here. With this post I’ll answer some of the questions that arose about flushing early and offer a few handy tips.

When to Flush?

Steve Souders does a great job of explaining this technique in his book, Even Faster Web Sites, and on his blog, but there still seems to be some confusion about flushing. For example, Khalid Abuhakmeh mentioned that it “seems arbitrary where you would place the flush” and asked when, exactly, a response should be flushed.


While one could certainly flush lots of small chunks of content to the client, and there may be circumstances that call for doing just that, this recommendation is specifically aimed at (1) improving the user’s perceived performance and (2) giving the browser’s parser and network stack a head start to download the assets required to render a page.

Given that, the more content that can be flushed early, before any expensive server side processing (e.g. database queries or service calls), the better. Typically this means flushing just after the </head> tag, or just before the first @RenderSection()/@RenderBody() Razor directive.

Here’s an example of the difference that flushing makes on a waterfall chart displaying network activity:

[Waterfall charts: before and after flushing]

Notice that in the “Before” image, the two style sheets and one JavaScript file that are referenced in the <head> section of the page aren’t downloaded until the original HTML file has finished downloading itself. When flushing the <head> section, however, the browser is able to start downloading these secondary resources and rendering any available HTML immediately – providing the user with visual feedback even sooner.

What’s more, if the style sheets contain rules that reference tertiary resources (e.g. background images, sprites or fonts) and the flushed HTML matches one of those rules, the tertiary resources will be downloaded early as well. Ilya Grigorik, a developer advocate who works at Google on the Chrome web performance team, recently wrote a post about font performance with the tip to optimize the critical rendering path – which flushing directly helps with.

So basically, it’s best to flush as much of the beginning of an HTML response as you can to improve perceived performance and give the browser a head start on downloading not only secondary assets, but often times tertiary ones as well.

How can I See the Improvement?

The effects of flushing early are most easily seen on a waterfall chart. (New to waterfall charts? Check out Tammy Everts’ excellent Waterfalls 101 post.) But how does that correlate to the user’s experience? That’s where Speed Index comes in. Speed Index is currently the best metric we have to measure the perceived performance of a page load. WebPageTest.org measures Speed Index for any given page by doing a series of screen captures of the loading process, analyzing the images over time, and producing the metric.


Screen captures of a loading page

Speed Index comparison of two loads

The WebPageTest documentation covers the algorithm for calculating the metric in depth, and offers suggested targets based on the Alexa Top 300K. The lower a site’s Speed Index, the better.

Personally I’ve never seen flushing early increase a page’s Speed Index. In general, I can’t think of a way that it would hurt performance, but you may not need to use the technique if you don’t have anything in the head to download or any server side processing. As always, your mileage may vary and you should test the results on your own site.

What About Compression?

You should still compress your responses, flushing doesn’t change that recommendation! Compression affects the content that is sent to the client (and adds a Content-Encoding: gzip header).

Flushing, on the other hand, affects how the content is sent to the client (and adds a Transfer-Encoding: chunked header.)

HTTP response with compression and flushing in Fiddler

The two options are completely compatible with each other. Souders reported some configuration issues with Apache flushing small chunks of content – but, based on my testing, IIS doesn’t seem to have these problems. Using the default configuration, IIS compresses each chunk of flushed content and sends it to the client immediately.

How do I Debug?

Flushed HTTP responses are really no different than any other HTTP response. That means the in-browser F12 development tools and HTTP proxies like Fiddler work as expected.

One caveat worth mentioning though is that Fiddler, by default, buffers text/html responses, which means that when Fiddler is running the benefits of flushing early won’t be observable in the browser. It’s easy enough to fix this though: simply click the “Stream” icon in the Fiddler toolbar, as covered in the Fiddler documentation.

In Stream mode Fiddler immediately releases response chunks to the client

Are There Any Other Considerations?

Since flushing early is done in the name of giving the browser a head start, be sure to provide the browser’s parser everything it needs to efficiently read the HTML document as soon as possible.

Eric Lawrence, ex-product manager for Internet Explorer, has a post detailing the best practices to reduce the amount of work that the parser has to do to understand a web page. Essentially, begin HTML documents like this and the parser will not have to wait or backtrack:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8">
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <base /><!-- Optional -->
     <title>...</title>
     <!-- Everything else -->

What’s the Easiest Way to do This?

I’ve created a simple open source project called PerfMatters.Flush that attempts to make flushing early in ASP.NET MVC as simple as possible. It’s still very alpha-y, but it’s already live on the getGlimpse.com homepage. You can play with the bits now, or wait for my follow up post that details how to use it.

Flushing in ASP.NET MVC


I’ve written a follow up to this post that answers many of the common questions about flushing early. Be sure to check it out.

The Setting

Before the beginning of this decade, Steve Souders released two seminal books on the topic of web performance: High Performance Web Sites and Even Faster Web Sites. The findings and subsequent suggestions that came out of those books changed the face of web development and have been codified into several performance analysis tools including Yahoo YSlow and Google PageSpeed.


Most professional web developers that I’ve met over the past five years are familiar with Souders’ recommendations and how to implement them in ASP.NET MVC. To be fair, they aren’t that difficult:

  • HTTP Caching and Content Compression can both be enabled simply via a few settings in web.config.
  • Layout pages make it easy to put stylesheets at the top of a page and scripts at the bottom in a consistent manner.
  • The Microsoft.AspNet.Web.Optimization NuGet package simplifies the ability to combine and minify assets.
  • And so on, and so forth…

The recommendation to “Flush the Buffer Early” (covered in depth in chapter 12 of Even Faster Web Sites), however, is not so easy to implement in ASP.NET MVC. Here’s an explanation of the recommendation from Steve’s 2009 blog post:

Flushing is when the server sends the initial part of the HTML document to the client before the entire response is ready. All major browsers start parsing the partial response. When done correctly, flushing results in a page that loads and feels faster. The key is choosing the right point at which to flush the partial HTML document response. The flush should occur before the expensive parts of the back end work, such as database queries and web service calls. But the flush should occur after the initial response has enough content to keep the browser busy. The part of the HTML document that is flushed should contain some resources as well as some visible content. If resources (e.g., stylesheets, external scripts, and images) are included, the browser gets an early start on its download work. If some visible content is included, the user receives feedback sooner that the page is loading.

A few months ago Steve revisited this guidance and provided examples of the difference that flushing can make to an application’s performance. His post inspired me to try the same on an ASP.NET MVC project that I’m working on, which led to this post.

The Conflict

Since .NET 1.1, ASP.NET has provided a mechanism to flush a response stream to the client with a simple call to HttpResponse.Flush(). This works quite well when you are incrementally building up a response, but the architecture of MVC, with its use of the command pattern, doesn’t really allow for this. (At least not in a clean manner.) Adding a call to Flush() inside a view doesn’t do much good.

@{
     Response.Flush();
}

This is because MVC doesn’t execute the code in the view until all the other work in the controller has completed – essentially the opposite of what the czar of web performance recommends.

The Climax

Because I’m a believer that #PerfMatters, I decided to take matters into my own hands to see if I could do anything better.

First, I realized that I could get around a few issues by leveraging partial results, manually executing and flushing them, like so:

public ActionResult FlushDemo()
{
       PartialView("Head").ExecuteResult(ControllerContext);
       Response.Flush();

       // do other work
       return PartialView("RemainderOfPage");
}

I think that looks pretty ugly, so I’ve taken things a step further and removed the cruft around executing the result by creating a base controller with its own Flush() method:

public class BaseController : Controller
{
     public void Flush(ActionResult result)
     {
         result.ExecuteResult(ControllerContext);
         Response.Flush();
     }
}

I think my new Flush() method clarifies the intent in the action method:

public ActionResult FlushDemo()
{
     Flush(PartialView("Head"));

     // do other work
     return PartialView("RemainderOfPage");
}

What I’d really like to be able to do is leverage the yield keyword. Yield seems like the natural language and syntax to express this. I was able to cobble together this example:

public IEnumerable<ActionResult> Test()
{
      yield return PartialView("Header");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Lorem");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Vestibulum");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Sed");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Footer");
}

I got that working with a pretty ugly hack, but leveraging the yield keyword and IEnumerable<ActionResult> like this should theoretically be possible by making a few changes to MVC’s default action invoker, no hack necessary. Unfortunately, C# throws a few curve balls at this since you can’t combine the usage of yield with async/await – which I think would be a pretty common usage scenario.

The Resolution?

It looks like splitting up a layout file into multiple partial views and using my Flush() helper method is the best that we can do right now.

Unfortunately,

  • Yielding multiple views isn’t currently supported by MVC, and even if it were, it would be incompatible with the current C# compiler.
  • Partial views are the best we can get to break up a response into multiple segments – which is painful. It’d be nice if I could ask Razor to render just a section of a layout page, which would reduce the need for partial views.

I’m hoping that the ASP.NET team, or my readers, can come up with a better way to handle flushing in the future, but I wanted to at least walk you through what your options are today.

For those interested, I might pull together some of this thinking into a NuGet package that leverages action filter attributes to simplify the syntax even further. If you like the sound of that, encourage me on Twitter: @nikmd23
