nik codes

PerfMatters.Flush now CourtesyFlush

It’s been nearly a month since I released PerfMatters.Flush on NuGet. The library has gotten people talking about performance and thinking about how to improve their web applications.

Unfortunately for me, I regretted releasing the library almost instantly, or at least as soon as Mike O’Brien suggested a much better name for it on Twitter:

For the uninitiated, a courtesy flush refers to the first, early flush you do in the restroom to avoid stinking the place up.


My library is basically the same thing, except in a much more hygienic environment. Flushing your HTTP response early also benefits everyone.

All that to say I’ve decided to rename PerfMatters.Flush to CourtesyFlush. I’ve “redirected” the old package to use the new one, so you don’t need to worry if you were using the old library. I’ve also added support for .NET 4.0 in this release.

PerfMatters.Flush Goes 1.0!

In my previous two performance-related posts I’ve gone on and on about the benefits of flushing an HTTP response early and how to do it in ASP.NET MVC. If you haven’t read those yet, I recommend you take a quick moment to at least read Flushing in ASP.NET MVC, and if you have a little extra time go through More HTTP Flushing in ASP.NET MVC as well.

I think those posts did a decent job of explaining why you’d want to flush early. In this post I’m going to dig into the details of how to flush early using my library, PerfMatters.Flush.

Three Usage Patterns

The core of what you need to know is that I’ve tried to make PerfMatters.Flush easy to use by providing a few different usage models. Pick the one that works best in your scenario, and feel free to mix and match across your application.

1. Attribute Based

The easiest way to use PerfMatters.Flush is via the [FlushHead] action filter attribute, like this:

[FlushHead(Title = "Index")]
public ActionResult Index()
{
      // Do expensive work here
      return View();
}

The attribute can be used alone for HTML documents with a static <head> section. Optionally, a Title property can be set for specifying a dynamic <title> element, which is very common.
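For example, here’s a minimal sketch for an action whose document has a completely static <head> (the action name is hypothetical):

[FlushHead]
public ActionResult Contact()
{
      // Do expensive work here
      return View();
}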

2. Code Based

For more complex scenarios, extension methods are provided which allow you to set ViewData or pass along a view model:

public ActionResult About()
{
     ViewBag.Title = "Dynamic title generated at " + DateTime.Now.ToLocalTime();
     ViewBag.Description = "A meta tag description";
     ViewBag.DnsPrefetchDomain = ConfigurationManager.AppSettings["cdnDomain"];

     this.FlushHead();

     // Do expensive work here
     ViewBag.Message = "Your application description page.";
     return View();
}

As you can see, this mechanism allows for very dynamic <head> sections. In this example you could imagine a <title> element, a <meta name="description" content="…"> element (for SEO purposes) and a <link rel="dns-prefetch" href="…"> element (for performance optimization) all being set.

3. Global Lambda

Finally, PerfMatters.Flush offers a model to flush early across all your application’s action methods - which simply leverages the same global action filters that have been in ASP.NET MVC for years now:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
     filters.Add(new HandleErrorAttribute());
     filters.Add(new FlushHeadAttribute(actionDescriptor =>
         new ViewDataDictionary<CustomObject>(new CustomObject())
         {
             {"Title", "Global"},
             {"Description", "This is the meta description."}
         }));
}

In this case we pass a Func&lt;ActionDescriptor, ViewDataDictionary&gt; to the FlushHeadAttribute constructor. That function is executed for each action method. This example is pretty contrived since the result is deterministic, but you can see both a custom model (CustomObject) and ViewData in use at the same time.

In real-world usage the actionDescriptor parameter would be analyzed and leveraged to get data from some data store (hopefully in memory!) or from an IoC container.
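As a rough sketch of what that might look like (the titleLookup dictionary below is hypothetical; imagine it loaded from a database at startup):

filters.Add(new FlushHeadAttribute(actionDescriptor =>
{
     // Use the action's metadata to look up per-page head data
     var title = titleLookup[actionDescriptor.ControllerDescriptor.ControllerName];

     return new ViewDataDictionary
     {
         {"Title", title}
     };
}));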

Installation & Setup

Getting up and running with PerfMatters.Flush is as easy as installing the NuGet package.
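From the Package Manager Console, installation is a one-liner:

Install-Package PerfMatters.Flush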

From there, you’ll want to move everything you’d like to flush out of _Layout.cshtml to a new file called _Head.cshtml (which sits in the same directory). Here’s an example of _Head.cshtml:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     @if (ViewBag.Description != null)
     {
         <meta name="description" content="@ViewBag.Description">
     }
     <title>@ViewBag.Title - My ASP.NET Application</title>
     @Styles.Render("~/Content/css")
     @Scripts.Render("~/bundles/modernizr")
</head>

Here’s its corresponding _Layout.cshtml file:

@Html.FlushHead()
<body>
     <!-- Lots of lovely HTML -->
     <div class="container body-content">
         @RenderBody()
         <hr />
         <footer>
             <p>&copy; @DateTime.Now.Year - My Application</p>
         </footer>
     </div>
     @Scripts.Render("~/bundles/jquery")
     @Scripts.Render("~/bundles/bootstrap")
     @RenderSection("scripts", required: false)
</body>
</html>

Notice the @Html.FlushHead() method on the first line? That’s the magic that stitches this all together. It allows you to use MVC the way you’re used to on action methods that you don’t want to flush early, and opt in (via one of the usage models described above) when you do.

Wrapping Up

PerfMatters.Flush has been fun for me to work on. It’s currently being used in production on getGlimpse.com and has nicely boosted perceived performance on the pages that do any database queries.

To be honest, I wish that PerfMatters.Flush didn’t have to exist. I’ve asked the ASP.NET team to look into baking first-class support for flushing early into the framework, but I don’t foresee that happening for quite a while. Either way, there are tons of applications built in MVC 3, 4 and 5 that can leverage this now.

The project is open source, and the code is hosted on GitHub. I’d love to hear your feedback on ways to make it better and easier to use.

Conference Session Videos Online

Over the past few weeks I was honored to be accepted at two European developer conferences: Techorama in Belgium and NDC in Norway. Both conferences were amazing, and I’m really hoping the organizers have me back again next year.

Over the span of both conferences I gave four presentations. I received lots of positive feedback about them, which I was really happy about. Most of them were recorded, and their videos are now available online. Here are their titles, abstracts, and links to slides, demo code and videos:


Introducing Nonacat (Guerilla Hacking an Extra Arm onto GitHub) (Techorama)

GitHub, as instrumental as it is, knows that it cannot possibly offer a one-size-fits-all service that meets the needs of every OSS project. With that in mind, come join Nik Molnar, co-founder of Glimpse, for a session on how to extend GitHub by leveraging its APIs, cutting edge web technologies and free/open source tools to provide users, contributors and project maintainers with a better overall experience.
This session is not about Git itself and is suitable for OSS project maintainers and all users of GitHub.

Techorama did not record their sessions, but there is a slightly outdated recording of this session online from MonkeySpace last year.


Azure Web Sites Secrets, Exposed! (NDC)

Microsoft’s premier cloud solution for custom web applications, Windows Azure Web Sites, has brought the DevOps movement to millions of developers and revolutionized the way that servers are provisioned and applications deployed.
Included with all the headline functionality are many smaller, less-known or undocumented features that serve to greatly improve developer productivity. Join Microsoft MVP and veteran web developer Nik Molnar for a whirlwind tour of these secret features and enhance your cloud development experience.
This beginner session is suitable for developers both using and curious about WAWS.

Watch the session on Vimeo.


Full Stack Web Performance (Both)

Modern users expect more than ever from web applications. Unfortunately, they are also consuming applications more frequently from low bandwidth and low power devices – which strains developers not only to nail the user experience, but also the application’s performance.
Join Nik Molnar, co-founder of the open source debugging and diagnostics tool Glimpse, for an example-driven look at strategies and techniques for improving the performance of your web application all the way from the browser to the server.
We’ll cover how to use client and server side profiling tools to pinpoint opportunities for improvement, solutions to the most common performance problems, and some suggestions for getting ahead of the curve and actually surpassing users’ expectations.
This session covers a wide array of topics, most of which would be classified within the 200 level.

Watch the session on Vimeo.

If you have any thoughts or feedback on any of the sessions – please leave a comment! I’m always trying to make my sessions better.

More HTTP Flushing in ASP.NET MVC

My last post about HTTP flushing in ASP.NET MVC generated quite a bit of buzz and several really good questions. If you haven’t yet, I recommend you read that post before continuing on here. With this post I’ll answer some of the questions that arose about flushing early and offer a few handy tips.

When to Flush?

Steve Souders does a great job of explaining this technique in his book, Even Faster Web Sites, and on his blog, but there still seems to be some confusion about flushing. For example, Khalid Abuhakmeh mentioned that it “seems arbitrary where you would place the flush” and asked, in effect, why not flush many smaller chunks of content throughout the response.

While one could certainly flush lots of small chunks of content to the client, and there may be circumstances that call for doing just that, this recommendation is specifically aimed at (1) improving the user’s perceived performance and (2) giving the browser’s parser and network stack a head start to download the assets required to render a page.

Given that, the more content that can be flushed early, before any expensive server side processing (e.g. database queries or service calls), the better. Typically this means flushing just after the </head> tag, or just before the first @RenderSection()/@RenderBody() Razor directive.

Here’s an example of the difference that flushing makes on a waterfall chart displaying network activity:

[Waterfall charts: network activity before and after flushing]

Notice that in the “Before” image, the two style sheets and one JavaScript file that are referenced in the <head> section of the page aren’t downloaded until the original HTML file has finished downloading. When flushing the <head> section, however, the browser is able to start downloading these secondary resources and rendering any available HTML immediately – providing the user with visual feedback even sooner.

What’s more, if the style sheets contain rules that reference tertiary resources (e.g. background images, sprites or fonts) and the flushed HTML matches one of those rules, the tertiary resources will be downloaded early as well. Ilya Grigorik, a developer advocate who works at Google on the Chrome web performance team, recently wrote a post about font performance with the tip to optimize the critical rendering path – which flushing directly helps with.

So basically, it’s best to flush as much of the beginning of an HTML response as you can to improve perceived performance and give the browser a head start on downloading not only secondary assets, but oftentimes tertiary ones as well.

How can I See the Improvement?

The effects of flushing early are most easily seen on a waterfall chart. (New to waterfall charts? Check out Tammy Everts’ excellent Waterfalls 101 post.) But how does that correlate to the user’s experience? That’s where Speed Index comes in. Speed Index is currently the best metric we have to measure the perceived performance of a page load. WebPageTest.org measures Speed Index for any given page by doing a series of screen captures of the loading process, analyzing the images over time, and producing the metric.


[Screen captures of a loading page]

[Speed Index comparison of two loads]

The WebPageTest documentation covers the algorithm for calculating the metric in depth, and offers suggested targets based on the Alexa Top 300K. The lower a site’s Speed Index, the better.
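In rough terms (per the WebPageTest documentation), the calculation integrates visual incompleteness over the duration of the load:

Speed Index = ∫ (1 − VC(t)/100) dt, from t = 0 to t = end

where VC(t) is the percent visually complete at time t. The faster a page paints the majority of its content, the lower its Speed Index.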

Personally I’ve never seen flushing early increase a page’s Speed Index. In general, I can’t think of a way that it would hurt performance, but you may not need to use the technique if you don’t have anything in the head to download or any server side processing. As always, your mileage may vary and you should test the results on your own site.

What About Compression?

You should still compress your responses; flushing doesn’t change that recommendation! Compression affects what content is sent to the client (and adds a Content-Encoding: gzip header).

Flushing, on the other hand, affects how the content is sent to the client (and adds a Transfer-Encoding: chunked header).

HTTP response with compression and flushing, as seen in Fiddler

The two options are completely compatible with each other. Souders reported some configuration issues with Apache flushing small chunks of content – but, based on my testing,  IIS doesn’t seem to have these problems. Using the default configuration, IIS compresses each chunk of flushed content and sends it to the client immediately.

How do I Debug?

Flushed HTTP responses are really no different than any other HTTP response. That means the in-browser F12 development tools and HTTP proxies like Fiddler work as expected.

One caveat worth mentioning is that Fiddler, by default, buffers text/html responses, which means that when Fiddler is running the benefits of flushing early won’t be observable in the browser. It’s easy enough to fix though: simply click the “Stream” icon in the Fiddler toolbar, as covered in the Fiddler documentation.

In Stream mode Fiddler immediately releases response chunks to the client

Are There Any Other Considerations?

Since flushing early is done in the name of giving the browser a head start, be sure to provide the browser’s parser with everything it needs to efficiently read the HTML document as soon as possible.

Eric Lawrence, ex-product manager for Internet Explorer, has a post detailing the best practices to reduce the amount of work that the parser has to do to understand a web page. Essentially, begin HTML documents like this and the parser will not have to wait or backtrack:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8">
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <base /><!-- Optional -->
     <title>...</title>
     <!-- Everything else -->

What’s the Easiest Way to do This?

I’ve created a simple open source project called PerfMatters.Flush that attempts to make flushing early in ASP.NET MVC as simple as possible. It’s still very alpha-y, but it’s already live on the getGlimpse.com homepage. You can play with the bits now, or wait for my follow up post that details how to use it.

Flushing in ASP.NET MVC

I’ve written a follow up to this post that answers many of the common questions about flushing early. Be sure to check it out.

The Setting

Before the beginning of this decade, Steve Souders released two seminal books on the topic of web performance: High Performance Web Sites and Even Faster Web Sites. The findings and subsequent suggestions that came out of those books changed the face of web development and have been codified into several performance analysis tools including Yahoo YSlow and Google PageSpeed.


Most professional web developers that I’ve met over the past five years are familiar with Souders’ recommendations and how to implement them in ASP.NET MVC. To be fair, they aren’t that difficult:

  • HTTP Caching and Content Compression can both be enabled simply via a few settings in web.config.
  • Layout pages make it easy to put stylesheets at the top of a page and scripts at the bottom in a consistent manner.
  • The Microsoft.AspNet.Web.Optimization NuGet package simplifies the ability to combine and minify assets.
  • And so on, and so forth…

The recommendation to “Flush the Buffer Early” (covered in depth in chapter 12 of Even Faster Web Sites), however, is not so easy to implement in ASP.NET MVC. Here’s an explanation of the recommendation from Steve’s 2009 blog post:

Flushing is when the server sends the initial part of the HTML document to the client before the entire response is ready. All major browsers start parsing the partial response. When done correctly, flushing results in a page that loads and feels faster. The key is choosing the right point at which to flush the partial HTML document response. The flush should occur before the expensive parts of the back end work, such as database queries and web service calls. But the flush should occur after the initial response has enough content to keep the browser busy. The part of the HTML document that is flushed should contain some resources as well as some visible content. If resources (e.g., stylesheets, external scripts, and images) are included, the browser gets an early start on its download work. If some visible content is included, the user receives feedback sooner that the page is loading.

A few months ago Steve revisited this guidance and provided examples of the difference that flushing can make to an application’s performance. His post inspired me to try the same on an ASP.NET MVC project that I’m working on, which led to this post.

The Conflict

Since .NET 1.1, ASP.NET has provided a mechanism to flush a response stream to the client with a simple call to HttpResponse.Flush(). This works quite well when you are incrementally building up a response, but the architecture of MVC, with its use of the command pattern, doesn’t really allow for this. (At least not in a clean manner.) Adding a call to Flush() inside a view doesn’t do much good.

@{
     Response.Flush();
}

This is because MVC doesn’t execute the code in the view until all the other work in the controller has completed – essentially the opposite of what the czar of web performance recommends.

The Climax

Because I’m a believer that #PerfMatters, I decided to take matters into my own hands to see if I could do anything better.

First, I realized that I could get around a few issues by leveraging partial results, manually executing and flushing them, like so:

public ActionResult FlushDemo()
{
       PartialView("Head").ExecuteResult(ControllerContext);
       Response.Flush();

       // do other work
       return PartialView("RemainderOfPage");
}

I think that looks pretty ugly, so I’ve taken things a step further and removed the cruft around executing the result by creating a base controller with its own Flush() method:

public class BaseController : Controller
{
     public void Flush(ActionResult result)
     {
         result.ExecuteResult(ControllerContext);
         Response.Flush();
     }
}

I think my new Flush() method clarifies the intent in the action method:

public ActionResult FlushDemo()
{
     Flush(PartialView("Head"));

     // do other work
     return PartialView("RemainderOfPage");
}

What I’d really like to be able to do is leverage the yield keyword. Yield seems like the natural language and syntax to express this. I was able to cobble together this example:

public IEnumerable<ActionResult> Test()
{
      yield return PartialView("Header");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Lorem");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Vestibulum");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Sed");
      Thread.Sleep(2000); // or other work

      yield return PartialView("Footer");
}

I got that working with a pretty ugly hack, but leveraging the yield keyword and IEnumerable<ActionResult> like this should theoretically be possible by making a few changes to MVC’s default action invoker, no hack necessary.  Unfortunately, C# throws a few curve balls at this since you can’t combine the usage of yield with async/await – which I think would be a pretty common usage scenario.
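To make that concrete, here’s a rough, hypothetical sketch of what the heart of such a modified action invoker would do with the action’s return value:

// Hypothetical: inside a custom action invoker, after invoking the action method
var results = (IEnumerable<ActionResult>)actionReturnValue;

foreach (var result in results)
{
     result.ExecuteResult(controllerContext);        // render this segment
     controllerContext.HttpContext.Response.Flush(); // push it to the client immediately
}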

The Resolution?

It looks like splitting up a layout file into multiple partial views and using my Flush() helper method is the best that we can do right now.

Unfortunately,

  • Yielding multiple views isn’t currently supported by MVC, and even if it were, it would be incompatible with the current C# compiler.
  • Partial views are the best option we have for breaking a response into multiple segments – which is painful. It’d be nice if I could ask Razor to render just a section of a layout page, which would reduce the need for partial views.

I’m hoping that the ASP.NET team, or my readers, can come up with a better way to handle flushing in the future, but I wanted to at least walk you through what your options are today.

For those interested, I might pull together some of this thinking into a NuGet package that leverages action filter attributes to simplify the syntax even further. If you like the sound of that, encourage me on Twitter: @nikmd23

Packaging Source Code With NuGet

.NET developers depend on binaries a lot. Assemblies and executables are our typical unit of deployment and consumption, and NuGet itself is basically a sophisticated binary distribution system.

I don’t think there is anything necessarily wrong with that. Assemblies offer a lot of power and have plenty of benefits, but a few times over the last year or so I’ve wondered if our assembly usage follows the law of the instrument.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail. - Abraham Maslow

Plenty of other development communities very happily exist in a world without assemblies or binaries. The RubyGems and npm repositories are absolutely filled to the brim with packages that contain no binary files whatsoever. What do they contain? Source code.

Recently I’ve been wanting to create a NuGet package that would contain common AutoFixture configuration for intrinsic ASP.NET types that I find myself repeatedly using. Unfortunately, every other user of the package would want to slightly tweak the configuration for their project, so I figured I’d simply ship the package with only the source code (no pre-compiled assembly) so that they could easily do that.

When I told Louis DeJardin about this idea earlier in the year at MonkeySpace, his eyes lit up and he got excited. It turns out he’s been thinking about source only packages quite a bit and has even released several OWIN packages as source only.

Louis told me about a few best practices and conventions that he has found useful when creating source only packages. I’ve noticed that only Damian Hickey has picked up on these, so with Louis’ permission, I’ve decided to publish them here for greater exposure:

NuGet Source-Only Package Conventions and Practices

  1. Package IDs should end in .Sources, which aligns well with and builds upon the .Sample suffix recommended in the NuGet Package Conventions documentation.
  2. In the .nupkg itself, source code files should be placed in a directory with the path content/App_Packages/{package-id}.{package-version}. Similar to how the .Sources suffix builds upon the prior art of .Sample, the App_Packages directory follows the App_Start folder nomenclature used by David Ebbo’s popular WebActivator package.

    Here’s an example of a package following this convention, as seen in NuGet Package Explorer:
    [Screenshot: App_Packages folder structure in NuGet Package Explorer]
    This convention also allows for a very convenient upgrade path which I’ll cover later on.

  3. Source-Only packages can depend on any other package, including other Source-Only packages.
  4. Source-Only packages may leverage NuGet’s Source Code Transformations (*.pp files) to inject project properties into the source. This is most often seen with the use of the $rootNamespace$ property, but any project property can be used (see the sketch just after this list).
  5. In some situations, it may be useful to also ship a standard “binary” package in addition to the Source-Only package.
  6. Types in Source-Only packages should be marked as internal by default in order to not pollute the target code’s public API.
  7. Consider using conditional compilation symbols or partial classes in Source-Only packages to provide flexibility and give users the ability to customize the source.

    Examples of this technique include allowing for multiple platform targeting options and changing the accessibility of types to public when desired. SimpleJson has a few good examples of this in their source code.
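To illustrate point 4, here’s a minimal sketch of what a source file in such a package might look like (the file name and namespace layout are hypothetical, though they mirror the Reverse.Sources example below). NuGet replaces the $rootNamespace$ token with the target project’s root namespace at install time:

// StringExtensions.cs.pp: ships in content/App_Packages/Reverse.Sources.1.0.0/
namespace $rootNamespace$.App_Packages
{
    // Internal by default, per convention #6
    internal static class StringExtensions
    {
        internal static string Reverse(this string value)
        {
            if (value == null) return value; // guard clause

            var chars = value.ToCharArray();
            System.Array.Reverse(chars);
            return new string(chars);
        }
    }
}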

When to Create Source-Only Packages

There are plenty of situations where a Source-Only package does not make sense. Here are a few things to consider:

  • DO consider creating a Source-Only package for “utility” libraries that feature heavy usage of static and/or extension methods. Examples of these types of utility libraries include unit test assertion libraries (see note below!) and the popular DataAnnotationsExtensions package.
  • DO consider creating a Source-Only package for small single purpose libraries. SimpleJson is already doing this (though not following these conventions) but you can imagine any code appropriate for a blog post or Gist would fit the definition well. (Aside: a Gist to Source-Only NuGet Package service would be pretty useful!)
  • DO consider creating a Source-Only package for common configuration and setup code or any code which will require tweaking by the user.
  • DO NOT consider creating a Source-Only package as a means to simply make step-debugging easier. Instead leverage a symbols package.

Source-Only Package Update Process

One of the nice things about assemblies is that the process of versioning them is well understood. How do we version and update source-only NuGet packages? Luckily, NuGet’s native handling of content works in our favor. Let’s explore with an example:

I’ve created a source-only package called Reverse.Sources that contains an extension method for reversing a string. Let’s install it:

Install-Package Reverse.Sources -Version 1.0.0


Great, now our project contains that StringExtensions.cs file with an implementation to reverse strings. It compiles right along with our application and is relatively out of our way.

Unfortunately, version 1.0.0 of my package had a bug and blows up if a null string is passed into the Reverse method. I’ve added a simple guard clause to fix the problem and released it in version 1.1.0. Let’s update:

Update-Package Reverse.Sources


Notice Solution Explorer looks nearly identical – Reverse.Sources.1.0.0 was simply replaced, along with all of its files, by Reverse.Sources.1.1.0. I’ve updated my project without any troubles and I have that nice bug fix now.

But what if we had made changes to StringExtensions.cs? NuGet would simply have left behind the files we’d edited.


We’d know that there was a problem too because the compiler would complain with a “The namespace 'Sources_Example' already contains a definition for 'StringExtensions'” error.

To fix that error we can use a text diff/merge tool to move the changes over and delete the old 1.0.0 folder.

To me this is a pretty clean upgrade process. Sure we could sometimes get into a situation where we have to deal with merging, but we also get the benefits of working directly with the code.

Are Source-Only Packages A Bad Idea?

Perhaps yes, perhaps no. I don’t think they are that crazy of an idea though. Large numbers of developers outside the .NET ecosystem already work almost exclusively with source only packages. Further, I’d propose that you do as well if you’ve ever included jQuery, Modernizr or any other JavaScript NuGet package in your project.

I for one probably wouldn't want all packages to be source-only (we certainly won’t be shipping Glimpse like this!), but there are situations where I think it could be extremely useful – particularly in scenarios involving extensibility/tweaking and reducing dependency overhead when I might have reached for ILMerge otherwise.

I’m hoping that this post can start a bit of a conversation about the idea of source only packages and increase the community’s comfort with them. I’m interested to hear the thoughts of the NuGet team and will be submitting these conventions to the NuGet documentation shortly.


Note: It looks like this is on Brad Wilson’s mind too!


Updated Oct 25th with feedback from Phil Haack, Louis DeJardin, Prabir Shrestha and Andrew Nurse. Thanks for the feedback guys!

Enjoy the Perks of Open Source Development

I’ve been working on Glimpse for a little over two and a half years now. Helping the community and collaborating with brilliant developers worldwide is really rewarding. It’s certainly the biggest benefit of working on open source software.

There are plenty of other smaller perks though. One of my favorites is the availability of all types of free, world class tooling and services that assist in the efforts of any given project. I use several of them myself.

When I think about all the sponsorships I’ve received, I start to feel a bit like this guy who is completely hooked up by companies who are willing to help him do what he does best:


I’m very thankful for all of the support I’ve received. Neither my projects nor I would be in the position we are in without the help. But here’s the secret: I’m not special. Licenses and freebies are available to most open source projects; you just have to know where to ask for them. Unfortunately, figuring that out can be intimidating, which is why I decided to launch ossPerks.com.


OSS Perks is a crowdsourced listing of all the products and services that are made freely available to open source projects. It started in public beta a few months ago with just a listing of the perks that I knew about. Since then the list has more than tripled via contributions from several different people, and a Twitter account has been created so that developers can follow along as perks are added to the list.

The site is open source in and of itself, so you can easily add a new perk that you’ve found. I’m leveraging GitHub Pages and Jekyll to maintain the list, so it’s very easy to contribute directly from your browser by just editing our YAML file. (YAML is a plain-text data format popular in the Ruby community; think of it as the offspring of INI file syntax and JSON.)

If you’d like to contribute but this all sounds a bit foreign to you, I’ve got a quick little tutorial right in the repository’s readme file that should be able to get you up and running.

So go take a look at the available perks and take full advantage of the support offered to you for your open source projects. Trust me, it will make you feel like a world class athlete (or race car driver)!

PS – Fellow Microsoft MVP Xavier Decoster has forked the code for OSSPerks and created MVPPerks, a similar listing of the perks available to MVPs!

Testing MVC View Names with [CallerMemberName]

We’ve all written ASP.NET MVC controller actions that simply return a view. They tend to look just like this:

public ActionResult Index()
{
    return View();
}

It’s also fairly common to test those actions to ensure that the proper view is being returned:

public void Should_Return_Index_View(HomeController sut)
{
    var actionResult = sut.Index();

    var viewResult = actionResult as ViewResult;

    Assert.Equal("Index", viewResult.ViewName);
}

If you’ve tried this before you know that the above test will fail. Why? Because viewResult.ViewName actually has a value of string.Empty, not Index. MVC doesn’t mind this, because if a view’s name is empty, it simply defaults to the value of {action} from the route data collection. That unfortunately makes it difficult to ensure the proper view is being returned in tests.
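In other words, the assertion that actually passes against this action is:

Assert.Equal(string.Empty, viewResult.ViewName); // MVC resolves the view from {action} later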

We could re-work our action method to explicitly specify the name of the view returned (return View("Index")), but I really like the simplicity of having a default convention for view names. Instead I’ve been leveraging C# 5.0’s new caller information attributes to retain the simplicity of the convention, but still have the view name explicitly set for testing purposes.

To do this I’ve added the following two methods to the base controller that all of my project’s controllers inherit from:

public new ActionResult View([CallerMemberName] string actionMethod = null)
{
    return base.View(actionMethod);
}

public ActionResult View(object model, [CallerMemberName] string actionMethod = null)
{
    return base.View(actionMethod, model);
}

Now, when I re-execute the test, it passes. That’s because the actionMethod parameter is automatically set by the compiler to be the name of the calling method, which in this case is Index. I can still pass in an overriding view name whenever I want, and the compiler will just get out of the way. Nifty!
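For example (the action name here is hypothetical), an explicit argument simply suppresses the compiler-supplied default:

public ActionResult Legacy()
{
    return View("Index"); // explicit name wins; [CallerMemberName] only fills in omitted arguments
}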

NOTE: I will point out that you may want to consider whether you even need to test the name of a ViewResult to begin with. Assuming you do though, I find this technique to be much cleaner than trying to mock out a whole request just to see what the {action} is.

Semantic Markdown


Markdown, the lite, little, plain-text markup language that could just keeps growing and growing.

It’s heavily leveraged by two of the most prominent developer communities on the internet, GitHub and StackOverflow, and is supported on more sites every day. To be honest, I quite frequently find myself accidentally typing Markdown into Gmail because I’ve grown so accustomed to it. Markdown has several good browser-based editors (like Markable.in and Dillinger.io), fantastic desktop editors for Windows (MarkdownPad) and OSX (Mou), as well as parsing libraries for every programming platform on the planet.

Markdown is almost nine years old now and at this point I think it’s safe to say that it has become ubiquitous in the information technology space. What’s particularly interesting to me, and what this post is about, is the similarities in growth patterns between Markdown and the heavyweight champion of rich text markup: HTML.

During the late ’90s browser wars, vendors “embraced and extended” HTML in all kinds of crazy ways. (Remember the <blink> and <marquee> tags?) Likewise, Markdown is “embraced and extended” in the form of “flavors”, the most famous of which is GitHub Flavored Markdown. GitHub has extended the language with all sorts of auto-hyperlinking syntactic sugar, syntax-highlighted code and to-do lists complete with checkbox status indicators. The Markdown Extra project adds the ability to create tables, definition lists, footnotes and abbreviations. MultiMarkdown includes bibliographical-style citations, support for mathematic equations, and “critic markup” similar to what you’d see when an editor reviews a document. Aaron Parecki has enhanced Markdown to support user-defined macros and binding to YAML data sets, and there are several other flavors roaming around out there (which mostly do less interesting things).

Here’s some examples of Markdown flavors:

GITHUB FLAVOR
=============

## syntax highlighted code
```ruby
require 'redcarpet'
markdown = Redcarpet.new("Hello World!")
puts markdown.to_html
```

## to-do list
- [x] list syntax required (any unordered or ordered list supported)
- [x] this is a complete item
- [ ] this is an incomplete item

MARKDOWN EXTRA FLAVOR
=====================

## table
| Item      | Value |
| --------- | -----:|
| Computer  | $1600 |
| Phone     |   $12 |
| Pipe      |    $1 |

## definition list
Apple
:   Pomaceous fruit of plants of the genus Malus in 
    the family Rosaceae.
Orange
:   The fruit of an evergreen tree of the genus Citrus.

## footnote
That's some text with a footnote.[^1]
[^1]: And that's the footnote.

MULTIMARKDOWN FLAVOR
====================

## citation
Let's cite a fake book.[p. 42][#fake]
[#fake]: John Doe. *A Totally Fake Book*. Vanity Press, 2006. 

## review
Strike out a {--word--}.
Insert a {++word++}.
Replace a {~~word~>with another~~}.
Highlight a {==word==}.
Comment on something.{>>This is a comment<<}.

AARON PARECKI FLAVOR
====================

## yaml binding
---
title: Semantic Markdown
author: {first: Nik, last: Molnar}
description: some yaml content
---
This post, '#{title}', is by #{author.first}...

## macro
![:vimeo 600x400](13697303)
![:youtube 600x400](G-M7ECt3-zY)
![:github](nikmd23/ossperks)

Of course, just like HTML, the deviations from “the Markdown standard” have some people concerned. Jeff Atwood posted a call to action to see Markdown standardized and the W3C Markdown Community Group has produced a set of standard Markdown tests to test parsers with. We’ll all have to wait and see what shakes out of the debates moving forward.

The really exciting, somewhat recent, advancement in this space is the rise of what I’m calling Semantic Markdown. Semantic Markdown, like semantic HTML, is just standard (or flavor-enhanced) Markdown that is structured in such a way as to make it machine readable.

I’ve been thinking about this technique for several months now since it is the cornerstone of Semantic Release Notes, a specialized format for writing software release notes that Anthony and I have been working on in our spare time. The idea is simple: a user writes Markdown formatted in such a way that our parser can understand what new features, breaking changes and bugs were addressed in a given release of a piece of software.

Turns out, we’re not the only ones who have had this idea!

API Blueprint uses Semantic Markdown to document REST and web APIs. The input Semantic Markdown can be automatically turned into an interactive documentation website, test suites, mocks and more. The creators of API Blueprint even have tools that help developers generate Markdown from cURL trace output.

With much less focus on developers, Memofon uses Semantic Markdown to generate interactive, shareable mind maps. For example, this Markdown:

MEMOFON
=======

- _italic_
- **bold**
- [links](http://google.com)
 - images
	![](/images/doc/grasshopper.png)
- blockquote
  > The question is, whether you can make
  >> words mean so many different things.
- code
    var test = function test() {
      return this.isTest();
    };

produces this mind map:

[Mind map rendered by Memofon]

which is extremely cool!

Finally, Fountain is a Semantic Markdown based screenplay authoring format that provides for things like characters, scenes and dialog, all rendered out in the standard screenplay format. This can be seen in the Fountain and PDF versions of the screenplay for the 2003 movie Big Fish.

I really like the idea of Semantic Markdown and am glad to see it beginning to gain traction. Of course it’s easy to write, requires no special tooling and is human readable – but all those benefits come directly from Markdown itself. It’s also easy to parse out domain-specific semantics once the Markdown has been rendered into HTML, because libraries like CsQuery and HTML Agility Pack make navigating HTML with CSS selectors and XPath expressions much more approachable than dealing with abstract syntax trees yourself.
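As a rough sketch of that approach (the heading text and HTML shape here are hypothetical), HTML Agility Pack can pull structured data out of Markdown-rendered HTML with a single XPath query:

// Assumes the Markdown has already been rendered to an HTML string.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);

// Grab every list item in the <ul> that follows a "New Features" heading.
var features = doc.DocumentNode
    .SelectNodes("//h2[.='New Features']/following-sibling::ul[1]/li");

foreach (var feature in features)
{
    Console.WriteLine(feature.InnerText);
}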

So take a spin with these Semantic Markdown tools and continue to expand the possible uses of Markdown!

Introducing Nonacat: Web Extensibility and Hacking GitHub

A few weeks ago I was honored to be selected as a speaker at MonkeySpace 2013 in Chicago.


For the uninitiated, MonkeySpace (formerly MonoSpace) is a cross-platform and open source conference which covers topics such as developing for the iPhone, Android, Mac, and *nix platforms using .NET technologies. It’s put on by the good folks at MonkeySquare, and has quickly become one of my favorite conferences.

My presentation was titled “Introducing Nonacat (Guerilla Hacking an Extra Arm onto GitHub)” and covered lots of “web extensibility” techniques, each with an example of extending the GitHub website/service. The presentation’s namesake, Nonacat, is a space-age mutant version of GitHub’s Octocat with an extra arm. (“Nona” being the prefix for nine.)

Nonacat designed by @headloose

The presentation covered lots of extensibility techniques and tools, which I promised I’d enumerate with plenty of links on my blog. So, without further ado, here is my list of wonderful tools to extend GitHub:

  • MarkdownPad – The only Windows-based Markdown editor I know of that fully supports GitHub Flavored Markdown, as well as instant preview and many other useful features. Great for working on long .md files.
  • Emoji Cheat Sheet – An online visual listing of all the emoji icons that GitHub (and several other services) support with nice click-to-copy functionality for quickly dropping an emoji into an issue comment. I’m sure this site was invaluable for the authors of Emoji Dick.
  • Contributing.md – A GitHub convention that allows for a repository owner to hook a message into “GitHub’s chrome” and describe the way that users should contribute to a project.
  • 5 Minute Fork – An ingenious little service from Remy Sharp that allows a user to clone and host a repository online with a click of a button. Here’s an example link, which when clicked will automatically clone my Oss Zero to Sixty repository and host it for you to browse online.
  • Huboard – A web based GitHub issue management GUI with a Trello/kanban board vibe.
  • jsFiddle – Many of my readers will know about jsFiddle, but did you know that it automatically hooks into GitHub Gists? This is a great feature to leverage to enable shareable, executable code samples.
  • Executify – A service very similar to jsFiddle, but for C# code based on scriptcs.
  • User Scripts – There are lots of pre-built, portable little user scripts out there that enhance the GitHub experience. User scripts are much lighter weight than browser extensions and very easy to write since they are based on JavaScript and the browser APIs you already know. I’ve created one for tracking user votes on GitHub issues.
  • Web Hooks – Not a tool specifically, but rather a technique. I covered tools useful for debugging a web hook, including RequestBin, PageKite and my favorite: ngrok.
  • Revision.io – A service for creating a shareable, embeddable change log from your GitHub repository.
  • Signatory.io – A service I created to demonstrate GitHub’s API and web hooks which allows repository owners to manage their contributor license agreements painlessly.
  • Diagramming Tools – If a picture is worth a thousand words, then we should be putting more of them into our online conversations. AsciiFlow (ascii diagrams), WebSequenceDiagrams.com (sequence diagrams), MemoFon (mind maps) and yUml (uml class diagrams) allow you to do just that with convenient (and editable) text based input mechanisms.
  • Issue2PR – I didn’t find out about this service until after the conference, but it is very handy as it allows you to convert any given issue in your repository to a pull request.

If you have a service, tool or technique that improves your GitHub experience, please do share it in the comments.

NOTE: The video of this presentation, although not as high quality as I’d like, is now available:

Introducing Nonacat (Guerilla Hacking an Extra Arm onto GitHub) – Nik Molnar from Monkey Square on Vimeo.
