nik codes

Archive for the category “.Net”

PerfMatters.Flush now CourtesyFlush

Pluralsight If you like my blog, you’ll love my Pluralsight courses:
Tracking Real World Web Performance
WebPageTest Deep Dive

It’s been nearly a month since I released PerfMatters.Flush on NuGet. The library has been getting people talking about performance and thinking about how to improve their web applications.

Unfortunately for me, I regretted releasing the library almost instantly, or at least as soon as Mike O’Brien suggested a much better name for it on Twitter:

For the uninitiated, a courtesy flush refers to the first, early flush you do in the restroom to avoid stinking the place up.


My library is basically the same thing, except, in a much more hygienic environment. Flushing your HTTP response early also provides everyone a benefit.

All that to say I’ve decided to rename PerfMatters.Flush to CourtesyFlush. I’ve “redirected” the old package to use the new one, so you don’t need to worry if you were using the old library. In addition, I’ve also added support for .NET 4.0 in this release.

PerfMatters.Flush Goes 1.0!

In my previous two performance related posts I’ve gone on and on about the benefits of flushing an HTTP response early and how to do it in ASP.NET MVC. If you haven’t read those yet, I recommend you take a quick moment to at least read Flushing in ASP.NET MVC, and if you have a little extra time go through More Flushing in ASP.NET MVC as well.

I think those posts did a decent job of explaining why you’d want to flush early. In this post I’m going to dig into the details of how to flush early using my library, PerfMatters.Flush.

Three Usage Patterns

PerfMatters.Flush is designed to be easy to use, offering a few different usage models. Pick the one that works best in your scenario, and feel free to mix and match across your application.

1. Attribute Based

The easiest way to use PerfMatters.Flush is via the [FlushHead] action filter attribute, like this:

[FlushHead(Title = "Index")]
public ActionResult Index()
{
    // Do expensive work here
    return View();
}

The attribute can be used alone for HTML documents with a static <head> section. Optionally, a Title property can be set for specifying a dynamic <title> element, which is very common.

2. Code Based

For more complex scenarios, extension methods are provided which allow you to set ViewData or pass along a view model:

public ActionResult About()
{
     ViewBag.Title = "Dynamic title generated at " + DateTime.Now.ToLocalTime();
     ViewBag.Description = "A meta tag description";
     ViewBag.DnsPrefetchDomain = ConfigurationManager.AppSettings["cdnDomain"];

     this.FlushHead(); // flush the <head> before the expensive work begins

     // Do expensive work here
     ViewBag.Message = "Your application description page.";
     return View();
}

As you can see, this mechanism allows for very dynamic <head> sections. In this example you could imagine a <title> element, <meta name="description" content="…"> attribute (for SEO purposes) and <link rel="dns-prefetch" href="…"> (for performance optimization) all being set.

3. Global Lambda

Finally, PerfMatters.Flush offers a model to flush early across all your application’s action methods – which simply leverages the same global action filters that have been in ASP.NET MVC for years now:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
     filters.Add(new HandleErrorAttribute());
     filters.Add(new FlushHeadAttribute(actionDescriptor =>
         new ViewDataDictionary<CustomObject>(new CustomObject())
         {
             {"Title", "Global"},
             {"Description", "This is the meta description."}
         }));
}

In this case we pass a Func&lt;ActionDescriptor, ViewDataDictionary&gt; to the FlushHeadAttribute constructor. That function is executed for each action method. This example is pretty contrived since the result is deterministic, but you can see both a custom model (CustomObject) and ViewData in use at the same time.

In real world usage the actionDescriptor parameter would be analyzed and leveraged to get data from some data store (hopefully in memory!) or from an IOC container.

Installation & Setup

Getting up and running with PerfMatters.Flush is as easy as installing the NuGet package.

From there, you’ll want to move everything you’d like to flush out of _Layout.cshtml to a new file called _Head.cshtml (which sits in the same directory). Here’s an example of _Head.cshtml:

<!DOCTYPE html>
<html>
<head>
     <meta charset="utf-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     @if (ViewBag.Description != null)
     {
         <meta name="description" content="@ViewBag.Description">
     }
     <title>@ViewBag.Title - My ASP.NET Application</title>
</head>

Here’s its corresponding _Layout.cshtml file:

@Html.FlushHead()
<body>
     <!-- Lots of lovely HTML -->
     <div class="container body-content">
         <hr />
         <footer>
             <p>&copy; @DateTime.Now.Year - My Application</p>
         </footer>
         @RenderSection("scripts", required: false)
     </div>
</body>
</html>

Notice the @Html.FlushHead() method on the first line? That’s the magic that stitches this all together. It allows you to use MVC the way you’re used to on action methods that you don’t want to flush early, and opt in (via one of the usage models described above) when you do.

Wrapping Up

PerfMatters.Flush has been fun for me to work on. It’s currently being used in production and has nicely boosted perceived performance on the pages that do any database queries.

To be honest, I wish that PerfMatters.Flush didn’t have to exist. I’ve asked the ASP.NET team to look into baking first class support for flushing early into the framework, but I don’t foresee that happening for quite a while. Either way, there are tons of applications built in MVC 3, 4 and 5 that can leverage this now.

The project is open source, and the code is hosted on GitHub. I’d love to hear your feedback on ways to make it better and easier to use.

Understanding AppDomain Unloads & Old Lessons, Learned Anew

I spent the better part of two days pulling my hair out about a bug report claiming a website that opened an OleDbConnection to Microsoft Access would break Glimpse, my open source project.

I couldn’t believe it. Glimpse knows nothing about OleDb or Access, but alas, the bug was reported and I was able to reproduce it.

With reproduction in hand, I quickly figured out what was really happening. Opening the connection to Access caused the current AppDomain to unload, which in turn tore down Glimpse’s internal data stores resulting in the undesired behavior.

But why was the AppDomain being unloaded? I know tons of applications out there, running on little web servers underneath people’s cubicle desks, leverage Access as a data backend. Surely they don’t recycle the AppDomain after every database connection, do they?

Turns out, this problem would have been very easy to solve if I hadn’t oversimplified ASP.NET’s behavior in my head. I’ve known for a long time that changing a site’s web.config causes the AppDomain to recycle, but that’s only a partial truth nowadays. With the help of a post by Tess Ferrandez that I found while scouring the internet, I learned that as of ASP.NET 2.0 the file system watcher watches much more than just web.config for changes – including everything in the bin directory and its subdirectories. (I found out about several other reasons why an AppDomain might recycle from her excellent post too.)

Once I correctly understood modern ASP.NET’s behavior, I realized that connecting to the .MDB Access file in my bin directory was causing the file system watcher to kick off an application domain unload. Awesome! I moved the database out of the bin and the problem was solved.

Of course, after wasting more than a day of tinkering with this, I thought “There must be a way to figure this out without Google’ing high and wide”.

Turns out, there is! Unfortunately it’s not very obvious and requires some private reflection. Scott Gu covers the technique in a blog post from late 2005, before I was even involved in .NET.

The (simplified!) code, which you’d place in the AppDomain.DomainUnload event handler, looks like this:

var httpRuntimeType = typeof(HttpRuntime);

// "_theRuntime" is HttpRuntime's private static singleton field
var httpRuntime = httpRuntimeType.InvokeMember(
    "_theRuntime",
    BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField,
    null, null, null) as HttpRuntime;

// "_shutDownMessage" holds the reason for the shutdown
var shutDownMessage = httpRuntimeType.InvokeMember(
    "_shutDownMessage",
    BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
    null, httpRuntime, null) as string;

The shutDownMessage will be a string describing the exact reason why the AppDomain is being recycled – which is great to log and saves lots of guess work.

Now that I know this technique, I’ve begun logging all AppDomain recycles within Glimpse’s internal log in an effort to aid diagnosis of future problems like this.

It’s amazing just how easy it is to take for granted the mountain of code we build on top of every day. This year I’m really going to try and not gloss over the lessons learned by those who came before us – just a mere 8 years ago.


New Job

“Choose a job you love, and you will never have to work a day in your life”.

It’s no secret that my love of web development has materialized into Glimpse, the project I started a little over a year ago with Anthony Van der Hoorn. I’ve had an amazing ride with Glimpse; becoming an ASP Insider, getting to meet many of my dev heroes and working with fantastic contributors from all around the world.

About a month ago that ride took a new and unexpected turn. An opportunity to work full time with the open source community on our little project arose from a seemingly unexpected source: Red Gate.

Red Gate

I was taken by surprise by Red Gate’s proposal and spent several weeks with Anthony and Red Gate’s top brass discussing our philosophies of open development, open source, ASP.NET, the web development community, software development practices and yes, even Reflector. As the discussions progressed I became more and more convinced that Red Gate genuinely believed in our vision of Glimpse, our model of development, and most importantly, the community that we were serving. Given that, I decided to take a new job with Red Gate – getting to focus a much larger portion of my time on the things I love: Glimpse and the web development community.

There are a lot more details about what this means for Glimpse on the Glimpse blog, so I won’t rehash that here, but in summary it’s all positive. From a personal perspective, I will remain in New York City, and, in addition to spending a majority of my time working on Glimpse, I have also been given the opportunity to interact with the community. This basically means that part of my new position requires writing blog posts, speaking at user groups/meetups, working with open source and attending conferences like Monospace in October.

This is all new and very exciting to me. Keep an eye on this blog and my twitter feed for updates as things progress.


NYC ALT.NET: Building Win8 Metro Apps with JS & HTML

At the July meeting of the New York ALT.NET Meetup, Microsoft developer evangelist Rachel Appel presented a session on building Windows 8 Metro style applications with JavaScript and HTML. Here are my notes from that presentation.

  • In a room of ~40 developers, all said they felt comfortable with JavaScript, but only one or two considered themselves to have any design skills.
  • The presentation started with a quick demo of Windows 8 running on Rachel’s tablet. It was all pretty standard stuff if you’ve seen Windows 8. If you haven’t – head straight over to the Building Windows 8 blog and read up right away!
  • To build Windows 8 apps, you have to have Visual Studio 2012 running on Windows 8 itself. Visual Studio 2012 does work on Windows 7, you just can’t create Windows 8 Metro apps with that configuration.
  • Out of the box, Visual Studio 2012 has several JavaScript based Metro templates that have lots of built-in features, including some layout and styling along with the usual library references.
  • There does not appear to be a built in UI pattern like MVC or MVVM. I assume that libraries that help structure code, like backbone.js or knockout.js, will be popular.
  • There is a concept of “pages” (not HTML pages) built into the template. These were not covered in depth – I need to research this more.
  • I was happy to see the ECMAScript5’s “use strict” directive in use in the template code.
  • No additional work is needed to get basic touch gestures working when using the JavaScript controls.
  • Running (i.e. F5 debugging) an application from Visual Studio causes the application to open up in full screen mode. This makes breakpoint debugging very painful, with lots of Alt+Tab window switching.
  • A simulator (not emulator) can be used to ease this pain. The simulator is basically a remote desktop connection, back into your own machine, which shows the application running along with tooling to do things like rotate the device orientation and simulate touch events/gestures.
  • Once an application is deployed to the app store, and downloaded by a user, it is placed in an obscure Program Files directory. If a user finds the application, they would be able to view the source JavaScript. If this is a concern, I’d recommend using a JavaScript obfuscator like UglifyJS.
  • Visual Studio will show you the JavaScript source for the core libraries, but it will warn and stop you from changing said source. Since JavaScript is dynamic, you could replace a method implementation at runtime.
  • Visual Studio tries to guide developers into properly implementing the Metro UX guidelines. A full set of documentation and resources can be found in the Windows Dev Center.
  • Interesting point made about the reduced relevancy of HTTP caching/CDNs in HTML based Metro apps since a majority of resources (sans data/JSON) will be included in the bundle.
  • Metro style JavaScript applications run in IE10 under the covers – no surprise there.
  • HTML “controls” use standard elements (i.e. divs) with data-* attributes.
  • Application manifest file (XML) contains tons of settings, but there is a nice editor for the file that hides away the XML and makes editing “easy”.
  • Access to device APIs (camera, geolocation, etc.) must be approved by the user, similar to Facebook’s permission model.
  • Background tasks are supported.
  • Applications can have many entry points like a tile click or a search result.
  • Side loading of applications is handled by Visual Studio.
  • Tiles can be short (square) or wide (rectangle), live or “dead” – users choose and developers should accommodate those preferences.

All in all, it was a very informative session. It was video recorded, so I’ll update this post as soon as the video is posted online.


Integrating SpecFlow and xUnit

This post is all about getting SpecFlow, my favorite .NET BDD framework, and xUnit, my favorite .NET unit testing framework, playing along nicely together.

NuGet itself does most of the heavy lifting here, but there are a couple of gotchas along the way. First, the easy stuff:

  1. Create a standard Class Library project in Visual Studio.
  2. Using NuGet, add the packages SpecFlow, xunit and xunit.extensions with either the Package Manager Console or the Manage NuGet Packages dialog. xunit.extensions is required for some of the more advanced Gherkin syntax scenarios.
  3. With this in place, SpecFlow will still generate NUnit tests. To configure it to use xUnit tests, create an app.config file in the Class Library project with the following content:

     <?xml version="1.0" encoding="utf-8"?>
     <configuration>
       <configSections>
         <section name="specFlow"
                  type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
       </configSections>
       <specFlow>
         <unitTestProvider name="xUnit" />
       </specFlow>
     </configuration>

That’s it, SpecFlow will now generate xUnit tests which should compile perfectly.

I’ve wrapped all of these steps together into a NuGet package, tentatively titled SpecFlow.xUnit, and contributed it to the SpecFlow project on GitHub. They have just accepted my pull request, so hopefully we’ll see a one-click package to integrate these two great libraries very soon!

Of course, the real beauty of SpecFlow is its Visual Studio integration, which you can install from the Visual Studio Extension Manager. It provides item templates and step debugging on Gherkin .feature files.


I have a few issues with the way that Visual Studio’s Extension Manager works, but that’s another post and has nothing to do with SpecFlow.



JSON with Padding (JSONP) is a slight “hack” to allow for cross origin ajax requests. Instead of making requests the standard ajax way, leveraging XMLHttpRequest, JSONP techniques make requests by writing a <script> tag into the DOM with the src attribute set to the appropriate URL. Because browsers do not limit the origin of scripts loaded via the <script> tag, data, via JavaScript, can be loaded from any of the hundreds of APIs and endpoints that support JSONP.
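The script-tag mechanism can be sketched in plain JavaScript. This is an illustrative sketch, not jQuery’s actual implementation; the endpoint URL and callback name below are made up:

```javascript
// Build the JSONP request URL: the server reads the `callback` query
// parameter and wraps its JSON response in a call to that function.
function buildJsonpUrl(endpoint, callbackName) {
  var separator = endpoint.indexOf('?') === -1 ? '?' : '&';
  return endpoint + separator + 'callback=' + encodeURIComponent(callbackName);
}

// In a browser, the request itself is made by injecting a <script> tag:
//   window.myCallback = function (data) { console.log(data); };
//   var script = document.createElement('script');
//   script.src = buildJsonpUrl('http://api.example.com/items', 'myCallback');
//   document.head.appendChild(script);
```

When the script loads, the browser executes the returned JavaScript, which invokes the named callback with the data.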

Consuming JSONP services is ridiculously easy and has had support built into jQuery since version 1.2.

$(function () {
    $.ajax({
        dataType: 'jsonp',
        url: '',
        success: function (data) {
            console.log(data);
        }
    });
});

In this case, the data that is returned from the server is logged to the browser console, but any JavaScript could be executed.

Enabling JSONP on top of an existing JSON endpoint (server side) is fairly simple. It requires changing the HTTP response content type, and wrapping the resultant JSON string in a JavaScript function call (the padding part).
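The wrapping step itself is just string concatenation. Here’s a minimal sketch (the function and callback names are illustrative):

```javascript
// Wrap a JSON string in a call to the client-supplied callback function.
// The server returns this string instead of the raw JSON, and sets a
// JavaScript content type rather than application/json.
function padJson(jsonString, callbackName) {
  return callbackName + '(' + jsonString + ');';
}

// padJson('{"id":1}', 'myCallback') produces the script: myCallback({"id":1});
```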

Unfortunately, ASP.NET MVC does not ship with JSONP support out of the box, but it is easy to add with the great extensibility points available in MVC. Here is an implementation of an MVC ActionResult which would enable any controller to begin to return proper JSONP results.

using System;
using System.Text;
using System.Web;
using System.Web.Mvc;
using System.Web.Script.Serialization;

namespace NikCodes.ActionResults
{
    public class JsonpResult : ActionResult
    {
        public string CallbackFunction { get; set; }
        public Encoding ContentEncoding { get; set; }
        public string ContentType { get; set; }
        public object Data { get; set; }

        public JsonpResult(object data) : this(data, null) { }

        public JsonpResult(object data, string callbackFunction)
        {
            Data = data;
            CallbackFunction = callbackFunction;
        }

        public override void ExecuteResult(ControllerContext context)
        {
            if (context == null) throw new ArgumentNullException("context");

            HttpResponseBase response = context.HttpContext.Response;

            response.ContentType = string.IsNullOrEmpty(ContentType) ? "application/x-javascript" : ContentType;

            if (ContentEncoding != null) response.ContentEncoding = ContentEncoding;

            if (Data != null)
            {
                HttpRequestBase request = context.HttpContext.Request;

                var callback = CallbackFunction ?? request.Params["callback"] ?? "callback";

#pragma warning disable 0618 // JavaScriptSerializer is no longer obsolete
                var serializer = new JavaScriptSerializer();
                response.Write(string.Format("{0}({1});", callback, serializer.Serialize(Data)));
#pragma warning restore 0618
            }
        }
    }
}

This action result simply takes in an object, serializes it using the standard MVC serialization method and wraps it up in a callback function. It supports jQuery’s JSONP implementation transparently. Using it couldn’t be simpler:

public ActionResult ActionMethod()
{
    var data = GetDataSomehow();
    return new JsonpResult(data);

    // constructor overload for overriding callback function:
    // return new JsonpResult(data, "callback function");
}

Of course, cross origin resource sharing (CORS) is much more powerful than JSONP, and will ultimately replace it – but until then, or as long as we have < IE 10, we will still have to leverage this technique.

Download the code, try it out. Let me know if this is valuable enough to put on NuGet.


15 Seconds of Fame

Big thanks goes out to Carl Franklin and everyone at Pwop Productions!

They let me hang out at the studio a little bit late Wednesday night – and even had me on the Dot Net Rocks podcast as a special guest for a few minutes!  Check out episode 362, and all of Carl’s great content!

I’d also like to thank the MossMan, Randy Drisgill himself – ’cause, well, he knows what he did…


FireFox, meet MSDN

I find myself using FireFox all the time.  I never thought I’d leave IE, but since I develop web applications almost all day long, FireFox’s extensions are invaluable to me.

IE does have a few extensions, but they come nowhere close to FireBug.

Anyway, since I’m in FireFox, doing web development, I would like to look up web development documentation right in FireFox.

This led me to create my first browser extension – the MSDN Search plugin for FireFox! 


It works exactly how you’d think it would.  It is almost always faster than opening up the .Net or WSS SDK documentation as well.  Please give it a try and let me know what you think.

To use it simply extract and place the two files in your searchplugins folder, usually located at: C:\Program Files\Mozilla Firefox\searchplugins

Once you restart FireFox, MSDN should be available from your search box. 

For those of you who don’t find yourself using the search box that often, here are a few tips to get you up to speed:

  • Ctrl + E selects the search box
  • Ctrl +Up or Ctrl + Down cycles through all your installed search engines
  • Alt + Enter will open up the search results within a new tab

Leave feedback in the comments!



Actively refused connections

Recently I was using the System.Net.Mail namespace to send out email messages from a simple .NET application.  I was receiving an error which boiled down to “the server is actively refusing the connection.”

This message is not particularly helpful.  If you run into this issue you might want to check that your mail server will allow SMTP relays.  My server was set to allow SMTP relays – yet I was still getting the issue.

I found a Microsoft tool called SMTPDiag (for use with Exchange server) that runs through the network performing various tests to find out why SMTP is not working.

I ended up failing on one of the final steps – connecting to the exchange server.

I ran

netstat -n -a

on the server and found that port 25 was open and listening.

Then on my client machine I tried to create the simplest connection I could to the server:

telnet [mailServer] 25

This refused to connect as well. 

I checked the firewall to see if port 25 was blocked in some way and it was not.

I was basically flabbergasted and about to give up when my anti-virus program popped up to do an update.  A light bulb went off in my head.


McAfee AntiVirus was blocking all outgoing traffic over port 25 on my machine.  I didn’t even think about anti-virus software blocking ports, but it is quite common, especially with mail ports.  I felt like an idiot, but I decided to share my experience to help others out there.

PS – SMTPDiag is quite useful, and Marc Grote has a good tutorial on it.
