I visited my local pharmacy last Friday to pick up a prescription. I sat in front of the pharmacist, who gave me very thorough guidance about its usage. At the time of payment - before she handed me the boxes - she suddenly said:

Excuse me, but I need to get a signature from someone else, as I am a trainee.

Immediately another pharmacist came by, put her smart card into the computer, checked the boxes against the electronic prescription, and then signed off on the delivery. The whole process took maybe 20 seconds, and I’m certain the trainee felt safer knowing her work was checked for mistakes. And myself - as a client - I felt like they cared.

At the risk of sounding like Uncle Bob: this episode reminded me of how one should act if you really care about quality. This model of signing off on other people’s work is built into some software development processes, like the GitHub flow. If you are not using pull requests, you can still simulate this kind of apprenticeship model with strict use of code reviews before merging into your trunk. Remember that this doesn’t have to cover your whole codebase; you can be very strict on important, core modules and let the others evolve freely.

OWIN stands for the “Open Web Interface for .NET”. Basically it is a reasonably simple specification that defines how data flows through the request pipeline and how to attach to that pipeline. It is a specification for both the server part and the application part (middleware in OWIN terms).
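
To make that concrete: in OWIN terms the whole pipeline is built from functions of the shape Func&lt;IDictionary&lt;string, object&gt;, Task&gt;, where the dictionary is the request/response environment. The sketch below is just my own illustration of that idea, not code from any particular project; it uses only well-known environment keys from the spec (owin.RequestPath, owin.ResponseHeaders, owin.ResponseBody), and hosts like Katana offer friendlier wrappers on top of this.

using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

// The OWIN application delegate: environment dictionary in, Task out.
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

// A minimal middleware: it is handed the next AppFunc in the pipeline
// and decides what to do with the request before (possibly) passing it on.
public class HelloOwinMiddleware
{
    private readonly AppFunc _next;

    public HelloOwinMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        var path = (string)environment["owin.RequestPath"];

        if (path == "/hello")
        {
            // Everything about the response also lives in the environment dictionary
            var headers = (IDictionary<string, string[]>)environment["owin.ResponseHeaders"];
            headers["Content-Type"] = new[] { "text/plain" };

            var body = (Stream)environment["owin.ResponseBody"];
            var payload = Encoding.UTF8.GetBytes("Hello from OWIN");
            await body.WriteAsync(payload, 0, payload.Length);
        }
        else
        {
            // Not ours - let the rest of the pipeline handle the request
            await _next(environment);
        }
    }
}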

When I first saw the project I was not that convinced, but since then lots of applications that rely on OWIN instead of the old System.Web stack have emerged, and there are also hosting components that implement the spec. SignalR is a good example of middleware, and Katana of a host. For me the new Helios project is also very interesting, and I hope it succeeds, as that would make hosting ASP.NET Web API very lightweight. And light is never bad.

So the ecosystem has matured - but so what? What really made me support OWIN is what it did to my codebase. In one of my pet projects I use claims-based authentication and authorization with Windows Azure Access Control Services. That is a great service (although it is being replaced with Azure AD), but on the ASP.NET MVC application side it has been a pain to integrate. The amount of web.config garbage it needs is huge, and I have broken it multiple times. Luckily Microsoft released an OWIN-based implementation of the server-side components and promised a drastically simplified configuration model. You just register the middleware with OWIN, specify where to find the metadata, and give the application identifier:

public void ConfigureAuth(IAppBuilder app)
{
    app.UseCookieAuthentication(
        new CookieAuthenticationOptions
        {
            AuthenticationType = 
               WsFederationAuthenticationDefaults.AuthenticationType
        });

    app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
        {
            MetadataAddress = "https://login.windows.net/some-azure-ad.onmicrosoft.com/federationmetadata/2007-06/federationmetadata.xml",
            Wtrealm = "http://myapps/somerealm",
        });
}

For me the simplified configuration was not the only benefit: the OWIN registration also gave me the option to register everything authentication-related in one place, which makes the code very readable. Before OWIN I had:

  • Various XML configurations for the WS-Federation registration
  • A custom ClaimsAuthenticationManager to do in-app claims transformation (look up some extra information from the database and include it in the claims)
  • An account controller to handle sign-in and sign-out actions
  • A handler to add the user’s roles to all outgoing requests for better usability (hide client-side elements in the single-page application based on the user’s role)

Instead, I now have something along these lines:

public void ConfigureAuth(IAppBuilder app)
{
	app.SetDefaultSignInAsAuthenticationType(WsFederationAuthenticationDefaults.AuthenticationType);
	app.UseCookieAuthentication(
		 new CookieAuthenticationOptions
		 {
			 AuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType,

			 // Do claims transformation here to avoid using an external
			 // STS to map certain users to certain role claims
			 Provider = new CookieAuthenticationProvider
			 {
				 OnResponseSignIn = ctx =>
				 {
					 ctx.Identity = TransformClaims(ctx.Identity);
				 }
			 }
		 });

	app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
	{
		MetadataAddress = ConfigurationManager.AppSettings["metadata"],
		Wtrealm = ConfigurationManager.AppSettings["realm"]
	});

	// Map sign in action
	app.Map("/signin", map =>
	{
		map.Run(async ctx =>
		{
			if (ctx.Authentication.User == null ||
				!ctx.Authentication.User.Identity.IsAuthenticated)
			{
				ctx.Response.StatusCode = 401;
			}
			else
			{
				ctx.Response.Redirect("/");
			}
		});
	});

	// Map signout action
	app.Map("/signout", map =>
	{
		map.Run(async ctx =>
		{
			ctx.Authentication.SignOut();
			ctx.Response.Redirect("/");
		});
	});
}

private static ClaimsIdentity TransformClaims(ClaimsIdentity identity)
{
	// ... add whatever claims are needed based on your own data source
	return identity;
}

Kudos to Dominick Baier for his clear post on this subject, which helped me move forward with the sign-in and sign-out actions.

WebApi + OWIN

At the same time I also moved Web API to OWIN-based hosting, even though I actually run on IIS. The reasoning was the same as with claims auth: I find the configuration model better.
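
For reference, the registration looks roughly like this - a minimal sketch assuming the Microsoft.AspNet.WebApi.Owin package, with just the standard route template rather than anything specific to my project:

using System.Web.Http;
using Owin;

public partial class Startup
{
    public void ConfigureWebApi(IAppBuilder app)
    {
        // All Web API configuration lives here instead of Global.asax
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // Attach Web API as middleware in the OWIN pipeline
        app.UseWebApi(config);
    }
}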

If you’re an ASP.NET developer, I suggest you start experimenting with the OWIN pipeline. It will pay off.

I have touched this subject twice already: first I blogged about forcing site rendering to be done with Internet Explorer’s latest engine. Then I faced a situation where the separate intranet zone (bad idea, Microsoft!) falls back to compatibility mode and does not respect the IE=edge meta tag the way internet zone web sites do.

Well… the saga isn’t over, as I faced this situation at work today. Again. I was going to put the IE=11 meta tag in place to force standards mode, but then I started to doubt how older IEs (9, 10) would interpret the “11” value. The short answer is: they don’t. Luckily you can specify several modes, and the browser will pick the first one it supports. To apply this, use either a meta tag in your page:

<meta http-equiv="X-UA-Compatible" content="IE=11; IE=10; IE=9; IE=8; IE=7; IE=edge" />

Or apply this IIS configuration to add the correct headers:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- No need to expose the platform -->
      <remove name="X-Powered-By" />
      <!-- Do not show IE compatibility view -->
      <remove name="X-UA-Compatible" />
      <add name="X-UA-Compatible" value="IE=11; IE=10; IE=9; IE=8; IE=7; IE=edge" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

Not nice, but it works.

I’ve always been a music fan. Not a die-hard fan, but one with lots of music and decent equipment. During the era of CDs, I routinely bought new discs, and they accumulated into the hundreds. Then hard disk prices fell and network speeds grew, and I wanted to digitize everything I had. This was years ago, maybe around 2004. I wanted everything in a lossless format, as digitizing was slow and I was not going to do it again. Support for “alternative” media formats (.flac, .ogg) was poor in the Windows world, so I chose lossless Windows Media Audio (WMA). I spent many evenings changing discs on my laptop and typing in album and track names when they were not found automatically in media info databases.

Fast forward almost ten years, and I have to admit I made the wrong decision: lossless WMA is not supported on all the devices and music servers I need. Plex Media Server is where I need the support most, as I use it to serve all our family media to all connected devices. It became evident that I needed a copy of my lossless audio library in some other format. I decided that this copy could be in a lossy format so that it would be easier to copy to offline devices, like my car’s audio system. And as the format needed to be ubiquitously supported, I went with MP3. Luckily I found a blog post by GeoffBa that automated this task in PowerShell. I made some changes so that I can re-run the script to keep the lossless and lossy folders synchronized. The script I use is attached below; I have used it for about a year already without a glitch. I hope it helps someone else in my situation.

# Convert WMA files to MP3
# Creates new mirrored folder structure
# 
# Adapted from: 
# http://geoffba.blogspot.fi/2011/04/converting-from-wma-to-mp3.html
 
$tool = '"C:\Program Files (x86)\WinFF\ffmpeg.exe"'
$succescounter = $failurecounter = 0
$sourceFolder = 'M:\Media\Music'
$targetFolder = 'M:\Media\MusicMP3'
$failedConversions = New-Object "System.Collections.Generic.List``1[System.String]"

# Start by copying all source files that are already mp3 or flac
echo "Copying all files that are already in correct format"
Invoke-Expression "& robocopy $sourceFolder $targetFolder *.mp3 *.flac *.jpg *.jpeg /e"

# Find .wma files and iterate through the folders recursively
foreach ($child in $(Get-ChildItem $sourceFolder -include *.wma -recurse))
{
    $wmaname = $child.fullname
     
    # Create name for target file.
    # Note that the function is case-sensitive so we handle that first.
    $wmaname = $wmaname.Replace("WMA","wma")
    $mp3name = $wmaname.Replace("wma","mp3")

	# Change target folder
	$mp3name = $mp3name.Replace($sourceFolder,$targetFolder)
	$newFileDirectory = $child.Directory.FullName.Replace($sourceFolder,$targetFolder)
	
	# Do nothing if target file already exists
	if (!(Test-Path -literalpath $mp3name)) 
	{	
		# Create target directory if it does not exist
		if (!(Test-Path -literalpath $newFileDirectory))
		{
			New-Item -ItemType directory -Path $newFileDirectory
		}
	 
		# The argument string that tells ffmpeg what to do...
		$arguments = '-i "' + $wmaname + '" -y -acodec libmp3lame -threads 0 -ab 160k -ac 2 -ar 44100 -map_metadata:g 0:g "' + $mp3name + '"'
		echo ">>>>> Processing: $mp3name"
		Invoke-Expression "& $tool $arguments"
	 
		# Lets see what we just converted, did everything go OK?
		$mp3file = get-item -literalpath $mp3name
	 
		# if conversion went well the mp3 file is larger than 0 bytes
		if ($mp3file.Length -gt 0)
		{
			echo "<<<<< Converted $wmaname"
			$succescounter++
		}
		else
		{
			echo "<<<<<< Failed converting $wmaname"
			Remove-Item $mp3name       
			$failedConversions.Add($wmaName)
			$failurecounter++
		}
	}
}
 
# We are done, so let's inform the user what the success rate was.
echo "Processing completed, $succescounter conversions were successful and $failurecounter were not."

if ($failurecounter -gt 0)
{
	echo "List of failed files:"
	echo $failedConversions
}

I already blogged about Orchard, the platform I originally used to host this blog. I needed a new blogging platform, and after all that complexity I wanted something simpler and easier to upgrade.

Lately GitHub Pages and Jekyll have drawn a lot of attention. The tipping point for me was reading what it took for Phil Haack to move his blog over. I tend to test new technology all the time, so I decided to take a shot at Jekyll as a blogging platform.

Some people have gotten Jekyll to work fine on Windows. I didn’t. I spent four hours on it and failed. Ruby was OK, RubyGems was OK, but installing some gems failed with compilation errors, and the resolutions that worked for other people did not help me. So I installed a new Linux VM just for this purpose. I would not call that experience painless either, but at least now I have a working GitHub-like local environment.

First I created a new dummy Jekyll site and started experimenting. Everything worked like a charm: features are limited if you plan to host on GitHub Pages, but they are enough for my blogging needs. I did not want to go the route of building locally and pushing to gh-pages; I wanted everything to be done on the server side, and to only run Jekyll locally when I need to debug.

Reading the conversion blog posts, I expected a very vibrant community around Jekyll themes. In the end there weren’t too many to choose from, as most of the themes needed plugin support, and I had just ruled that out. The themes by Made Mistakes drew my attention as they were minimalistic in the way I like, and I ended up forking his latest creation, the HPSTR theme, into my own repo. And it’s not just the layout: this theme has Google Analytics, social share buttons, Bing and Google site tools, and lots of other stuff included.

After forking the theme it was simply a matter of changing settings and creating the first posts; I was up and running very fast. I did not get the Jekyll import plugin to work, so I decided to convert my post HTML to Markdown mostly by hand. It took some time, but the only problem I had was with character sets: GitHub allows only UTF-8 without a byte order mark, and Visual Studio wants to save the BOM if you forget to override the default.

The outcome I have right now is a good-looking, typographically readable and responsive blog layout.

Layout reacts to different screen sizes. Desktop and mobile views compared.

There are some layout issues that I might tweak when I have time:

  • The H1 styles in a desktop browser are massive, which sometimes looks bad with my overly long blog post titles.
  • The background pattern could be changed to something else; right now I use the default.
  • Having the menu only in the hamburger icon dropdown might need a change: I would like to have the top-level menu always visible.
  • Favicons: I moved my old, small favicon over, but I still need to change the big Apple-specific favicons.

Comments to Disqus

As the site is static, I needed a new home for comments. I would have liked to support Discourse, but the theme already had support for Disqus, so I opted for the easy way. Importing the comments was the hard part: I had to create a Disqusting, WordPress-compatible import XML to get my comments into Disqus. I created the XML, imported it, and now the old comments are there and commenting works as expected.

RSS as before

I already had my RSS feed on FeedBurner. Feed usage is in decline, but it does not hurt to have one. I just changed the source address to point to the Jekyll-generated feed.xml.

What’s missing

I lost site search in the conversion: the new theme does not have a proper search, and I’m considering adding one to the theme myself and creating a pull request against the original theme.

I also lost the time-based archive view I had. That is not too big of a loss, as it revealed a bit too easily that I do not write enough blog posts.

I can’t use Windows Live Writer to write posts anymore. It was a good tool, sadly abandoned by Microsoft.

I do not have a WYSIWYG editor either; I just have to hope that the Markdown converts into nice HTML and that all images fall into their proper places. This is not a big con, as I use only a limited set of layout styles in my blog posts.

Speed? Upgrades?

With Orchard, site speed and platform upgrades were my pain points. So how did the gh-pages + Jekyll adoption change the situation? Completely, I would say:

The site is compiled after every change, and GitHub serves pretty aggressive cache headers. As a result my site is very fast. The hosting is pretty far away from Finland, which shows up as increased latency, but the negative effect is acceptable.

GitHub runs the site build and the hosting platform, and I do not have to take care of any of that. As I forked this blog from the theme repository, I can get fixes to the HTML, CSS and some of the JS by fetching from the original theme repository. My biggest risk right now is that there might be errors in the GitHub build that I cannot reproduce with my local Jekyll environment, but I’ll take that risk, as I can always revert to a previous version with simple git commands.