By default, ASP.NET Core API methods operate on JSON: they deserialize JSON from the request body to the model type and serialize the result back to the response body. JSON is everywhere and works well… unless you have very high throughput requirements. There are many alternative formats, but Google’s serialization format, Protocol Buffers, is one of the most widely used. It has undergone some changes recently: the old proto2 syntax has been replaced with proto3, and the latter even has official C# support.

The old proto2 used to have unofficial C# ports, and many ASP.NET MVC samples on the internet are based on those. I couldn’t find a working proto3 version, so I created my own.

To create custom input and output types for ASP.NET Core, two interfaces need to be implemented: IInputFormatter and IOutputFormatter. The easiest way to do this is to inherit from the InputFormatter and OutputFormatter base classes. Basically, ASP.NET MVC tells you the content type, the content, and the desired target type, and the custom formatter acts on those.

Naturally, this all needs to work for all possible types; otherwise the formatters would not be reusable. Proto3 has some strangeness in its APIs, like some of the useful constructors being internal. Luckily, with some source code reading one can find the method that does the real work when the actual type is not known at compile time: IMessage.MergeFrom(). Working input and output formatters are below:

using System;
using System.Threading.Tasks;
using Google.Protobuf;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Net.Http.Headers;

// The input formatter reading the request body and mapping it to the given data object.
public class ProtobufInputFormatter : InputFormatter
{
    static MediaTypeHeaderValue protoMediaType = MediaTypeHeaderValue.Parse("application/x-protobuf");

    public override bool CanRead(InputFormatterContext context)
    {
        var request = context.HttpContext.Request;
        MediaTypeHeaderValue requestContentType = null;
        MediaTypeHeaderValue.TryParse(request.ContentType, out requestContentType);

        if (requestContentType == null)
        {
            return false;
        }

        return requestContentType.IsSubsetOf(protoMediaType);
    }

    public override Task<InputFormatterResult> ReadRequestBodyAsync(InputFormatterContext context)
    {
        try
        {
            var request = context.HttpContext.Request;

            // Create an instance of the target message type and let protobuf
            // parse the request body into it; MergeFrom does the real work.
            var obj = (IMessage)Activator.CreateInstance(context.ModelType);
            obj.MergeFrom(request.Body);

            return InputFormatterResult.SuccessAsync(obj);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception: " + ex);
            return InputFormatterResult.FailureAsync();
        }
    }
}
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;
using Google.Protobuf;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Net.Http.Headers;

// The output formatter mapping the returned object to a Protobuf-serialized response body.
public class ProtobufOutputFormatter : OutputFormatter
{
    static MediaTypeHeaderValue protoMediaType = MediaTypeHeaderValue.Parse("application/x-protobuf");

    public override bool CanWriteResult(OutputFormatterCanWriteContext context)
    {
        if (context.Object == null || !context.ContentType.IsSubsetOf(protoMediaType))
        {
            return false;
        }

        // Check whether the given object is a proto-generated object,
        // i.e. whether its type implements IMessage<T>.
        return context.ObjectType.GetTypeInfo().ImplementedInterfaces
            .Where(i => i.GetTypeInfo().IsGenericType)
            .Any(i => i.GetGenericTypeDefinition() == typeof(IMessage<>));
    }

    public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
    {
        var response = context.HttpContext.Response;

        // Proto-encode the object and write the bytes to the response body.
        var protoObj = (IMessage)context.Object;
        var serialized = protoObj.ToByteArray();

        return response.Body.WriteAsync(serialized, 0, serialized.Length);
    }
}

The formatters need to be registered for ASP.NET MVC to use them. This can be done in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<MvcOptions>(options =>
    {
        options.InputFormatters.Add(new ProtobufInputFormatter());
        options.OutputFormatters.Add(new ProtobufOutputFormatter());
    });
}

And that’s it: now you can control the desired format per request by using either application/json or application/x-protobuf as the content and accept types. You can even mix and match: send in JSON but request protobuf back.
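To illustrate the mix-and-match case, a request could look something like the sketch below. The host and route are hypothetical; adjust them to match your own API.

```shell
# Hypothetical endpoint - substitute your own host and route.
# Send JSON in, but ask for a protobuf-serialized response back:
curl http://localhost:5000/api/values \
     -X POST \
     -H "Content-Type: application/json" \
     -H "Accept: application/x-protobuf" \
     -d '{"name":"example"}' \
     --output response.bin
```

The Content-Type header routes the request body through the JSON input formatter, while the Accept header makes MVC pick the protobuf output formatter for the response.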

Today I had some 20 minutes of spare time and I wanted to try the games my co-workers had recommended to me.


First off was Hearthstone: Heroes of Warcraft. I had opened the game the day before and was in the middle of the tutorial story. Opening the game again slowly got me to the same tutorial spot… except when I clicked “Play” I got this:

Hearthstone maintenance break.
Smells like an American engineer.

A three-hour maintenance break is looong, but I can live with that; there might be a valid reason (like an earthquake or a tsunami or a blown-up data center). Instead, what irritated me was that

  • the game did not tell me from the get-go that the service was closed, making me wait an extra 30 s.
  • the times are not in the local format; it should not come as a surprise to anyone that AM and PM are not universal notation.
  • I’m required to know both the current time and my time difference to PST.

The last one is the worst: it puts extra cognitive load on the player. Personally, I never figured out when the maintenance break would end - I would have needed to open a timezone application or google time zones to figure it out.

Instead of the above message, I would like to see the maintenance break expressed as a duration, e.g.

We are back online in 1 hour.

Vainglory

As I never got to play Hearthstone, I took the next game in the queue: Vainglory. This game I had installed but never opened. The first user experience I got when opening it was this:

Vainglory downloading an asset file.
Some game jargon for you.

First of all: why does the download take 15 minutes when I’m on a 100 Mb/s network? Very few people are willing to wait this long for a game session. Normally 500 MB of content downloads fast, but this game is different.

Secondly, what are the random numbers at the lower right of the modal window? No units, no formatting, just numbers. There already is a very clear percentage and a progress bar; why add extra clutter?

And last: does “Asset File” mean anything to normal people? In my opinion it is pure game-development jargon and should not be shown to players. Most people don’t even know that there are actual files being loaded in the background.

After all the waiting - just when I thought I would get to play - the game disappointed me with a second obligatory wait! And of course with another completely meaningless message: “Installing Data”:

Vainglory installing data.
Installing what?

I have the most powerful Android device ever made - the Nexus 6P - and this phase still took 10 minutes. And I have no idea what the game did for all that time; maybe it calculated prime numbers or did some crowdsourced computing for the public good? If such heavy computing is involved, why isn’t it pre-computed on the game backend?

As a player I would like to see at most one wait, with a clear progress bar. It should take a minute or two at most on a fast network, and have an uplifting message written for humans. I’m not a native English speaker, but something along the lines of

We are loading some high quality content for you and it will take a while, but it will be awesome!

Alternatively, there could be no waiting at all in the first-time experience, with downloads continuing in the background or as the player advances in the game.


In the end my 20 minutes were spent and I did not get to play for a second. The game studios did not get a dime from me, and my time was wasted. Lose-lose.

I use VS Team Services for some of my repositories. I use the Git repository type, and most of the time everything works fine. That was until today, when I reorganized some of my Go code to be more idiomatic, meaning that I leverage the native Go package system as it is designed: the only way to reference packages is by their full repository URL:

package main

import (
    …
)

and you can also fetch packages with the go get command, like:

go get

I can (barely) live with the fact that VS Team Services adds the unnecessarily ugly “DefaultCollection” and “_git” into the URL. But the real problem is that the above doesn’t work with the go command; you just get a very misleading message:

package unrecognized import path “”

Adding the verbose flag (-v) to the command gives one extra tidbit of information:

Parsing meta tags from (status code 203) import “”: parsing http: read on closed response body

My first guess was an authentication issue, and making a curl request to the address supported my guess, as the response was an authentication redirect. But Go uses Git internally, and git clone worked. I couldn’t find this issue reported anywhere for VS Team Services (maybe gophers don’t use it?), but I found the same issue for GitLab. It turned out that neither GitLab nor VS Team Services automatically adds the .git suffix to the URL, and therefore the go get command never reaches the actual repository. This has since been fixed in GitLab, but the issue remains in VS Team Services. The fix is to make the URL even uglier:

package main

import (
    …
)

and the corresponding go get command:

go get
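Spelled out with made-up names (the account, project, and repository here are hypothetical; substitute your own), the fixed command would look something like this:

```shell
# Hypothetical account/project/repo names - substitute your own.
# The trailing .git suffix is what lets go get reach the repository:
go get myaccount.visualstudio.com/DefaultCollection/MyProject/_git/myrepo.git
```

The same .git-suffixed path then has to be used in the import statements, since go get derives the download URL directly from the import path.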

I hope this helps someone else fix the problem quicker.

One of Basware’s products I work on uses a CDN to deliver content to end users. The CDN provider is Edgecast, and the primary & secondary origins are Azure Blob storage accounts. So far we have not needed any cross-domain access to the CDN, but now a new feature required JavaScript requests from our application domain to the CDN domain… and browsers naturally block this nowadays.

I knew right away that I needed to set the Cross-Origin Resource Sharing (CORS) headers on our origin servers, but setting this up was harder than it should be: Azure’s PowerShell SDK has no support for altering this value, and there is no UI for it in the management portal. There is of course the management REST API you can use to do anything, but calling it with curl is hard due to the authentication scheme. Setting the DefaultServiceVersion property proved just as complex before, so I knew what to expect.

I checked GitHub and there were a couple of projects that matched my problem. Still, I found none of them immediately useful; a tool like this, which you use only once, should have a very low barrier to entry: git clone and run. So I decided to create one myself. With some help from blog posts like Bill Wilder’s post on the subject, I was able to create a working version in an hour. My tech stack for this is ScriptCS, as it supports NuGet references out of the box. I referenced the WindowsAzure.Storage package, which had the methods I needed.

The end result is a tool that (given you have ScriptCS installed) you can just clone and run - ScriptCS takes care of the package restoration automatically. The tool supports dumping the current values to the console, adding CORS rules, and clearing rules. And the syntax is easy enough for anyone to use:

scriptcs addCors.csx -- [storageAccount] [storageAccountKey] [origins]

ScriptCS also runs on Mono, so you could even call this cross-platform. Not as good as a Node or Go based solution would have been, but still good enough.

Naming is the hardest part… this tool turned out to be just “AzureCorsSetter”.

At the beginning of October, Microsoft Finland held its yearly developer conference, this time under the name DevDays. This year’s conference felt slightly smaller than before.

As there is a lot of churn around ASP.NET right now and I have a history with the framework, I proposed a presentation about ASP.NET vNext. Gladly it got accepted, and I had to dig deeper into what’s coming from the ASP.NET team. I played with the framework, and watched every video and read every blog post about it from Scott Hanselman, David Fowler and others. I also prepared some demos, including a Linux demo that I had to scrap at the last minute because I had only 45 minutes to present. I tried to give the audience some guidelines on how to prepare for what’s coming, so that the upgrade from the current ASP.NET would be as easy as possible. It was nice to prepare and present; I hope it helped someone.

P.S. I waited for Microsoft to release the video recordings before posting this, but two months later there are still only a couple of videos available, and they are on a strangely named YouTube channel, different from previous years. I do not know what happened, as I have not seen any communication from Microsoft about the recordings, and I have yet to receive an answer to my question. So I have to say this aspect of the conference was poorly executed this year. Also, there was no common feedback collection, which means that presenters did not get any proper feedback. For me it is important to see the video and get some feedback in order to do better next year.