Today I had some 20 minutes of spare time, and I wanted to try the games my co-workers had recommended to me.


First off was Hearthstone: Heroes of Warcraft. I had opened the game the day before and was in the middle of the tutorial story. Opening the game again got me slowly to the same tutorial spot… except when I clicked “Play” I got this:

Hearthstone maintenance break.
Smells like an American engineer.

A three-hour maintenance break is looong, but I can live with that; there might be a valid reason (like an earthquake or a tsunami or a blown-up data center). Instead, what irritated me was that

  • the game did not tell me from the get-go that the service was closed; it made me wait an extra 30 s.
  • the times are not in my local format; it should not come as a surprise to anyone that AM and PM are not universal notations.
  • I’m required to know both the current time and my time difference to PST.

The last one is the worst: it puts extra cognitive load on the player. Personally, I never figured out when the maintenance break would end - I would have needed to open a timezone application or google timezones to figure that out.

Instead of the above message I would like to see the maintenance break expressed as a duration, e.g.

We are back online in 1 hour.

Vainglory

As I never got to play Hearthstone, I took the next game in the queue: Vainglory. This game I had installed but never opened. The first user experience I got when opening it was this:

Vainglory downloading an asset file.
Some game jargon for you.

First of all: why does the load take 15 minutes when I’m on a 100 Mb/s network? Very few people are willing to wait this long for a game session. Normally 500 MB of content downloads fast, but this game is different.

Secondly, what are the random numbers in the lower right of the modal window? No units, no formatting, just random digits. There is already a very clear percentage and a progress bar; why add extra clutter?
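If those numbers are byte counts, formatting them properly is a few lines of code; a quick Go sketch of the idea (the function name and sample value are my own):

```go
package main

import "fmt"

// humanBytes turns a raw byte count into a short, unit-suffixed
// string, so a progress modal can show "452.3 MB" instead of a
// bare unformatted number.
func humanBytes(n float64) string {
	units := []string{"B", "KB", "MB", "GB"}
	i := 0
	for n >= 1000 && i < len(units)-1 {
		n /= 1000
		i++
	}
	return fmt.Sprintf("%.1f %s", n, units[i])
}

func main() {
	fmt.Println(humanBytes(452_300_000))
}
```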

And last: does “Asset File” mean something to normal people? In my opinion that is pure game-development jargon and should not be shown to players. Most people don’t even know that there are real files loaded in the background.

After all the waiting - just when I thought I would get to play - the game disappointed me with a second obligatory wait! And of course with another completely meaningless message: “Installing Data”:

Vainglory installing data.
Installing what?

I have the most powerful Android device ever made - the Nexus 6P - and this phase still took 10 minutes. And I have no idea what the game did all that time; maybe it calculated prime numbers or did some crowdsourced computing for the public good? If such heavy computing is involved, why isn’t it pre-computed on the game backend?

As a player I would like to see a maximum of one wait with a clear progress bar. It should take a minute or two at most on a fast network, and have an uplifting message made for humans. I’m not a native English speaker, but something along the lines of

We are loading some high quality content for you and it will take a while, but it will be awesome!

Alternatively, there could be no waiting at all in the first-time experience, and downloads could continue in the background or as the player advances in the game.


In the end my 20 minutes were spent and I did not get to play for a second. The game studios did not get a dime from me, and my time was wasted. Lose-lose.

I use VS Team Services for some of my repositories. I use the Git repository type, and most of the time everything works fine. That is, until today, when I reorganized some of my Go code to be more idiomatic, meaning that I leverage the native Go package system as it is designed: the only way to reference packages is by their full repository URL:

package main

import (

and you can also fetch packages with the go get command, like:

go get

I can (barely) live with the fact that VS Team Services adds the unnecessarily ugly “DefaultCollection” and “_git” into the URL. But the real problem is that the above doesn’t work with the go command; you just get a very misleading message:

package unrecognized import path “”

Adding the verbose flag (-v) to the command gives one extra tidbit of information:

Parsing meta tags from (status code 203) import “”: parsing http: read on closed response body

My first guess was an authentication issue, and making a curl request to the address supported my guess, as the response was an authentication redirect. But Go uses Git internally, and git clone worked. I couldn’t find this issue anywhere in relation to VS Team Services (maybe gophers don’t use it?), but I found the same issue reported for GitLab. It turned out that neither GitLab nor VS Team Services automatically adds the .git ending to the URL, and therefore the go get command never reaches the actual repository. This has since been fixed in GitLab, but the issue remains in VS Team Services. The fix is to make the URL even uglier:

package main

import (

and the corresponding go get command:

go get
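In other words, the workaround is simply appending the .git suffix to the import path yourself. A tiny Go illustration of the rule (the helper name is mine, and the path is a placeholder in the VS Team Services shape):

```go
package main

import (
	"fmt"
	"strings"
)

// withGitSuffix appends ".git" to an import path when it is
// missing - the manual workaround for hosts that do not
// advertise the suffix to go get.
func withGitSuffix(importPath string) string {
	if strings.HasSuffix(importPath, ".git") {
		return importPath
	}
	return importPath + ".git"
}

func main() {
	// Placeholder account/project/repository names.
	fmt.Println(withGitSuffix("example.visualstudio.com/DefaultCollection/proj/_git/repo"))
}
```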

I hope this helps someone else fix the problem quicker.

One of Basware’s products I work on uses a CDN to deliver content to end users. The CDN provider is EdgeCast, and the primary and secondary origins are Azure Blob storage accounts. So far we have not needed any cross-domain access to the CDN, but now a new feature required JavaScript requests from our application domain to the CDN domain… and browsers naturally block this nowadays.

I knew right away that I needed to set the Cross-Origin Resource Sharing (CORS) headers on our origin servers, but setting this up was harder than it should be: Azure’s PowerShell SDK has no support for altering this value, and there is no UI for it in the management portal. There is of course the management REST API you can use to do anything, but calling it with curl is hard due to the authentication scheme. Setting the DefaultServiceVersion property had proved just as complex before, so I knew what to expect.
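For context, what actually gets set is a CORS rule in the storage account’s blob service properties; if I remember the schema correctly, it is a small XML fragment along these lines (the origin here is a placeholder):

```xml
<Cors>
  <CorsRule>
    <AllowedOrigins>https://app.example.com</AllowedOrigins>
    <AllowedMethods>GET,HEAD</AllowedMethods>
    <AllowedHeaders></AllowedHeaders>
    <ExposedHeaders></ExposedHeaders>
    <MaxAgeInSeconds>3600</MaxAgeInSeconds>
  </CorsRule>
</Cors>
```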

I checked GitHub, and there were a couple of projects that matched my problem. Still, I found none of them immediately useful; this kind of tool that you use only once should have a very low barrier of entry: git clone and run. So I decided to try to create one myself. With some help from blog posts like Bill Wilder’s post on the subject, I was able to create a working version in an hour. My tech stack for this is ScriptCS, as it supports NuGet references out of the box. I referenced the WindowsAzure.Storage package, which had the methods I needed.

The end result is a tool that (given you have ScriptCS installed) you can just clone and run - ScriptCS takes care of package restoration automatically. The tool supports dumping the current values to the console, adding CORS rules, and clearing rules. And the syntax is easy enough for anyone to use:

scriptcs addCors.csx -- [storageAccount] [storageAccountKey] [origins]

ScriptCS also runs on Mono, so you could even say this is cross-platform. Not as good as a Node or Go based solution would have been, but still good enough.

Naming is the hardest part… this tool turned out to be just “AzureCorsSetter”.

At the beginning of October, Microsoft Finland held its yearly developer conference, this time under the name DevDays. This year’s conference felt slightly smaller than in previous years.

As there is lots of churn around ASP.NET right now and I have a history with that framework, I proposed a presentation about ASP.NET vNext. Gladly it got accepted, and I had to dig deeper into what’s coming from the ASP.NET team. I played with the framework, and watched every video and read every blog post about it from Scott Hanselman, David Fowler and others. I also prepared some demos, even a Linux demo, which I had to scrap at the last minute because I had only 45 minutes to present. I tried to give the audience some guidelines on how they can prepare for what’s coming, so that the upgrade from the current ASP.NET will be as easy as possible. It was nice to prepare and present; I hope it helped someone.

P.s. I waited for Microsoft to release the video recordings before posting this, but even after two months there are only a couple of videos available, and they are on a strangely named YouTube channel, different from previous years. I do not know what happened, as I have not seen any communication from MS about the recordings, and I have not yet received an answer to my question. So I have to say this aspect of the conference was poorly executed this year. Also, there was no common feedback collection, which means that presenters did not get any proper feedback. For me it is important to see the video and get some feedback to be able to do better next year.

I changed some of my websites’ deployment to use different deployment slots on a single Azure website instead of having separate websites for different staging areas. I deploy all my staging areas automatically from TFS (using the GitContinuousDeploymentTemplate.12.xaml process), each area from a different Git branch. Works for my setup.

What did not work was deploying to slots other than the main slot. In the Azure portal the different slots have a name and address scheme like mywebsite-slotname. I tried to use this name as the deployment target:

Failing configuration.

…and got a failed build with an error like:

An attempted http request against URI returned an error: (404) Not Found.

So clearly mywebsite-slotname is not the correct scheme. And there is no documentation available; hence this blog post.

I went on and downloaded the publishing profile for the site slot. It had a double-underscore name, mywebsite__slotname, but that did not work either. Nor did a single underscore. What finally worked was the name the old Azure portal used: mywebsite(slotname). This is how my build process deployment target looks now, and deployment to the slot works.

Working configuration.

I hope this gets documented better. Luckily one can create pull requests for the Azure documentation nowadays; I might document this myself.