Building, deploying and testing an ASP.NET Core application in a Docker container on Linux using the TFS2015 task-based build system

As you may have noticed, Microsoft has been following a new strategy lately when it comes to Open Source Software and cross-platform (or as the cool kids say: “xplat”) software development. One of the results of this new strategy is a new version of ASP.NET called ASP.NET Core. This version of ASP.NET is not just the successor of the ASP.NET platform we all know and love (or hate), which only runs on Windows. No, ASP.NET Core has been built from scratch and is based on .NET Core. This means that it runs on Windows, as well as Linux and OS X, using the new .NET Execution Environment (or DNX). See https://docs.asp.net/en/latest/ for more information.

In this blogpost I will show you how you can use the new Microsoft task-based build system in Visual Studio Team Services (or TFS 2015 on-premise) to build and test an ASP.NET Core application that runs on DNX inside a Docker container on a Linux machine.

Note: DNX is about to be retired and replaced with the .NET command-line interface, or .NET CLI for short (see https://github.com/dotnet/cli). But for now, ASP.NET Core still runs on DNX.

The sample application

I’ve created a simple WebAPI-based microservice for handling user profiles. The service is read-only (get all profiles and get by id) and just returns some static data. I created the application by starting with the standard ASP.NET WebAPI template offered by the Yeoman scaffolding framework. I started Yeoman using the ASP.NET generator (yo aspnet) and selected Web API Application:

It asked me for a name for the project and cranked out a complete ASP.NET Core project structure including a sample API controller. After that I fired up Visual Studio Code and added a simple class to hold some user-profile data:

namespace ProfileService.Models
{
    public class Profile
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Username { get; set; }
        public string GravatarUrl { get; set; }
    }
}

I removed the sample ValuesController generated by Yeoman from the Controllers folder, created a new ProfilesController using Yeoman (yo aspnet:WebApiController ProfilesController) and added a rudimentary implementation that returns some static user-profile data:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNet.Mvc;
using ProfileService.Models;

namespace ProfileService.Controllers
{
    [Route("api/[controller]")]
    public class ProfilesController : Controller
    {
        // GET: api/profiles
        [HttpGet]
        public Profile[] Get()
        {
            List<Profile> profiles = new List<Profile>();
            profiles.Add(new Profile
            {
               Id = 1,
               Name = "Edwin van Wijk",
               Username = "edwinw@infosupport.com",
               GravatarUrl = "https://nl.gravatar.com/edwinvanwijk"
            });

            profiles.Add(new Profile
            {
               Id = 2,
               Name = "John Doe",
               Username = "john@doe.com",
               GravatarUrl = null
            });

            profiles.Add(new Profile
            {
               Id = 3,
               Name = "Jane Doe",
               Username = "jane@doe.com",
               GravatarUrl = null
            });

            return profiles.ToArray();
        }

        // GET api/profiles/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            Profile profile = Get()
                .Where(p => p.Id == id)
                .FirstOrDefault();

            if (profile == null)
            {
                return HttpNotFound();
            }

            return new ObjectResult(profile);
        }
    }
}

That’s all I need for now.

Taking the application for a first test-drive

In order to make sure that the application runs on Windows as well as Linux and OS X, I changed the project.json file: I removed all frameworks except dnxcore50 and saved the file.

After that I went back to the command-line and made sure I was in the project folder. I used the dnvm tool to make sure the CoreCLR runtime was selected (dnvm use 1.0.0-rc1-final -r coreclr) and used the dnu restore command to download all the necessary dependencies. And now the moment of truth: turning the key. By entering the command dnx web, the Kestrel webserver was started and the API was up and running on port 5000:
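
For reference, this is roughly the sequence of commands I used (assuming the RC1 version of the DNX tooling is installed and you are in the project folder):

# select the CoreCLR flavor of the RC1 runtime
dnvm use 1.0.0-rc1-final -r coreclr

# download all dependencies listed in project.json
dnu restore

# start the Kestrel webserver using the "web" command from project.json
dnx web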

Cross-over to the dark side

Now that I have my ASP.NET Core application running on Windows, I would like to also run it on Linux. So I created an Ubuntu Linux VM in Azure and installed .NET Core as described in http://docs.asp.net/en/latest/getting-started/installing-on-linux.html. After that I installed Git and cloned the Git repo that holds the sample WebAPI application. After a dnu restore to download all dependencies, I started the application with dnx web and the output told me the application was up and running:

To test the API I started a second session on the Linux host and used curl to successfully call the API:
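
Summarized, the steps on the Linux box looked roughly like this (the repository URL is just a placeholder for the Git repo that holds the sample application):

# clone the repo that holds the sample application (placeholder URL)
git clone https://your-git-host/profileservice.git
cd profileservice/ProfileService

# restore dependencies and start the application
dnu restore
dnx web

# from a second session on the same host, call the API
curl http://localhost:5000/api/profiles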

Adding unit tests

Now that I had my application up and running, I wanted to add unit tests before adding new functionality. I wanted to use xUnit as my unit test framework, and to my delight I found that both xUnit and its test runner are already available for the CoreCLR. To use xUnit, I added the necessary libraries as dependencies in my project.json and did a dnu restore:

"frameworks": {
    "dnxcore50": {
        "dependencies": {
            "xunit": "2.1.0-*",
            "xunit.runner.dnx": "2.1.0-*"
         }
    }
}

After that I added a new class to my project and created a simple unit test:

using Microsoft.AspNet.Mvc;
using ProfileService.Controllers;
using ProfileService.Models;
using Xunit;

namespace ProfileService.UnitTests
{
    public class ProfilesControllerTests
    {
        [Fact]
        public void Test1()
        {
            // arrange
            var sut = new ProfilesController();

            // act
            Profile actual = ((ObjectResult)sut.Get(2)).Value as Profile;

            // assert
            Assert.Equal("John Doe", actual.Name);
        }
    }
}

To run the unit tests, I added a test command to my project.json that starts the xUnit test runner:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "test": "xunit.runner.dnx"
}

Now I’m able to run the unit tests in the project by running dnx test on the command-line:

Deploying the app in a container

Now that I was able to run and test the application on Linux, I wanted to be able to deploy my application in a Docker container. For this I altered the Dockerfile that was generated by the Yeoman ASP.NET generator to make it look like this:

FROM microsoft/aspnet:1.0.0-rc1-update1-coreclr

COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]

EXPOSE 5000/tcp
ENTRYPOINT ["dnx"]

An interesting thing to mention is that the Dockerfile is based on an ASP.NET image from a Microsoft repo (specified using FROM). So Docker images are available that have the ASP.NET Core stuff already installed. Nice! The other commands in the Dockerfile subsequently execute the following steps:

  • copy all files in the project folder to the /app folder in the Docker container,
  • set the working folder (default folder after the container is started) to /app,
  • run the command dnu restore to get all the dependencies the application needs,
  • expose port 5000 for communication with the Docker container,
  • specify that when the container starts, dnx is started.

By specifying dnx without arguments as the entry-point in the Dockerfile, I will be able to specify which command I want to run when the container starts (web or test).

To test the Dockerfile, I used the docker build command to build an image from it:

sudo docker build -t edwinw/profileservice:test .

This command creates a new Docker image by executing all commands in the Dockerfile in the current working folder and tags the created image with a name and a tag (name:tag), in this case edwinw/profileservice:test. Everything worked like a charm and I ended up with a new image:
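
You can verify the result by listing the images for the repository:

# list all images (and tags) for the edwinw/profileservice repository
sudo docker images edwinw/profileservice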

Note: dnu restore can take a long time or even time-out when executed on a Linux box for the first time. This is mainly because of the amount of packages that need to be downloaded the first time. To prevent this, you could run a new container based on your Docker image in interactive mode, do a dnu restore (make sure it completes successfully) and commit this container to a new image (using docker commit). You can then use this image as your base image (by specifying it in your Dockerfile using the FROM statement). This way, the packages are already present in the image and running dnu restore only takes seconds.
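
A rough sketch of that workflow (the container and image names are just examples) could look like this:

# start an interactive container based on the image, overriding the dnx entry-point with a shell
sudo docker run -it --name restore-temp --entrypoint /bin/bash edwinw/profileservice:test

# inside the container: restore all packages (this is the slow part), then exit
dnu restore
exit

# back on the host: commit the stopped container to a new image that already contains the packages
sudo docker commit restore-temp edwinw/profileservice-base

# reference edwinw/profileservice-base in the FROM statement of the Dockerfile from now on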

So my Dockerfile was working. Now I wanted to be able to start the container and run my application. So I issued the following command:

sudo docker run -p 5000:5000 edwinw/profileservice:test web

The -p flag specifies which TCP port the Docker container publishes to the host. Because I changed the entry-point in the Dockerfile, I’m able to specify the dnx command (web) on the command-line. After a little while, the container was started and I saw the logging indicating that the web-server was running on http://localhost:5000. Awesome, I’m running my ASP.NET Core WebAPI inside a Docker container! Time to test the API. I used curl to hit the WebAPI with a request:

curl http://localhost:5000/api/profiles

While I expected some JSON containing Profile info to appear, curl told me it received an empty result from the service. Bummer, what’s wrong? After some Google queries, I found out that you need to bind the Kestrel webserver inside a Docker container to all network interfaces in order to make it accessible from outside the container. So I added the --server.urls argument to the web command in the project.json file to make this happen:

"commands": {
    "web": "Microsoft.AspNet.Server.Kestrel --server.urls http://*:5000",
    "test": "xunit.runner.dnx"
}

After rebuilding and starting the container, I could access the API using curl and received the Profile data.

But what about running my unit tests? I started another container using the following command:

sudo docker run --rm edwinw/profileservice:test test

This also worked like a charm and the output of the test runner was printed to the console. The --rm flag causes the container to be deleted automatically after it stops (which is fine for test containers).

Now I needed a convenient way to build the Docker container, start it and run the tests inside the container. Also, this should be easy to use from an automated build. So I created a bash script called Test.bash in the project folder:

#!/bin/bash

IMGNAME=edwinw/profileservice

# build docker image
sudo docker build -t $IMGNAME:$BUILD_BUILDNUMBER .

# test docker image
sudo docker run --rm $IMGNAME:$BUILD_BUILDNUMBER test

In this script I’ve chosen to use the environment variable BUILD_BUILDNUMBER (which is automatically filled by the xplat build agent when it executes a build) to create a unique image tag with every build.
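
Because the script relies on that environment variable, you can also try it outside of a build by supplying a value yourself, for example:

# simulate the build number that the build agent would normally provide
BUILD_BUILDNUMBER=local-test bash Test.bash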

Finally I pushed the Test.bash script to the Git repo so it is available to be used from the build.

Creating the build

Now that I had my app up and running in a Docker container, I wanted to be able to automatically build my Docker container, deploy my app to it and run my unit tests from a VSTS build. First of all I needed to run an xplat build agent on my Linux box. I created a new agent queue called Linux in my Visual Studio Team Services account and installed, configured and started the xplat build agent on the Linux machine (as described in https://github.com/Microsoft/vso-agent/blob/master/docs/vsts.md). I had already created alternate credentials for my edwinvw account, so I could use these for logging into VSTS and running the build agent:

After starting the Agent, it automatically became visible in VSTS:

Now that my build agent was up and running, I created a new build definition. First I added a Shell Script task. In this task I selected the Test.bash shell script that I created and pushed to the Git repo earlier. Because the project is not directly in the root of the Git repo but in a subdirectory called ProfileService, I also specified this as the working directory to make sure the bash script is executed in the project folder:

Time for a test run. I queued a new build and watched the logging appear in the output window in the VSTS web-interface (I still get a warm feeling inside every time I do that … sure, I’m old skool). I saw that after the Docker container was created and started, the test ran successfully:

So far so good. I felt pretty good about how fast I had this stuff up and running. But something was missing: the results of the unit test run were nowhere to be seen on the summary page of the build:

As it turns out, the xUnit runner has a command-line flag -xml that can be used to specify a filename to which the runner will dump the test results (in an xUnit-specific format). And I knew that the collection of standard build tasks in VSTS contains a Publish Test Results task that supports xUnit test results. Easy … but wait a minute: the tests are executed inside a Docker container to which I have no direct filesystem access from the build agent. So how can I get my hands on the test results file that’s created inside the Docker container?

Publishing the test results

Fortunately Docker supports mounting a folder on the Docker host machine to a folder inside the Docker container (using the -v (volume) flag). Every file that is created in this folder inside the Docker container bypasses the Docker filesystem and is put directly into the corresponding folder on the Docker host.

So I changed the Test.bash script so that, when executing the docker run command, the /tmp/TestResults folder on the Docker host is mounted as the /TestResults folder inside the Docker container using the following additional argument:

-v /tmp/TestResults:/TestResults

Now I needed to make the xUnit runner place the test results file in the /TestResults folder in the Docker container, so that the build agent can pick up this file from the /tmp/TestResults folder on the Docker host. To make this happen, all I had to do was add the -xml flag to the command-line of the test runner and specify a filename in the /TestResults folder:

-xml /TestResults/TestResults-$BUILD_BUILDNUMBER.xml

To make sure the build agent can pick up the test results file, I also added a command to the script that sets read permissions for everyone on the test results file. So the final Test.bash script looks like this:

#!/bin/bash

IMGNAME=edwinw/profileservice

# build docker image
sudo docker build -t $IMGNAME:$BUILD_BUILDNUMBER .

# create testresults folder
mkdir -p /tmp/TestResults

# test docker image
sudo docker run --rm --env ASPNET_ENV=Test -v /tmp/TestResults:/TestResults $IMGNAME:$BUILD_BUILDNUMBER test -xml /TestResults/TestResults-$BUILD_BUILDNUMBER.xml 

# set rights of testresults file (for publication)
sudo chmod 644 /tmp/TestResults/TestResults-$BUILD_BUILDNUMBER.xml

After starting another build, I found a TestResults-46.xml file in the /tmp/TestResults folder. To make sure these results would automatically be picked up and published during the build, I added a Publish Test Results task to the build definition and pointed it to the test results file in the /tmp/TestResults folder:

Notice that I use the standard build variable $(Build.BuildNumber) to specify the specific test results file for the running build.
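
So the file pattern entered in the Publish Test Results task looks something like this:

/tmp/TestResults/TestResults-$(Build.BuildNumber).xml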

After running another build, the test results were visible on the build summary page:

Wrapping it up

As stated before, I was pretty pleased with how easy it was to set this stuff up and how flexible the new task-based build system in VSTS / TFS has turned out to be. The possibility to run xplat builds on Linux or OS X and to integrate Docker into the build pipeline is also very powerful.

So, a call to action to all you Microsofties out there: join the dark side and start experimenting with this stuff. And once Windows containers are ‘out’, we can also start using Docker containers for deploying .NET applications that run on the full .NET stack.

I hope you find this post useful. Because the ecosystem surrounding xplat builds and ASP.NET Core is evolving rapidly (e.g. the change from DNX to .NET CLI), I’m not sure how long the links I used in this post will stay up-to-date. Please leave a comment if you encounter a broken link.