Magnus Stråle

.NET Framework to .NET Core to Docker

This is a "lessons learned" from taking an existing .NET Framework-based REST-ful service and getting it to run under .NET Core in a Docker container.

I was fortunate enough to start out with a service built TDD style, meaning that there was good test coverage and an overall nicely designed application with appropriate abstractions. This meant I felt fairly confident that if I could just get all tests green, I had a working solution.

The service is based on Nancy ("Lightweight, low-ceremony, framework for building HTTP based services on .Net") and uses TopShelf ("The simplest ways to get started with winservice development") to run the code as a Windows service. Both are ageing projects and TopShelf is simply not applicable on Linux. The decision was made to continue with Nancy and host it under Kestrel. Even though there has been no official Nancy release that supports .NET Core, there is a pre-release that supports .NET Standard 2.0.

.NET Standard vs .NET Core vs "old" .NET Framework is something that can easily confuse you. I won't try to explain what is what here, but refer you to this article instead https://www.infoq.com/news/2017/10/dotnet-core-standard-difference which does a nice job of explaining these concepts.

If you want to get your code running on Linux / Docker the basic principle is for your projects to target .NET Standard whenever possible and .NET Core for those cases where .NET Standard is not sufficient. More on this later on.


Getting started

Since the project file format has been significantly changed for .NET Core, primarily describing the exceptions from convention rather than detailing every little thing that should be included in your project, it made sense to simply throw away all existing csproj files and start from scratch with this template:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
    </PropertyGroup>
</Project>
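
The executable host project is the exception - it cannot stay on .NET Standard since it has to run on a concrete runtime. A minimal sketch of what that template looks like instead (this matches the netcoreapp2.0 target used when publishing the AppHost project later in this article):

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
    </PropertyGroup>
</Project>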

Then you keep adding NuGet and project references (see the sketch below) until you run out of "[CS0246] The type or namespace name 'Xxx' could not be found (are you missing a using directive or an assembly reference?)" errors. Most packages that I needed were available in .NET Standard versions, so it was a fairly painless process (at least compared to what I expected). The two major issues were embedded resources and app settings. Neither of these is really well implemented in old-fashioned .NET Framework, and they have either been significantly reworked (in the case of configuration settings) or simply left unsupported/removed (in the case of Resx files).
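
Adding those references is just a matter of hand-editing ItemGroup entries in the csproj. A sketch of what that looks like (the package and project names below are illustrative, not the service's actual dependencies):

<ItemGroup>
    <!-- NuGet dependency, resolved against its .NET Standard build -->
    <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
    <!-- Reference to another project in the solution -->
    <ProjectReference Include="..\DomainModel\DomainModel.csproj" />
</ItemGroup>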

Anyway, with the csproj files converted and some code commented out, the whole thing compiled. The really nice part was that the csproj files had shrunk significantly in size. The biggest file, at 3756 lines, had been reduced to a tiny 38 lines. The main reason for this is conventions and proper wildcard support - this is what killed off 3000+ lines:

<ItemGroup>
  <None Update="TC\TC*\**">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
  <None Update="TCData\**">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>

This also makes it possible to work with the csproj files as plain text / XML in the editor, rather than just letting them be implicitly updated as a result of interactions in the IDE.

Resources and Resx files

Two of the projects contained embedded resources. This is simply a way to put things that are not code into the assemblies produced by the compiler, such as files, localized text strings, icons etc. In .NET Framework this is handled by the IDE, which creates a .resx file describing the resources and auto-generates a Resources class that gives you fully typed access to them. This process is mostly magic.

When converting to .NET Core, the Resx file is still there and JetBrains Rider (which I use instead of Visual Studio, for the simple reason that it is included in the IntelliJ license that I have) almost makes it work. However, this process pulls in old assemblies for reading and parsing the Resx file in order to figure out how to parse the other embedded resources. After spending a few hours trying to make the service get access to the embedded files (got it working) while still having a nice editing experience in the IDE (got it working, but broke the code), I gave up on Resx and decided to access the embedded resources directly from the assembly manifest instead. It turned out to be dead easy, partly because the only type of resource that I needed was files. This is the code I needed, instead of struggling with backwards compatibility:

public static class Resources 
{
    private static readonly string _prefix = typeof(Resources).FullName + ".";
    private static readonly Assembly _assembly = typeof(Resources).Assembly;
    public static Stream GetStream(string name)
    {
        return _assembly.GetManifestResourceStream(_prefix + name);
    }
}
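
With that helper in place, reading an embedded file is a one-liner. A usage sketch (the file name is made up for illustration; the helper prepends the namespace-based prefix before looking the resource up in the assembly manifest):

using (var stream = Resources.GetStream("ReportTemplate.xml"))
using (var reader = new StreamReader(stream))
{
    // The embedded file is read just like any other stream
    var contents = reader.ReadToEnd();
}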

Configuration settings

In classic .NET Framework you usually have your app.config or web.config with all the configuration settings for your application. Any custom settings can be put under an appSettings section, or you can write your own fully typed config section handler. Under .NET Core the "official" way of doing it is a lot more customizable: you can choose where to pull the configuration settings from (config file, environment variables, command line parameters etc). However, I just wanted to get the thing working with as little work as possible. Looking at the NuGet feeds, there is actually a System.Configuration package that supports .NET Standard 2.0, which wasn't the case originally.

There were some issues with how Owin / Kestrel was configured - it looked like using the new configuration system made for a cleaner interface to Kestrel. Also, considering my recent experience with resources, I decided to switch to the new configuration system. It was almost as easy as the resources bit.

public static IWebHost BuildWebHost(string[] args)
{
    var builder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddCommandLine(args)
        .AddEnvironmentVariables();
    var configuration = builder.Build();
    return WebHost.CreateDefaultBuilder(args)
        .UseConfiguration(configuration)
        .UseUrls($"http://*:{configuration["app:serverPort"]}")
        .UseStartup<Startup>()
        .Build();
}

The above is actually a sneak peek at the Kestrel configuration in addition to the configuration handling. It goes together with this appsettings.json file:

{
  "app": {
    "ApiUrl": "http://127.0.0.1:10010",
    "serverPort": "8096",
    "ProgressEventBusHost": "192.168.32.128",
    "ProgressEventExchangeName": "dev.global.exchange",
    "MaxParallellism": "1",
    "QueueCapacity": "1000",
    "HangSafeGuardTimeSeconds": "300"
  }
}


Hosting in Kestrel and OWIN

Well - you've already seen parts of it in the previous code snippet. As previously mentioned, there is a pre-release version of Nancy that supports .NET Standard. It has the ominous name Nancy 2.0.0-clinteastwood. Most of the old configuration code for Nancy (as per the CustomBootstrapper referenced below) worked untouched. The one thing that was really needed was the Startup class referred to in the previous BuildWebHost method:

class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        Console.WriteLine(env.ContentRootPath);
        Console.WriteLine(Directory.GetCurrentDirectory());

        var appSettings = new AppSettingsProvider(_configuration);
        app.UseOwin(x => x.UseNancy(b =>
        {
            b.Bootstrapper = new CustomBootstrapper(appSettings);
        }));
    }
}
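
The AppSettingsProvider used above is not shown here, but it is essentially just a typed wrapper around IConfiguration. A sketch of what such a class could look like, assuming it exposes the keys from the "app" section of the appsettings.json shown earlier (the real class in the project may well look different):

// Requires Microsoft.Extensions.Configuration
public class AppSettingsProvider
{
    private readonly IConfiguration _configuration;

    public AppSettingsProvider(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    // Typed accessors over the hierarchical "app" section
    public string ApiUrl => _configuration["app:ApiUrl"];
    public int ServerPort => int.Parse(_configuration["app:serverPort"]);
    public string ProgressEventBusHost => _configuration["app:ProgressEventBusHost"];
    public int MaxParallellism => int.Parse(_configuration["app:MaxParallellism"]);
}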

Dotnet from the command line

Before putting stuff on Docker I made sure that I had the builds and tests working from the Windows command line with the dotnet command. Up until then I had been running everything from within Rider. After putting this little script together I had it all compiling and all tests green:

dotnet build src/App.sln -c Release
forfiles /p .\src /m *Tests /c "cmd /c dotnet test --no-build -c Release @path"
dotnet publish -c Release -o out -f netcoreapp2.0 src/AppHost/AppHost.csproj

Now for the ultimate test - start the service and see that it responds as expected. Execute the following from the out folder that the solution has been published to:

dotnet AppHost.dll

No luck - "A fatal error was encountered. The library 'hostpolicy.dll' required to execute the application was not found in '…AppHost\out\'.
Failed to run as a self-contained app. If this should be a framework-dependent app, add the …AppHost\out\AppHost.runtimeconfig.json file specifying the appropriate framework." At least it's a very good error message, one AppHost.runtimeconfig.json coming up!

{
  "runtimeOptions": {
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.0.0"
    }
  }
}

Yes! It runs! Pointing a browser to http://localhost:9922 does indeed produce the expected response.

Putting it in Docker

Since I had done all the hard work I expected this to be a walk in the park. Well, not quite… It started out well enough; using my superior Google skills I found something that looked like a good starting point: https://docs.docker.com/engine/examples/dotnetcore/
I took the Dockerfile from that article and tweaked it slightly to make the service build and run the tests in the container:

FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app/src
COPY . .
RUN dotnet restore
RUN dotnet test -c Release DomainModel.Tests/DomainModel.Tests.csproj
RUN dotnet test -c Release App.Services.Tests/App.Services.Tests.csproj
RUN dotnet test -c Release App.Tests/App.Tests.csproj

The domain model worked like a charm, all green, but the other two projects started throwing massive lists of test failures at me. Turns out that (almost) all problems had a common denominator...

Slash vs backslash

I started to write about PC-DOS and the historical background just to show off my age and experience but decided not to. Read this little piece instead if you're interested in the background https://www.howtogeek.com/181774/why-windows-uses-backslashes-and-everything-else-uses-forward-slashes/

The service makes fairly extensive use of files, persisting data as files serialized with protobuf. This means that the use of backslash as folder separators is present in many parts of the code. As I've already hinted at, this immediately broke almost all tests related to files when running under Docker. Fixing this was easy since Windows does in fact accept the forward slash as folder separator.

When writing .NET code for .NET Standard / .NET Core, always use '/' as the folder separator. Even better - use Path.Combine to join folder names, since Path.Combine will use the correct separator depending on your OS. The final part is to not use absolute paths (like C:\temp - even written as C:/temp it makes no sense on Linux).
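
A small example of the difference (the folder and file names are made up for illustration):

// Produces "TCData\jobs\state.bin" on Windows and "TCData/jobs/state.bin" on Linux
var path = Path.Combine("TCData", "jobs", "state.bin");

// Avoid this - the hard-coded backslash only works on Windows
// var brokenPath = "TCData\\jobs\\state.bin";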

Time zones and Docker containers

Once I got rid of the slash vs backslash failures I had one more red test to tackle. It basically sent a query via the Nancy Browser and expected the returned data to exactly match a JSON string. Unfortunately it did not; instead it showed this:

Failed   ThenTheJsonBodyIsSerializedAsBefore
Error Message:
   String lengths are both 742. Strings differ at index 178.
  Expected: "..."0001-01-01T00:00:00.0000000+01:00","endTime":"0001-01-01T..."
  But was:  "..."0001-01-01T00:00:00.0000000+00:00","endTime":"0001-01-01T..."
  --------------------------------------------^

The test was written by someone in time zone UTC+1 and since I am in Stockholm (UTC+1) it "worked on my machine". However, it seems like most Docker containers available for production workloads are set up for UTC and in many (most?) cases don't have configurable time zones at all, since /usr/share/zoneinfo is missing. IMO this makes perfect sense - a Docker container should behave the same regardless of where it is deployed, and therefore time zone specific settings are simply not relevant.

I do think that the test is a bit shady - these types of "replay" tests are fragile and I would prefer to have verification of the actual data rather than the serialized representation. The one exception to this rule-of-thumb would be if you need to ensure backwards compatibility of persisted data.

Rewriting this test is on my todo-list, but for now I simply tweaked the test verification to ignore time zone differences. Hey, I'm not perfect…
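
When I do get around to rewriting it, the idea is to compare the deserialized values rather than the JSON text. A sketch of what a timezone-agnostic assertion could look like (this is an illustration, not the project's actual test code):

// Two representations of the same instant, emitted by machines in different time zones
var expected = DateTimeOffset.Parse("2018-05-01T10:00:00.0000000+01:00");
var actual = DateTimeOffset.Parse("2018-05-01T09:00:00.0000000+00:00");

// Comparing the instants in UTC makes the test independent of the machine's time zone
Assert.AreEqual(expected.UtcDateTime, actual.UtcDateTime);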

Final dockerization pieces

Getting all the necessary files into a single place for final deployment is done with the "dotnet publish" command, putting everything in the out folder. The -f flag, which defines which framework version to target, is required in this case, even though the docs seem to imply that it should be optional. I suspect it has to do with the fact that I'm mixing targets for different projects.

The really interesting aspect is that the Dockerfile makes use of a multi-stage build. This simply means that I can slap another FROM statement into my Dockerfile and it will create a new image, with the possibility to use files from previous stages in the Dockerfile. Note that the first FROM references the microsoft/dotnet:sdk image while the second FROM uses microsoft/dotnet:aspnetcore-runtime, which is a significantly smaller image. Finally we copy the output from the publish command onto our new image and set an entry point. I.e. the runtime image will only contain the bare minimum to run our app - no other tools or non-runtime build artifacts.

For more info on multi-stage builds see http://recurse.se/2017/09/docker-multistage-builds/

FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app/src
# Copy and restore as distinct layers
COPY . .
RUN dotnet restore
RUN dotnet test -c Release DomainModel.Tests/DomainModel.Tests.csproj
RUN dotnet test -c Release App.Services.Tests/App.Services.Tests.csproj
RUN dotnet test -c Release App.Tests/App.Tests.csproj
RUN dotnet publish -c Release -o out -f netcoreapp2.0 AppHost/AppHost.csproj
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/src/AppHost/out .
ENTRYPOINT ["dotnet", "AppHost.dll"]

Lessons learned

  • Don't pull in compatibility stuff to save you the work of rewriting a class or two. The end result will be cleaner and more functional (looking at you Resx files and System.Configuration).
  • Target .NET Standard whenever possible. It allows your code to be used both in old .NET Framework and .NET Core.
  • Use multistage builds to create small Docker containers. 
  • Don't hard-code paths with '\' - Windows is perfectly happy with '/' in most cases. Even better - use Path.Combine which will use the correct separator character depending on your platform.
  • Stand on the shoulders of giants. Not everything that I have written here was solved by yours truly - I was fortunate enough to be able to look at another project that had undergone similar treatment by my colleague Martin Hellspong. Check out his blog here: http://recurse.se/
