Recently, I've started looking into more elaborate ways to automate recurring tasks. Typically, there are tasks like Test, Compile or Publish in your project that you want to be easily executable, reliable and consistent. Above a certain complexity, you'll reach a point where you have to script or automate them.
This post describes the process of automating builds for a .NET project both locally and in Jenkins CI.
What Options for Automation Are There?
There are quite a few options when it comes to automation; maybe the most common are:
- Use platform-integrated features, such as targets in dotnet csproj files or npm scripts in package.json. They are often good enough for simple things, but scale very badly with complexity and require skills you don't typically use - for example, defining MSBuild tasks has very little to do with actual .NET programming.
- Scripting via PowerShell on Windows or Bash on Linux and Mac, maybe even going as far as using dotnet script for a really rich feature set. Automating tasks is probably the main reason scripts are written in the first place; scripting plays a big role in DevOps, system administration and of course programming itself. It's very powerful, incredibly versatile and, if done with discipline, very maintainable even for huge projects.
- Finally, specialized build systems and task runners are getting more popular. They often have the benefit of allowing you to use the same tools and techniques you use for your actual project and have a lot of features available directly related to what you need - think prebuilt tasks for testing or deploying.
I've had a good experience with scripting in my projects so far, but never quite managed to get good at PowerShell. So my scripts became uglier, more brittle and harder to maintain the more things they needed to do. Having heard of Cake (C# Make) before, I began looking for .NET build systems.
NUKE Build - The Nuclear Option
While Cake has been around for some time already and is even part of the .NET Foundation, I wanted to see what else there is. While lurking on Reddit one day, I read about NUKE, a newcomer in the .NET automation ecosystem which looked really promising. It's based on the latest tools, integrates directly in your dotnet solution structure and offers some nice pre-built features. I decided to give it a try, and started to migrate a project at work that has been notoriously brittle when it came to deploying.
Following just the Getting Started guide on its page, it was quickly set up and I could go on to define my first task. When using NUKE, it integrates directly into your solution: a .build project gets added that allows you to define and debug your build setup directly from Visual Studio or Visual Studio Code.
Integrating custom tasks, such as using WebDeploy with Azure, was incredibly easy. You'll encounter a few recurring patterns of tasks in your build system:
- Prebuilt tasks, such as DotNetBuild() or DotNetPack() which are part of the main NUKE project.
- Plugins that integrate directly into NUKE, such as Nuke.WebDeploy or Nuke.GitHub. They're separate NuGet packages referenced from your build project and then work just like the integrated ones.
- Regular C# code - such as applying XML transformations or making HTTP calls - for one-off things specific to your project.
- Calling CLI tools. It's pretty simple to generate wrapper packages for them, but often you just want to call console commands directly, e.g. running Git commands.
You're free to combine them in any way you want.
Build, Test and Publish with an Example
As with any new tool, there's a bit of a learning curve involved. While it's not a large one, I want to walk you through a complete example of a NUKE workflow using the CoberturaConverter project, including continuous integration with Jenkins, automatic deployment of NuGet packages, creation of GitHub releases and automatic versioning of the library.
As described in NUKE's Getting Started guide, you run a short setup command in PowerShell at the project root.
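The invocation is roughly this - a sketch reconstructed from the bootstrap URL mentioned below; the authoritative command is in the Getting Started guide:

```powershell
# Download NUKE's bootstrapping script and execute it
# (a sketch - verify against the current Getting Started guide before running)
Invoke-WebRequest https://nuke.build/powershell -OutFile setup.ps1
./setup.ps1
```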
There's also a variant for Bash available, but I'm mostly a Windows user, so that's what I use. The script downloads the initial bootstrapping code from https://nuke.build/powershell and then executes it. You'll be able to configure a few options, such as the name of the build project, but the defaults should be fine. After the setup, you'll have a new build/ folder containing the NUKE build project at your solution root, plus some other files:
- .nuke
Metadata file for NUKE. Not too interesting.
- build.cmd, build.ps1 and build.sh
That's your entry point for running builds, for example via powershell ./build.ps1 Test to execute unit tests.
- build/
This folder contains a .NET SDK project, by default named .build. It comes with a Build.cs file that's essentially a console application which runs your build targets. The clever thing is that this project is automatically added to your *.sln solution file, making your build setup a direct part of your project.
On first run, NUKE downloads its own dotnet runtime. Subsequent builds are a lot faster. If you want, you can also directly run & debug the .build project in Visual Studio.
The following examples make use of a few additional packages in the .build.csproj project file.
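It ends up looking roughly like the following sketch. The package names are the ones discussed in this post; the target framework, the remaining version numbers and the project reference path are placeholders for illustration:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Placeholder - use whatever the NUKE setup generated -->
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Nuke.Common" Version="0.2.*" />
    <PackageReference Include="Nuke.GitHub" Version="*" />
    <PackageReference Include="Nuke.WebDocu" Version="*" />
    <!-- Important: beta0012, not the default beta0011 -->
    <PackageReference Include="GitVersion.CommandLine" Version="4.0.0-beta0012" />
  </ItemGroup>
  <ItemGroup>
    <!-- Hypothetical path - the build project references a solution project directly -->
    <ProjectReference Include="..\src\CoberturaConverter.Core\CoberturaConverter.Core.csproj" />
  </ItemGroup>
</Project>
```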
Note that I've added some packages, such as Nuke.WebDocu and Nuke.GitHub. Also, and this is important, use version 4.0.0-beta0012 for GitVersion.CommandLine. By default, NUKE initializes with 4.0.0-beta0011, which lacks some features I'm using later in this tutorial. Interestingly, I'm directly referencing one of my solution's projects in the build project. That's because CoberturaConverter has features to convert coverage reports to the Cobertura format, which is used later in the build.
Using MyGet CI Feeds
NUKE is really actively maintained, so there are sometimes releases on its MyGet development feed that are not yet available on the public NuGet repository. For example, right now, there's already 0.2.26 available on MyGet but only 0.2.10 on NuGet. To access the MyGet feed, there are many options. One of the easiest is to just put a file called NuGet.config in the root of the solution and configure the feed in it. It's automatically picked up both by Visual Studio and dotnet.
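For example, a minimal NuGet.config for the NUKE MyGet feed could look like this (the feed URL is an assumption and should be verified on the MyGet gallery page):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Feed URL as listed on MyGet - double-check it on the gallery page -->
    <add key="nuke-myget" value="https://www.myget.org/F/nukebuild/api/v3/index.json" />
  </packageSources>
</configuration>
```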
Configuring the Build
The CoberturaConverter package uses different build targets, each serving a distinct purpose. Let's start at the top of the Build.cs file.
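It's roughly this shape - a sketch modeled on the NUKE template of that time, with the members discussed in this post:

```csharp
class Build : NukeBuild
{
    // The lambda defines the default target when none is passed on the command line
    public static int Main() => Execute<Build>(x => x.Compile);

    // Injected by NUKE: version information that GitVersion infers from the git history
    [GitVersion] readonly GitVersion GitVersion;

    // Injected by NUKE: metadata about the git repository (origin, branch, ...)
    [GitRepository] readonly GitRepository GitRepository;
}
```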
Main() is the entry point of the console application and also defines the default target to execute. Two injected fields are also used, GitVersion and GitRepository. They both give you access to metadata about your project. You'll see further down what you can use them for.
Parameters & Properties
Next, there are the Parameters used in the build. They're passed via simple script parameters, e.g. powershell ./build.ps1 Push -MyGetSource https://myget.org/f/my-feed. I've also configured two properties, DocFxFile and ChangeLogFile, that are used in the build.
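Sketched out, those declarations might look like this (the file locations are assumptions):

```csharp
// Filled from command line arguments or environment variables of the same name
[Parameter] readonly string MyGetSource;
[Parameter] readonly string MyGetApiKey;

// Plain properties pointing at files that later targets use (paths assumed)
string DocFxFile => RootDirectory / "docfx.json";
string ChangeLogFile => RootDirectory / "CHANGELOG.md";
```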
NUKE also picks up [Parameter] fields from environment variables, so you don't have to explicitly pass them to the build script. Now, let's have a look at the individual targets and what they do.
The Clean target is pretty simple; it just ensures that we always have a clean workspace before doing any builds. It cleans the compilation output as well as the build output, so no previous test results clutter your new results.
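A minimal version of that target might look like this (the exact directories depend on your layout):

```csharp
Target Clean => _ => _
    .Executes(() =>
    {
        // Remove previous compilation output ...
        DeleteDirectories(GlobDirectories(SourceDirectory, "**/bin", "**/obj"));
        // ... and previous build artifacts and test results
        EnsureCleanDirectory(OutputDirectory);
    });
```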
Restore & Compile
Next come Restore & Compile. They use the DotNetRestore task, which comes directly with NUKE itself. When compiling and using GitVersion, you can rely on your project's version being inferred from your git history. I'm setting both the FileVersion and the AssemblyVersion to Major.Minor.Patch here.
Both of these targets have dependencies - .DependsOn(Clean) and .DependsOn(Restore), respectively. This tells NUKE which targets are required to run before.
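Sketched out, the two targets could look like this (treating the exact GitVersion property name as an assumption):

```csharp
Target Restore => _ => _
    .DependsOn(Clean)
    .Executes(() => DotNetRestore(s => s.SetWorkingDirectory(SolutionDirectory)));

Target Compile => _ => _
    .DependsOn(Restore)
    .Executes(() => DotNetBuild(s => s
        .SetWorkingDirectory(SolutionDirectory)
        .SetConfiguration(Configuration)
        // Major.Minor.Patch as inferred by GitVersion from the git history
        .SetFileVersion(GitVersion.MajorMinorPatch)
        .SetAssemblyVersion(GitVersion.MajorMinorPatch)));
```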
Pack & Push
The Pack target builds NuGet packages that are then published to a MyGet feed via Push. Here, I'm using an extension method to extract all release information from the CHANGELOG.md file and set it as the package release notes. It skips the introduction and starts directly with the first item in the changelog.
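A sketch of the Pack target using that extension (the MSBuild-escaping helper is an assumption):

```csharp
Target Pack => _ => _
    .DependsOn(Compile)
    .Executes(() =>
    {
        // Extract the changelog entries, skipping the introduction,
        // and escape them so they survive as an MSBuild property
        var releaseNotes = GetCompleteChangeLog(ChangeLogFile)
            .EscapeStringPropertyForMsBuild();
        DotNetPack(s => s
            .SetConfiguration(Configuration)
            .SetOutputDirectory(OutputDirectory)
            .SetPackageReleaseNotes(releaseNotes));
    });
```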
GetCompleteChangeLog() is an extension that's currently available in the Nuke.GitHub package under Nuke.GitHub.ChangeLogExtensions.
Now we also want to publish these generated packages, which is the job of the Push target.
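It might look roughly like this:

```csharp
Target Push => _ => _
    .DependsOn(Pack)
    .Requires(() => MyGetSource)
    .Requires(() => MyGetApiKey)
    .Requires(() => Configuration.Equals("Release"))
    .Executes(() =>
    {
        // Glob all packages in the output folder and push each one
        GlobFiles(OutputDirectory, "*.nupkg")
            .ForEach(p => DotNetNuGetPush(s => s
                .SetTargetPath(p)
                .SetSource(MyGetSource)
                .SetApiKey(MyGetApiKey)));
    });
```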
Push has, in addition to depending on the Pack target, some requirements:
- The injected parameters MyGetSource and MyGetApiKey must be present. This means they should be passed to the build script, e.g. via powershell ./build.ps1 Push -MyGetSource https://myget.org/f/my-feed -MyGetApiKey <apiKey>. Since this example is from a public project, I'm pushing to MyGet and have the feed accessible for everyone. If you want to push to private feeds instead, for example a self-hosted ProGet service or a private MyGet feed, it's just a matter of supplying different parameters.
- The Configuration property must be set to Release. It's a property on the build's NukeBuild base class and is usually set to Debug, except when a build server is recognized. However, just to make sure, simply pass -Configuration Release to your build.
Of course, you're free to set additional requirements; any predicate is possible here. When the target executes, it just globs all *.nupkg files in the OutputDirectory (defaults to output/) and executes the prebuilt DotNetNuGetPush() task for each.
Probably the single most important target in your build is the Test target. You always want to be able to run your tests with a single command, on a clean checkout, on any machine. I consider any project that doesn't let you do that broken - running tests should be a top priority.
How you test, however, is of course up to you. There's support for the xUnit console runners built into NUKE itself. I mostly use dotnet xunit for running my C# tests, and unfortunately there's no task available for that yet. But it's easy to call whatever command you want with NUKE. Let's take a look at what's happening:
- First, I get all *.csproj files in the test/ folder of my project and execute tests for each. If your solution is structured differently, adjust this accordingly.
- Instead of using a pre-built task, I want to work with the dotnet xunit CLI tool. So I just use StartProcess (found in ProcessTasks) and supply my parameters.
- There's a call to .DoubleQuoteIfNeeded() in the parameters being passed. This is one of the utility methods that come with NUKE.
- Since there's not yet a global command for xUnit available, I have to set the workingDirectory parameter when I start the process.
- ToolPathResolver.GetPathExecutable("dotnet") looks for the dotnet executable available on your PATH variable. This makes sure the globally installed one is used instead of the one that NUKE downloads into its temporary directory.
- I'm passing xunit, --nobuild and -xml parameters to the process to configure what xUnit should do. Since I potentially want to capture the test output from multiple projects, I'm adding a testRun variable to the filename to avoid conflicts.
- Finally, the process is executed. AssertWaitForExit() halts execution until the process has terminated. In most situations you would want AssertZeroExitCode() instead, but here all test runs should be performed even if some of them fail.
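Put together, the steps above can be sketched like this (the test/ folder layout is an assumption):

```csharp
Target Test => _ => _
    .DependsOn(Compile)
    .Executes(() =>
    {
        var testRun = 1;
        foreach (var testProject in GlobFiles(SolutionDirectory / "test", "**/*.csproj"))
        {
            var projectDirectory = Path.GetDirectoryName(testProject);
            // Use the dotnet executable from PATH, not the one NUKE downloaded
            var dotnetPath = ToolPathResolver.GetPathExecutable("dotnet");
            var xmlOutput = Path.Combine(OutputDirectory, $"tests-{testRun++}.xml");
            var process = StartProcess(dotnetPath,
                $"xunit --nobuild -xml {xmlOutput.DoubleQuoteIfNeeded()}",
                workingDirectory: projectDirectory);
            // Wait for the runner, but don't assert its exit code -
            // all test projects should run even if one of them fails
            process.AssertWaitForExit();
        }
        PrependFrameworkToTestresults();
    });
```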
After running all the tests, there's a call to PrependFrameworkToTestresults(). When your test projects have multiple values in their TargetFrameworks property, for example netcoreapp2.0;net471, xUnit creates multiple output files in the form of tests-netcoreapp2.0.xml and tests-net471.xml. Some XML post-processing then prepends the framework name to each test name, which makes it easy to attribute framework-specific failures when analyzing the test reports.
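That post-processing can be sketched with plain System.Xml.Linq, assuming xUnit's v2 result format where each test element carries a name attribute (the helper name and file-name parsing are illustrative):

```csharp
using System.IO;
using System.Xml.Linq;

static class TestResultProcessing
{
    // For a result file like "tests-netcoreapp2.0.xml", reads the framework
    // from the file name and prepends it to every test name in the report
    public static void PrependFramework(string resultFile)
    {
        var fileName = Path.GetFileNameWithoutExtension(resultFile);
        var framework = fileName.Substring(fileName.IndexOf('-') + 1);
        var document = XDocument.Load(resultFile);
        foreach (var test in document.Descendants("test"))
        {
            var name = test.Attribute("name");
            if (name != null)
                name.Value = $"{framework} - {name.Value}";
        }
        document.Save(resultFile);
    }
}
```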
The Coverage target is, unfortunately, a bit long. That's mostly because of using dotCover as the tool, not because of NUKE. I've mostly been using OpenCover until now, but it still gives me a bit of trouble to consistently generate good coverage reports when running .NET Core and Full Framework tests in combination. Also, dotCover is much faster for me! Tl;dr after the code.
The Coverage target can be summarized as:
- For every test project, do a test run and record a coverage snapshot with dotCover.
- Merge all coverage snapshots to a single one.
- Generate a coverage report in DetailedXml format from the merged snapshot.
- Use ReportGenerator to create a formatted Html coverage report for later visualization in the Continuous Integration server.
- Call DotCoverToCobertura() to transform the dotCover report to Cobertura. This is done so that Jenkins CI can pick up coverage metrics as build metadata.
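As a rough outline, those five steps map onto the target like this - the dotCover task names and the converter call are assumptions, not the exact API:

```csharp
Target Coverage => _ => _
    .DependsOn(Compile)
    .Executes(() =>
    {
        // One dotCover snapshot per test project, wrapping the same
        // dotnet xunit invocation as the Test target
        foreach (var testProject in GlobFiles(SolutionDirectory / "test", "**/*.csproj"))
        {
            DotCoverCover(s => s /* target executable, arguments, snapshot output */);
        }
        // Merge all snapshots into a single one
        DotCoverMerge(s => s /* source snapshots, merged output */);
        // Produce a DetailedXml report from the merged snapshot
        DotCoverReport(s => s /* report type: DetailedXml */);
        // Create a formatted HTML report for the CI server
        ReportGenerator(s => s /* reads the DetailedXml report */);
        // Convert to Cobertura so Jenkins can pick up the metrics
        DotCoverToCobertura(s => s /* from this solution's converter project */);
    });
```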
Tip: NUKE provides a lot of useful utilities and extension functions. For example, instead of wrapping paths in arguments in \", you could also call .DoubleQuote() from Nuke.Core.Utilities.StringExtensions for better readability.
Documentation Generation & Upload
With DocFX, there's a tool that allows you to automatically generate API docs for .NET projects, along with Markdown-based documentation that you write yourself. I've blogged about it before, so I won't detail configuring DocFX itself. The targets don't show anything new, other than temporarily renaming README.md to index.md to use it as the entry point of the documentation and cleaning up afterwards.
After the docs have been built, I want to upload them so they're accessible. I'm hosting all my projects' documentation at docs.dangl-it.com in an ASP.NET Core MVC application that gives me documentation versioning and user access control. The UploadDocumentation target uses the Nuke.WebDocu plugin to achieve this:
GitHub Release Publishing
One very tedious task for me has been the preparation of releases on GitHub. Whenever I released a stable version of anything, I had to tag the commit, upload artifacts and describe my changes on GitHub. So I made Nuke.GitHub to automate this process for me.
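A sketch of the resulting target - the Nuke.GitHub setting names are my reading of the package and should be verified against it:

```csharp
Target PublishGitHubRelease => _ => _
    .DependsOn(Pack)
    // Only run this target when building the master branch
    .OnlyWhen(() => GitVersion.BranchName.Equals("master")
                 || GitVersion.BranchName.Equals("origin/master"))
    .Executes(() =>
    {
        var releaseTag = $"v{GitVersion.MajorMinorPatch}";
        var changeLog = GetCompleteChangeLog(ChangeLogFile);
        var artifacts = GlobFiles(OutputDirectory, "*.nupkg").ToArray();
        // Creates the tag and the release; a no-op if the release already exists.
        // Repository owner, name and token settings are omitted here.
        PublishRelease(s => s
            .SetTag(releaseTag)
            .SetReleaseNotes(changeLog)
            .SetArtifactPaths(artifacts));
    });
```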
Here's something new right at the top of the target definition: it's executed OnlyWhen() we're on the master branch. On the build server, it gets called for every build, but it is skipped as long as we're not on the master branch.
The mechanism is simple: Whenever a build is executed on the master branch, it uses GitVersion to create a release tag, gets the changelog, collects all the created NuGet packages to use as artifacts and runs the PublishRelease() task in Nuke.GitHub. If the release is already present, nothing happens, so it's safe to run it multiple times.
Now that you've seen how to do the basic tasks with NUKE, I want to show you how to use GitVersion. I'm certainly no expert in it and have just read enough of the docs to get what I want working, but it's basically a tool that creates deterministic and reproducible versions by looking at the git history. For example, a tag like v1.2.0 on a commit leads to that commit producing a matching version, and subsequent commits increment it. I'm using a GitVersion.yml at the project root to manually configure what the assembly versions should look like and to only ever increment the Patch part of the version.
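The configuration might look something like this sketch - only the Patch-increment behaviour and the branch regex tweak are taken from this post, the rest are plausible defaults:

```yaml
# Only ever bump the Patch part automatically
increment: Patch
assembly-versioning-scheme: MajorMinorPatch
branches:
  develop:
    # (origin/)? prefix so Jenkins checkouts named origin/develop match too
    regex: (origin/)?develop$
```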
Additionally, the regex that detects the develop branch got an (origin/)? prefix, since in CI builds Jenkins usually checks out to a local branch named origin/<originalBranch>.
Integration with Jenkins
Projects relying on NUKE are ideal candidates for continuous integration & deployment tools like Jenkins. There are two jobs configured for CoberturaConverter:
This is pretty simple and has a single PowerShell build step: ./build.ps1 Coverage -Configuration Debug
Post-Build, it does the usual stuff of collecting test results, coverage reports, TODO items and more.
This job is also simple (that's the whole point of this blog post, it's simple!) and calls multiple NUKE targets in a single build step.
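Plausibly, that build step boils down to a single line along these lines (target names are the ones from this post; the API key is elided):

```powershell
# Configuration is not passed - NUKE detects Jenkins and builds Release
./build.ps1 Push PublishGitHubRelease UploadDocumentation -MyGetSource https://myget.org/f/my-feed -MyGetApiKey <apiKey>
```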
Tip: NUKE resolves [Parameter]s both from arguments passed to the script and from environment variables. This means you don't have to explicitly pass them in your Jenkins job if you set them directly as environment variables.
It doesn't explicitly specify the Configuration parameter, since NUKE detects it's inside a Jenkins job by looking at the available environment variables and automatically switches to the Release configuration. Other than that, I'm just passing the required parameters as arguments to the build script and letting NUKE do the work.