<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="https://blog.dangl.me/rss/xslt"?>
<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Dangl.Blog();</title>
    <link>https://blog.dangl.me/</link>
    <description>Blogging about .Net, DevOps, Networking and BIM. Home of the free GAEB Converter.</description>
    <generator>Articulate, blogging built on Umbraco</generator>
    <item>
      <guid isPermaLink="false">1446</guid>
      <link>https://blog.dangl.me/archive/installing-net-9-alongside-older-versions-on-ubuntu-2204/</link>
      <category>DotNet</category>
      <title>Installing .NET 9 Alongside Older Versions on Ubuntu 22.04</title>
      <description>&lt;p&gt;With the recent release of .NET 9, Microsoft has changed how they're publishing packages for Linux distributions. Previously, there was a Microsoft-managed package repository from which you could easily install different .NET versions, making it especially easy for CI agents to keep multiple .NET versions installed and updated.&lt;/p&gt;
&lt;p&gt;However, starting with .NET 9, Microsoft no longer provides a package feed for these distributions. So if you need the SDK or just the runtime, you have to use a feed managed by Canonical (the company behind Ubuntu). The problem is, there's no single feed offering multiple versions, e.g. .NET 8 and .NET 9 combined. You also can't just install from two different repositories, since the packages have mutually exclusive dependencies, prohibiting you from installing .NET 8 and 9 at the same time. On top of that, the Ubuntu repositories seem to lag a bit behind, so you might have to wait up to a week to get the latest updates for your SDKs.&lt;br /&gt;I didn't properly dive into the reasons behind this change - I think it's mostly Microsoft wanting to integrate their .NET distribution into more official channels - but from a developer perspective, that doesn't seem very ideal to me.&lt;/p&gt;
&lt;p&gt;So, yesterday evening, I tried to figure out how to get .NET 9 running while keeping all the other versions installed. Even though some of them are out of support, sometimes you just need to update a one-off CLI tool with a bugfix or new feature without having to update the whole project and the CI pipeline. Also, since both .NET 8 and .NET 9 are in active support, we plan to run tests for our products on both - so having them all on our CI agents was required.&lt;/p&gt;
&lt;p&gt;I didn't find a way to do this with package feeds, but luckily, you can just use the &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/install/linux-scripted-manual#scripted-install" title="Scripted Install for DotNet" data-anchor="#scripted-install"&gt;dotnet-install.sh script for manual SDK installations&lt;/a&gt;. I found it pretty easy to use, but there's a gotcha: If you've already got some .NET versions installed, they're probably in a different directory than the one the installation script uses by default. In that case, before installing, you can simply run &lt;span class="Code"&gt;dotnet --list-sdks&lt;/span&gt; to get an output like this:&lt;br /&gt;&lt;span class="Code"&gt;dotnet --list-sdks&lt;/span&gt;&lt;br /&gt;&lt;span class="Code"&gt;6.0.428 [/usr/share/dotnet/sdk]&lt;/span&gt;&lt;br /&gt;&lt;span class="Code"&gt;7.0.410 [/usr/share/dotnet/sdk]&lt;/span&gt;&lt;br /&gt;&lt;span class="Code"&gt;8.0.404 [/usr/share/dotnet/sdk]&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Here, we see that .NET is installed to &lt;span class="Code"&gt;/usr/share/dotnet&lt;/span&gt;, which we can then pass to the installation script as the install directory like this:&lt;br /&gt;&lt;span class="Code"&gt;./dotnet-install.sh --install-dir /usr/share/dotnet --channel 9.0&lt;/span&gt;&lt;/p&gt;
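&lt;p&gt;The directory detection can also be scripted, e.g. for CI agents. Here's a small sketch (in Python, with helper names of my own choosing, not from any official tooling) that pulls the install root out of the &lt;span class="Code"&gt;dotnet --list-sdks&lt;/span&gt; output and prints the matching install command:&lt;/p&gt;

```python
import re

def find_install_dir(list_sdks_output: str) -> str:
    """Extract the .NET install root from `dotnet --list-sdks` output.

    Each line looks like '8.0.404 [/usr/share/dotnet/sdk]'; the install
    root is the bracketed path without the trailing '/sdk'.
    """
    for line in list_sdks_output.splitlines():
        match = re.search(r"\[(.+?)/sdk\]", line)
        if match:
            return match.group(1)
    raise ValueError("no SDK lines found in output")

sample = """6.0.428 [/usr/share/dotnet/sdk]
7.0.410 [/usr/share/dotnet/sdk]
8.0.404 [/usr/share/dotnet/sdk]"""

install_dir = find_install_dir(sample)
print(f"./dotnet-install.sh --install-dir {install_dir} --channel 9.0")
# → ./dotnet-install.sh --install-dir /usr/share/dotnet --channel 9.0
```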
&lt;p&gt;Running &lt;span class="Code"&gt;dotnet --list-sdks&lt;/span&gt; again should now list .NET 9 as well, and you're good to go. There's no need to set any additional environment variables. The only thing to keep in mind is that you now have to update .NET 9 manually via the install script, since &lt;span class="Code"&gt;apt-get&lt;/span&gt; won't handle updates for an SDK installed this way.&lt;/p&gt;
&lt;p&gt;Happy developing!&lt;/p&gt;</description>
      <pubDate>Fri, 29 Nov 2024 11:29:43 Z</pubDate>
      <a10:updated>2024-11-29T11:29:43Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1441</guid>
      <link>https://blog.dangl.me/archive/creating-an-api-and-web-ui-for-my-district-heating-system/</link>
      <category>Home Automation</category>
      <title>Creating an API and Web UI for my District Heating System</title>
      <description>&lt;p&gt;A few months ago, we had a new heating system installed. It's a district heating system, with the heat exchange being controlled by a Schneid MR 12 control unit. From a heating perspective, it's a pretty simple concept: Two pipes go into your house, one delivers hot water to a heat exchanger, the other carries the slightly colder water away. We're now part of a small network of 15 houses that use a central energy source for their heating and warm water needs.&lt;/p&gt;
&lt;p&gt;It's working great, and has replaced our own wood-chip-fired system. Now, with a new system, I naturally wanted to be able to get some data out of it and have the ability to control it remotely. I looked into some options, and I have to say, it's all a bit less comfortable than what we software engineers are used to with modern cloud systems. Nevertheless, the manufacturer was able to provide me with a network card, which makes a &lt;a href="https://en.wikipedia.org/wiki/Modbus" title="Wikipedia - Modbus"&gt;Modbus TCP&lt;/a&gt; interface available. It's quite an ancient protocol, having no security features or really anything besides reading and writing raw "registers", but it's well established in electrical engineering and thus has a wide range of libraries and packages available. The security part was pretty easy to take care of, by isolating the unit in the network and only granting specific hosts access.&lt;/p&gt;
&lt;p&gt;For the actual API part, I've gone for a regular ASP.NET Core backend with an Angular frontend. The Modbus TCP connection is handled by &lt;a href="https://github.com/Apollo3zehn/FluentModbus" title="GitHub - Appollo3zehn - FluentModbus"&gt;FluentModbus&lt;/a&gt;, which provides an easy wrapper around the Modbus protocol so we can just use it to read and write binary data.&lt;/p&gt;
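&lt;p&gt;To illustrate just how bare the protocol is: a "read holding registers" request is only a dozen bytes on the wire. Here's a sketch in Python (the actual project uses FluentModbus in C#; this is just to show the frame layout):&lt;/p&gt;

```python
import struct

def read_holding_registers_request(transaction_id: int, unit_id: int,
                                   start_address: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (transaction id, protocol id 0, remaining byte
    count, unit id) followed by the PDU (function code, start address,
    register count), all big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_address, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 1, 0x0000, 2)
print(frame.hex())  # → 000100000006010300000002
```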
&lt;p&gt;Finally, I've built a dashboard as the main entry point:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://blog.dangl.me/media/1196/visualization.png" alt="Dangl.SchneidControl Dashboard" data-udi="umb://media/ff98384f4eae4f24a51b43622aefb904" /&gt;&lt;/p&gt;
&lt;p&gt;It's great for a quick visualization of the data, and with a local SQLite database, I can even periodically log some data points and visualize them, like the outside temperature:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://blog.dangl.me/media/1195/data_logging.png" alt="Dangl.SchneidControl Data Logging" data-udi="umb://media/56eebf2ee3c2407cbb57bfd69f7026cf" /&gt;&lt;/p&gt;
&lt;p&gt;There are also some properties that can be edited, like target temperatures or the operation mode of the system. And, all of those values are exposed via a REST API, so you can integrate the sensors e.g. as RESTful sensors in Home Assistant.&lt;/p&gt;
&lt;p&gt;You can &lt;a href="https://github.com/GeorgDangl/Dangl.SchneidControl" title="GitHub - GeorgDangl - Dangl.SchneidControl"&gt;find the project directly on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy heating!&lt;/p&gt;</description>
      <pubDate>Thu, 13 Jul 2023 19:19:25 Z</pubDate>
      <a10:updated>2023-07-13T19:19:25Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1440</guid>
      <link>https://blog.dangl.me/archive/accessing-avacloud-directly-with-user-accounts/</link>
      <category>GAEB</category>
      <title>Accessing AVACloud Directly with User Accounts</title>
      <description>&lt;blockquote&gt;Disclaimer: &lt;a href="https://www.dangl-it.com/articles/accessing-avacloud-directly-with-user-accounts/" title="Dangl IT - Accessing AVACloud Directly with User Accounts"&gt;This is a cross-post from my professional website&lt;/a&gt;. It's a tutorial for a product my company sells.&lt;/blockquote&gt;
&lt;p&gt;With &lt;a href="https://www.dangl-it.com/products/avacloud-gaeb-saas/" title="Dangl IT GmbH - Products - AVACloud"&gt;AVACloud&lt;/a&gt;, you would usually use service-based accounts to access the API. However, we recently had an interesting use case: A client built an AVACloud integration in an Excel Add-In, and for this, we wanted to go with individual &lt;a href="https://identity.dangl-it.com/" title="Dangl.Identity"&gt;Dangl.Identity&lt;/a&gt; user accounts instead of centrally managed OAuth2 clients. Since the default &lt;span class="Code"&gt;AvaCloudClientFactory&lt;/span&gt; from our &lt;a href="https://www.nuget.org/packages/Dangl.AVACloud.Client.Public" title="NuGet - Dangl.AVACloud.Client.Public"&gt;Dangl.AVACloud package&lt;/a&gt; assumes that you're working with clients, you need to add some plumbing code to work with user accounts. It's pretty straightforward, but there are a few steps to follow. So, here's how to do it:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="14cd0c2a2866c091e521593eaadc20c4" data-gist-file="UserAccountTokenHandler.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We're first creating a class called &lt;span class="Code"&gt;UserAccountTokenHandler&lt;/span&gt;, which implements the &lt;span class="Code"&gt;ITokenHandler&lt;/span&gt; interface. This will try to obtain a JSON Web Token (JWT) directly from AVACloud with the provided user credentials. It replaces the OpenID Connect client credentials flow used by default, and allows you to authenticate with AVACloud in a real user context. The implementation is pretty straightforward - getting a token is a single call to the AVACloud API.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="14cd0c2a2866c091e521593eaadc20c4" data-gist-file="TokenAccessChecker.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We're then adding the &lt;span class="Code"&gt;TokenAccessChecker&lt;/span&gt; class, which just gets a token and checks whether the user has access to perform AVACloud operations. That way, we can show a notification in the UI if AVACloud access is denied, instead of waiting to see if service calls fail with a &lt;span class="Code"&gt;403 - Forbidden&lt;/span&gt; status code response.&lt;/p&gt;
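&lt;p&gt;If you want to do a quick local sanity check on such a token, you can decode the JWT payload without verifying the signature. Here's a sketch in Python - note that the scope name is made up for illustration; the real claims and checks are in the gists in this post:&lt;/p&gt;

```python
import base64
import json
import time

def token_grants_access(jwt: str, required_scope: str) -> bool:
    """Decode a JWT's payload (no signature verification!) and check that
    it is unexpired and carries the required scope.

    Illustrative only - the actual claim layout depends on the identity
    provider issuing the token.
    """
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    unexpired = payload.get("exp", 0) > time.time()
    return unexpired and required_scope in payload.get("scope", "").split()
```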
&lt;p&gt;&lt;code data-gist-id="14cd0c2a2866c091e521593eaadc20c4" data-gist-file="UserAccountHttpClientAccessor.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="Code"&gt;UserAccountHttpClientAccessor&lt;/span&gt; is a small utility class that's just a wrapper around &lt;span class="Code"&gt;HttpClient&lt;/span&gt;. We're using that to be able to provide a typed &lt;span class="Code"&gt;HttpClient&lt;/span&gt; with the &lt;span class="Code"&gt;HttpClientFactory&lt;/span&gt; pattern. We could also use named clients, but I usually prefer to wrap the Http client in a separate class.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="14cd0c2a2866c091e521593eaadc20c4" data-gist-file="AvaCloudUserClientFactory.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now that everything's set up, we want some more convenience. Working with REST APIs always has some overhead, like managing the &lt;span class="Code"&gt;HttpClient&lt;/span&gt; lifecycle and ensuring each request is properly authenticated. Since I'm a big fan of dependency injection, I usually create factory classes that can be used as singletons throughout the whole app lifecycle, and which do their internal service management. So, in our case, we're just initializing a single instance of &lt;span class="Code"&gt;AvaCloudUserClientFactory&lt;/span&gt; and then relying on its internal service provider when we want to get a client class.&lt;/p&gt;
&lt;p&gt;Finally, here's a test showing you a quick example that gets a token, checks if the user has access and then converts an Excel file using AVACloud:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="14cd0c2a2866c091e521593eaadc20c4" data-gist-file="UserAccountAuthenticationTests.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Happy converting!&lt;/p&gt;</description>
      <pubDate>Wed, 05 Jul 2023 11:42:51 Z</pubDate>
      <a10:updated>2023-07-05T11:42:51Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1439</guid>
      <link>https://blog.dangl.me/archive/setting-the-language-in-chrome-headless-for-e2e-tests/</link>
      <category>Web Development</category>
      <title>Setting the Language in Chrome Headless for E2E Tests</title>
      <description>&lt;p&gt;For End-to-End (E2E) testing of web apps, the headless mode of Chrome is a great choice. We're using it in our projects to ensure some true end-to-end test coverage, often for stories like happy paths for important actions. E2E tests are usually quite cumbersome to write, and tend to be brittle during upgrades, but they give you a lot of confidence when doing automated deployments, and free up valuable developer time from having to do repeated, manual tests and verifications. I've blogged about them before, for example how to &lt;a data-udi="umb://document/97c1f03c1f504869b17557ed82d8069a" href="/archive/running-fully-automated-e2e-tests-in-electron-in-a-docker-container-with-playwright/" title="Running Fully Automated E2E Tests in Electron in a Docker Container with Playwright"&gt;instrument them via Playwright&lt;/a&gt; or how you can use &lt;a data-udi="umb://document/bcf39609aba94771a7795e140656ff03" href="/archive/improving-aspnet-core-end-to-end-tests-with-selenium-docker-images/" title="Improving ASP.NET Core End-to-End Tests with Selenium Docker Images"&gt;Selenium in a Docker container&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;One thing that's been missing is the ability to set a language for the headless Chrome instance when testing. Especially if you have multilingual apps that automatically detect the user's language, you might experience problems in testing when you're checking for text contents or trying to find HTML elements via their labels. There was a long-open issue with Chrome, where the &lt;a href="https://bugs.chromium.org/p/chromedriver/issues/detail?id=1925" title="Chromedriver Issue 1925: Expose content settings in headless mode" data-anchor="?id=1925"&gt;language supplied via a command line argument was simply not set in headless mode&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But, this has changed with &lt;a href="https://developer.chrome.com/articles/new-headless/" title="New Chrome Headless Mode"&gt;the new headless option for Chrome&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;Finally, when you start headless mode via &lt;span class="Code"&gt;--headless=new&lt;/span&gt;, you can simply set the language with another switch: &lt;span class="Code"&gt;--lang=en&lt;/span&gt;. This makes testing so much easier, and allows you to provide a consistent developer experience, no matter the local language settings.&lt;/p&gt;
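&lt;p&gt;Assembled into a launch command, that looks something like this (a sketch in Python; the binary name varies by platform and distribution, and the call itself is commented out since it needs Chrome installed):&lt;/p&gt;

```python
import shutil

def headless_chrome_command(url: str, lang: str = "en") -> list[str]:
    """Build a Chrome invocation using the new headless mode with a fixed
    UI language, so E2E runs see the same locale on every machine."""
    # Fall back to a plain binary name if Chrome isn't on PATH here.
    chrome = shutil.which("google-chrome") or "google-chrome"
    return [chrome, "--headless=new", f"--lang={lang}", "--dump-dom", url]

cmd = headless_chrome_command("https://example.com")
# import subprocess; subprocess.run(cmd, check=True)  # where Chrome exists
```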
&lt;p&gt;Happy testing!&lt;/p&gt;</description>
      <pubDate>Thu, 01 Jun 2023 10:17:26 Z</pubDate>
      <a10:updated>2023-06-01T10:17:26Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1438</guid>
      <link>https://blog.dangl.me/archive/aspnet-core-locally-serving-outdated-dev-certificate/</link>
      <category>DotNet</category>
      <title>ASP.NET Core Locally Serving Outdated Dev Certificate</title>
      <description>&lt;p&gt;Today, I was trying to run one of our apps locally. It's a fairly small service that only receives updates every few months, and all the testing and deployment happens in CI, so it hadn't been run locally for quite some time. But, when I started it, it served an expired certificate, so Chrome showed me a warning about the site not being secure.&lt;/p&gt;
&lt;p&gt;With ASP.NET Core, the development workflow is quite nice. You can easily use self-signed certificates for working with localhost, and &lt;span class="Code"&gt;dotnet&lt;/span&gt; takes care of managing all that. Yet, somehow, it served me a certificate that had expired almost three years ago. Even after regenerating the dev certificates locally, I had the same error, so I looked into the configuration.&lt;/p&gt;
&lt;p&gt;Turns out, I had this configuration option set in the user secrets on that machine: &lt;span class="Code"&gt;kestrel:certificates:development:password&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;User secrets are a way of managing not-really-sensitive, but developer- or machine-specific configuration for ASP.NET Core projects. The option above, setting a fixed password for a certificate, resulted in the app not automatically choosing the current dev certificate, but instead one that had expired long ago. So, a quick fix when you know where to look😀&lt;/p&gt;
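&lt;p&gt;On Linux and macOS, user secrets live as plain JSON files under &lt;span class="Code"&gt;~/.microsoft/usersecrets/&amp;lt;id&amp;gt;/secrets.json&lt;/span&gt;, so you can scan all projects on a machine for the offending key. A small sketch in Python (the helper name is mine):&lt;/p&gt;

```python
import json
from pathlib import Path

def find_dev_cert_password_secrets(secrets_root: Path) -> list[Path]:
    """Return every user-secrets file that sets the Kestrel dev-certificate
    password key, which overrides automatic dev-cert selection."""
    offending_key = "kestrel:certificates:development:password"
    hits = []
    for secrets_file in secrets_root.glob("*/secrets.json"):
        data = json.loads(secrets_file.read_text())
        if any(key.lower() == offending_key for key in data):
            hits.append(secrets_file)
    return hits

# Typical location: Path.home() / ".microsoft" / "usersecrets"
```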
&lt;p&gt;Happy encrypting!&lt;/p&gt;</description>
      <pubDate>Wed, 17 May 2023 20:19:36 Z</pubDate>
      <a10:updated>2023-05-17T20:19:36Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1437</guid>
      <link>https://blog.dangl.me/archive/changing-the-order-of-parameters-in-swagger-openapi-documents-when-using-nswag-to-generate-the-swaggerfile/</link>
      <category>Web Development</category>
      <title>Changing the Order of Parameters in Swagger / OpenAPI Documents when using NSwag to Generate the Swaggerfile</title>
      <description>&lt;p&gt;Our &lt;a href="https://www.dangl-it.com/products/avacloud-gaeb-saas/" title="Dangl IT GmbH - AVACloud"&gt;AVACloud SaaS&lt;/a&gt; product has been available for a few years already. While that's a great achievement in itself, it also comes with drawbacks: The API is used by a lot of customers by now, which requires us to keep it stable and not introduce breaking changes or incompatibilities with existing implementations. &lt;em&gt;Move fast&lt;/em&gt; still applies, but the &lt;em&gt;break things&lt;/em&gt; part would now be a problem.&lt;/p&gt;
&lt;p&gt;So, we've recently updated one endpoint to support additional parameters. It's a simple REST endpoint that takes a file and some options, and does a conversion. In the .NET backend, the controller receives three parameters: The file, and two options objects for the import and export configuration, respectively. That maps nicely, and introducing additional parameters is as easy as adding a new property to a view model used for the bindings. Many clients generated via Swagger can follow the same approach of using options objects for the client interface, so no problem there.&lt;/p&gt;
&lt;p&gt;However, the objects are transported as query parameters. This means that from a Swagger perspective, we've now added two new query parameters. If the new parameters are not sorted to the end, generated clients can change their method signatures, so a new client version would break existing implementations and require attention and a rewrite from every consumer of the API.&lt;/p&gt;
&lt;p&gt;Luckily, we're using &lt;a href="https://github.com/RicoSuter/NSwag" title="GitHub - RicoSuter - NSwag"&gt;NSwag in the backend&lt;/a&gt;, which allows a great deal of customization. When we set up Swagger generation, we can simply inject custom code in the &lt;span class="Code"&gt;PostProcess&lt;/span&gt; callback and sort the parameters however we want.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="ecf19226ac8edc162862058dc38d7d0d" data-gist-file="SwaggerExtensions.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We're just getting the current parameters, applying our custom sort logic, which ensures the new parameters are at the end, and then clearing and refilling the parameter list.&lt;/p&gt;
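&lt;p&gt;The sort itself boils down to a stable sort that pushes the newly added parameters to the back. Sketched here in Python rather than the C# of the gist (the parameter names are examples, not our actual API):&lt;/p&gt;

```python
def sort_parameters(parameters: list[dict], new_names: set[str]) -> list[dict]:
    """Stable-sort operation parameters so newly added ones come last.

    `parameters` is a list of OpenAPI parameter objects; `new_names` holds
    the names of parameters added in the current release. Because Python's
    sort is stable, the relative order of existing parameters is preserved.
    """
    return sorted(parameters, key=lambda p: p["name"] in new_names)

params = [
    {"name": "removeDuplicates"},  # newly added in this release
    {"name": "file"},
    {"name": "targetFormat"},
]
print([p["name"] for p in sort_parameters(params, {"removeDuplicates"})])
# → ['file', 'targetFormat', 'removeDuplicates']
```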
&lt;p&gt;There are other options available, such as using custom &lt;span class="Code"&gt;x-position&lt;/span&gt; attributes, but I've found that relying on such non-standard features often causes problems later with some client generators. So, let's keep it simple and stupid😀&lt;/p&gt;
&lt;p&gt;Happy customizing!&lt;/p&gt;</description>
      <pubDate>Sun, 16 Apr 2023 20:15:53 Z</pubDate>
      <a10:updated>2023-04-16T20:15:53Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1436</guid>
      <link>https://blog.dangl.me/archive/signing-electron-apps-before-bundling-with-azure-key-vault-and-ev-code-signing-certificates/</link>
      <category>Continuous Integration</category>
      <title>Signing Electron Apps before Bundling with Azure Key Vault and EV Code Signing Certificates</title>
      <description>&lt;p&gt;Just a few days ago, I've blogged about &lt;a data-udi="umb://document/97c1f03c1f504869b17557ed82d8069a" href="/archive/running-fully-automated-e2e-tests-in-electron-in-a-docker-container-with-playwright/" title="Running Fully Automated E2E Tests in Electron in a Docker Container with Playwright"&gt;running E2E tests for an Electron app&lt;/a&gt;. But, once tested and verified, the next step is deployment.&lt;/p&gt;
&lt;p&gt;If you've ever shipped an application for Windows users, you might be aware of the Windows SmartScreen filter. It's basically a check that might pop up and warn your users in case Microsoft doesn't have any data on your app being trustworthy. That's actually a great feature, since it encourages application developers to sign their programs and therefore increase overall app security. Except for when your build pipeline becomes compromised, as in the recent SolarWinds attack...&lt;/p&gt;
&lt;p&gt;However, I'd like to focus on the technical aspects. For code signing certificates, there are two options available: Either a regular certificate, which has quite a low bar for verification and is available for less than 100,- € per year, or an &lt;em&gt;Extended Validation&lt;/em&gt; (EV) certificate. The EV ones are much pricier, and do require the certificate issuer to perform a more in-depth check before handing it out. You pay more, but you get more: In contrast to regular certificates, which require a certain number of installs until Windows SmartScreen recognizes them as trustworthy, EV certificates work out of the box and don't show any warnings to your users. So, for anything commercial, you probably want to go with an EV certificate.&lt;/p&gt;
&lt;p&gt;But it also comes with a catch: You usually need some hardware device to secure it, often a USB dongle or a &lt;em&gt;Hardware Security Module&lt;/em&gt; (HSM). That's just a requirement for the heightened security around EV certificates, but it also means that it's harder to integrate into automated build workflows. You don't want to be in a position where you need a dedicated build computer in your office into which you manually insert some USB dongle to generate a release.&lt;/p&gt;
&lt;p&gt;Here comes Azure Key Vault: It's a service in the Azure Cloud which just kind of does everything around securing stuff, hence the name. It's typically used for things like secret management, e.g. to handle configuration for cloud services and the like, or just for plain SSL certificate management, e.g. with Let's Encrypt via &lt;a href="https://github.com/shibayan/keyvault-acmebot" title="GitHub - shibayan - keyvault-acmebot"&gt;shibayan's keyvault-acmebot&lt;/a&gt;. The part that's interesting for us in this article, however, is that the premium tier of Azure Key Vault allows you to store certificates on an HSM, securely on Microsoft infrastructure.&lt;/p&gt;
&lt;p&gt;When going that way, you give up the option of exporting the certificate, e.g. temporarily to your build server to sign applications. That would introduce too much of a security risk, so you need to use the remote signing feature of Azure Key Vault to sign your apps.&lt;/p&gt;
&lt;p&gt;And this is where this blog post is headed: When building an Electron app, you've got two steps. First, you build &amp;amp; publish the app, then it gets packaged. Now, you want to sign both outputs, the actual &lt;span class="Code"&gt;*.exe&lt;/span&gt; files in the bundle as well as the installer itself. So, I'll show you some code that performs the build, calls a hook between the publish and package steps to sign everything, and then finally signs the installer itself. All done via remote signing in Azure Key Vault.&lt;/p&gt;
&lt;p&gt;I'm a big fan of &lt;a data-udi="umb://document/6ecb73fad5484466b0fc73c1a80f79c3" href="/archive/escalating-automation-the-nuclear-option/" title="Escalating Automation - The Nuclear Option"&gt;the NUKE build system&lt;/a&gt;, so I'm using that for the actual build automation. The concept should be easily translatable to whatever system you're using. So, let's look at the code finally!&lt;/p&gt;
&lt;p&gt;First, we're starting with this build script:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f3f82bd3d865e9c66466c38c247d6584" data-gist-file="BuildElectron.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The part above is the target in NUKE that generates the Electron bundle. In this specific example, we're &lt;a data-udi="umb://document/9476ba07bd354057aece5444330de399" href="/archive/transform-your-aspnet-core-website-into-a-single-file-executable-desktop-app/" title="Transform your ASP.NET Core Website into a Single File Executable Desktop App"&gt;even using Electron.NET&lt;/a&gt;, since the app itself is really just a website that some customers want to use as a desktop application. I won't go into too much detail here, since the build process should be pretty straightforward. It's really just a wrapper around electron-builder with something injected here and there.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f3f82bd3d865e9c66466c38c247d6584" data-gist-file="electron.manifest.json"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This second part is a bit more interesting here. The &lt;span class="Code"&gt;electron.manifest.json&lt;/span&gt; is your app definition, we're configuring, for example, that the Windows target should use the &lt;span class="Code"&gt;nsis&lt;/span&gt; installer. But the real fun is on &lt;span class="Code"&gt;line 11&lt;/span&gt;: We're defining a JavaScript file that is called during the build, in our case right after the built-in signing task is completed.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f3f82bd3d865e9c66466c38c247d6584" data-gist-file="electronAfterPackHook.js"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;span class="Code"&gt;electronAfterPackHook.js&lt;/span&gt;, again, is a simple script. It's really just calling back out to the NUKE build system to run the &lt;span class="Code"&gt;SignExecutables&lt;/span&gt; target and passes the current output directory as an argument. The magic here is that this is now called during your build, after the executable has been created and patched with the icon, but before it's being bundled up by the installer.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f3f82bd3d865e9c66466c38c247d6584" data-gist-file="BuildSign.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, we're using the &lt;span class="Code"&gt;AzureSign&lt;/span&gt; package to sign our executable with the certificate in Azure Key Vault.&lt;/p&gt;
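&lt;p&gt;Conceptually, this signing step does something like the following sketch - written in Python instead of the actual NUKE/C# code, with the signing invocation left as a placeholder since the real one lives in the gist above:&lt;/p&gt;

```python
import subprocess
from pathlib import Path

def sign_executables(output_dir: Path, sign_command: list[str]) -> list[Path]:
    """Find every *.exe in the packaged output directory and hand each one
    to a signing CLI. `sign_command` is a placeholder for the real remote
    Azure Key Vault signing invocation."""
    signed = []
    for exe in sorted(output_dir.rglob("*.exe")):
        # The executable path is appended as the final CLI argument.
        subprocess.run([*sign_command, str(exe)], check=True)
        signed.append(exe)
    return signed
```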
&lt;p&gt;After everything's done here, the build continues. At the end, the parent NUKE process just signs the installers and we're done. The next step would be publishing, for us that means generating the documentation and deploying the artifact.&lt;/p&gt;
&lt;p&gt;So, you've seen we're really doing something like in the movie &lt;em&gt;Inception&lt;/em&gt;: We're calling our build system from a hook in a build process initiated by the build system itself. It feels a bit Rube-Goldberg-y, but it works fine and is pretty simple to set up &amp;amp; understand.&lt;/p&gt;
&lt;p&gt;Happy signing!&lt;/p&gt;</description>
      <pubDate>Wed, 18 Aug 2021 20:42:54 Z</pubDate>
      <a10:updated>2021-08-18T20:42:54Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1435</guid>
      <link>https://blog.dangl.me/archive/running-fully-automated-e2e-tests-in-electron-in-a-docker-container-with-playwright/</link>
      <category>Continuous Integration</category>
      <title>Running Fully Automated E2E Tests in Electron in a Docker Container with Playwright</title>
      <description>&lt;p&gt;If you've read some of my previous articles, you probably know that I'm kind of obsessed with rigorous testing - I'm advocating for having a good, automated quality control for every project I encounter. It's probably one of the biggest time savers there is in software development, whether it's because you catch regressions early or you've got lots of infrastructure set up so that reproducing bugs is a matter of minutes instead of hours.&lt;/p&gt;
&lt;p&gt;For one of our apps, we've got a bit of a complex setup. The backend is a traditional ASP.NET Core application with the usual moving parts, a relational database, some blob storage and the compute part of the app itself. That's easy to test on its own.&lt;/p&gt;
&lt;p&gt;The frontend part deviates a bit from our regular architectures. Due to the nature of the app, it's built and deployed as an Electron application. Electron is a great tool that &lt;a data-udi="umb://document/9476ba07bd354057aece5444330de399" href="/archive/transform-your-aspnet-core-website-into-a-single-file-executable-desktop-app/" title="Transform your ASP.NET Core Website into a Single File Executable Desktop App"&gt;I've blogged about previously&lt;/a&gt;, but it comes at a cost. It gets the most flak for being inefficient, whether due to its large footprint (both on disk and in memory) or being slower than native applications. However, the upsides are great - since Electron is really just a wrapper around Chromium, it allows you to build desktop applications with all the tried &amp;amp; tested web technologies. This makes it a natural choice when your team usually builds web applications and can therefore transition seamlessly to an Electron project.&lt;/p&gt;
&lt;p&gt;However, Electron sometimes feels a bit like building on sand - it works really well, but there are a lot of moving parts under the hood. One obstacle we've encountered was around testing the full application in an end-to-end (E2E) way. To ensure such tests run in a controlled environment, independent of whatever host is currently executing, &lt;a data-udi="umb://document/97c1f03c1f504869b17557ed82d8069a" href="/archive/running-fully-automated-e2e-tests-in-electron-in-a-docker-container-with-playwright/" title="Running Fully Automated E2E Tests in Electron in a Docker Container with Playwright" data-anchor="#"&gt;we're using a small Docker setup&lt;/a&gt; to spin up the entire infrastructure, test it and then tear it down again. That works pretty well with regular web apps, but not so much with Electron.&lt;/p&gt;
&lt;p&gt;When we first started setting this up, we ran into lots of problems. Maybe the biggest was a lack of community information around this issue - it just seemed like not a lot of people are executing such tests, so there was not a lot to be found except small bits here and there. We managed to get it up and running in the end, but even an external consultant specializing in that area struggled to set it up. One of the major hurdles was that while there are lots of tools for automating browsers that work fine in Docker, there are not a lot of options when it comes to Electron.&lt;/p&gt;
&lt;p&gt;Then came last week, when we updated the Electron version. That immediately broke our build. It turned out that &lt;a href="https://github.com/electron-userland/spectron" title="GitHub - Spectron"&gt;Spectron, the test runner for Electron&lt;/a&gt;, doesn't have any active maintainers left. And it also looked like &lt;a href="https://github.com/electron-userland/spectron/issues/1021" title="GitHub - Spectron - Issue #1021"&gt;we're not the only ones affected by this&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Luckily, there's a new player around, trying to offer a modern and stable way for browser automation: &lt;a href="https://playwright.dev/" title="Playwright.dev"&gt;Playwright by Microsoft&lt;/a&gt;. It has official Electron support, is actively maintained and, being just a bit over a year old, offers a modern, easy-to-use API.&lt;/p&gt;
&lt;p&gt;After spending some hours trying to get the original test setup fixed, we decided to give Playwright a try. To our amazement, it just worked! It took about an hour to set everything up and migrate the tests. And our E2E tests were green again🚀&lt;/p&gt;
&lt;p&gt;I'll try to give you a condensed summary of what we did to get it running, and how it was set up. In reality, we're also spinning up the other services, like the backend, database and blob storage, and putting all the Docker containers in the same network, simulating the app as closely to the production environment as possible. But let's take a look at it file by file:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f2d65986085da494020edf07a5a6b96f" data-gist-file="Dockerfile"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We start with a Dockerfile. It's a rather simple one; the few things worth mentioning are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We're building from the &lt;span class="Code"&gt;node&lt;/span&gt; image, since a lot of dependencies are already present there&lt;/li&gt;
&lt;li&gt;Some tools need to be installed around display drivers, most importantly &lt;span class="Code"&gt;xvfb&lt;/span&gt; which will act as a virtual display inside the Docker container&lt;/li&gt;
&lt;li&gt;A custom entrypoint for the container is provided to execute a script at container start&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code data-gist-id="f2d65986085da494020edf07a5a6b96f" data-gist-file="entrypoint.sh"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The entrypoint itself isn't complicated, either. When looking around for running &lt;span class="Code"&gt;xvfb&lt;/span&gt; in Docker, you'll often find similar snippets. Ours was, in fact, also mostly copied from various sources. It's really just making sure that a virtual display is available before the actual command is run.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f2d65986085da494020edf07a5a6b96f" data-gist-file="common-setup.ts"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The &lt;span class="Code"&gt;common-setup.ts&lt;/span&gt; file is a base for all our tests. Here, we're using Playwright to launch a new instance of the app for every test. As you can see, the API could probably not be simpler here, yet it works flawlessly.&lt;/p&gt;
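&lt;p&gt;Since the gist only renders on the blog itself, here's a rough sketch of what such a per-test setup can look like with Playwright's Electron support - note that the entry point path and helper names below are placeholders, not our actual code:&lt;/p&gt;

```typescript
import { _electron as electron, ElectronApplication, Page } from 'playwright';

// Launches a fresh instance of the Electron app for a single test.
// './dist/main.js' is a placeholder for your compiled Electron entry point.
export async function launchApp(): Promise<{ app: ElectronApplication; window: Page }> {
  const app = await electron.launch({ args: ['./dist/main.js'] });
  // Wait for the first BrowserWindow the app opens
  const window = await app.firstWindow();
  return { app, window };
}

// Usage in a test: launch, interact, tear down again
// const { app, window } = await launchApp();
// await window.click('#login-button');
// await app.close();
```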
&lt;p&gt;&lt;code data-gist-id="f2d65986085da494020edf07a5a6b96f" data-gist-file="maine2e.ts"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, some tests. They're shortened, but give you a small indication of what's possible with Playwright, or with automated E2E tests in general.&lt;br /&gt;We usually aim for two things with such tests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We want quick feedback to see if the app is generally working: is it able to start, do we get any script errors in the console, can users log in, and so on&lt;/li&gt;
&lt;li&gt;Additionally, we usually have full E2E tests for the &lt;em&gt;critical paths&lt;/em&gt; in our applications. For example, &lt;em&gt;can new users register, confirm their email, log in, start a trial and then upgrade to a paid subscription&lt;/em&gt;? Generally, things a QA department would do but that are easy to automate&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It's sometimes tedious to write such tests, and you most likely won't ever have your full app covered with them. But they're a huge confidence boost, especially when your pipeline automatically deploys right into production.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="f2d65986085da494020edf07a5a6b96f" data-gist-file="startup.sh"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, to actually run the tests, we're just spinning up the Docker container with a script like the one above. The important part here is that we're now calling &lt;span class="Code"&gt;docker run&lt;/span&gt; to execute the end-to-end tests in the container. We're doing some more optimizations, like using &lt;span class="Code"&gt;tmpfs&lt;/span&gt; for the &lt;span class="Code"&gt;node_modules&lt;/span&gt; folder and mounting the app directory into the container. But in the end, the tests run and we get a test results file that can be processed by CI systems.&lt;/p&gt;
&lt;p&gt;So, happy testing!&lt;/p&gt;</description>
      <pubDate>Thu, 12 Aug 2021 07:03:26 Z</pubDate>
      <a10:updated>2021-08-12T07:03:26Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1434</guid>
      <link>https://blog.dangl.me/archive/updating-azure-app-service-on-linux-for-docker-via-webhooks-from-c/</link>
      <category>Web Development</category>
      <title>Updating Azure App Service on Linux for Docker via Webhooks from C#</title>
      <description>&lt;p&gt;If you're using Azure App Service to run Docker images, you already know how easy it is to get your app running. Azure nicely abstracts everything for you - it may be a bit more expensive than hosting your own Kubernetes cluster on cloud VMs, but it makes more than up for it in simplicity.&lt;/p&gt;
&lt;p&gt;Among the cooler features of using Azure App Service to run Docker is that you automatically get a &lt;a data-udi="umb://document/0755700f9ef045ee98f516729840107e" href="/archive/deploy-to-azure-app-service-with-no-downtime-by-using-staging-slots/" title="Deploy to Azure App Service with no Downtime by using Staging Slots"&gt;green/blue deployment&lt;/a&gt;. All you have to do is update the Docker image for the App Service, and after a few moments you're running the latest version, without any downtime. This is super convenient when you're deploying to Azure Container Registry - setting up automatic updates is then just a single click.&lt;br /&gt;However, it's a bit more complicated when you're using something else, e.g. GitHub Packages, for hosting your Docker containers, since then you're responsible for calling a webhook in Azure to let it know there's a new container version available.&lt;/p&gt;
&lt;p&gt;That's in itself not very complicated, but there's one little point that's worth mentioning, as you can see in the code below:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="70473da2fb96f38f0e0e4091a3dd71c1" data-gist-file="DockerWebhook.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The webhook URL in Azure includes a &lt;span class="Code"&gt;UserInfo&lt;/span&gt; part, meaning it's got &lt;span class="Code"&gt;username:password&lt;/span&gt; right there in the URL. Most tools will support that out of the box, but in case you're using .NET to send the webhooks, you should be aware that you need to parse that information and use it with HTTP Basic Authentication.&lt;/p&gt;
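&lt;p&gt;To illustrate the pitfall outside of .NET, here's the same idea sketched in TypeScript with only Node.js built-ins - the webhook URL below is a made-up placeholder, the real one comes from the Azure portal:&lt;/p&gt;

```typescript
// Placeholder webhook URL - Azure generates one per App Service
const webhookUrl = 'https://$my-app:s3cretPassw0rd@my-app.scm.azurewebsites.net/api/registry/webhook';

const parsed = new URL(webhookUrl);

// The UserInfo part (username:password) has to be turned into an HTTP Basic
// Authentication header, since many HTTP clients won't do that for you
const credentials = `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`;
const authHeader = 'Basic ' + Buffer.from(credentials).toString('base64');

// Strip the credentials from the URL before sending the request
parsed.username = '';
parsed.password = '';

// Then POST to the webhook, e.g.:
// await fetch(parsed.toString(), { method: 'POST', headers: { Authorization: authHeader } });
```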
&lt;p&gt;By the way, for real-life implementations, you probably don't want to hardcode the webhook URI in your build script😀&lt;/p&gt;
&lt;p&gt;Happy deploying!&lt;/p&gt;</description>
      <pubDate>Sun, 16 May 2021 17:43:01 Z</pubDate>
      <a10:updated>2021-05-16T17:43:01Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1433</guid>
      <link>https://blog.dangl.me/archive/lets-use-nuke-to-quickly-deploy-an-app-to-azure-via-zip-deployment/</link>
      <category>Continuous Integration</category>
      <title>Let's use NUKE to Quickly Deploy an App to Azure via Zip Deployment</title>
      <description>&lt;p&gt;It's no secret I'm a big fan of the &lt;a href="http://www.nuke.build/" title="NUKE Build"&gt;NUKE build system&lt;/a&gt;, it's making my life just so much more convenient! So here's a small real world example that demonstrates how easy it is to deploy a static website to an Azure App Service using NUKE and &lt;a href="https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file-or-url" title="Kudu Zip Deployment"&gt;Kudu Zip Deployment&lt;/a&gt;. You can &lt;a href="https://github.com/GeorgDangl/antlr-calculator" title="GitHub - GeorgDangl - antlr-calculator"&gt;check out the repository here&lt;/a&gt; if you want to see it all.&lt;/p&gt;
&lt;p&gt;Recently, I was going through some old repositories on &lt;a href="https://github.com/GeorgDangl" title="GitHub - GeorgDangl"&gt;my GitHub account&lt;/a&gt; for the occasional cleaning - just checking if something needs an update, or is no longer working. &lt;a href="https://github.com/GeorgDangl/antlr-calculator" title="GitHub - GeorgDangl - antlr-calculator"&gt;My antlr-calculator project&lt;/a&gt; hadn't been updated in a while, and the demo site was still hosted on a virtual server. So, a great task to kill an evening was found! I decided to update the build system from just some CLI commands to NUKE, and to additionally move the hosting to an Azure App Service.&lt;/p&gt;
&lt;p&gt;Many think that to use NUKE to automate your build, you need a .NET project. But while NUKE leverages C# to set up your build, you can actually use it to automate any task you want. For example, here's how you build an NPM package:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="53b51501765672cb4ccc60973b4068ce" data-gist-file="Build.cs" data-gist-line="76-94"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Or, in this case, build &amp;amp; deploy a static website to Azure. Let's go through this step by step. First, we define the &lt;span class="Code"&gt;Target DeployDemo&lt;/span&gt; in Nuke. It's set up to invoke the &lt;span class="Code"&gt;Clean&lt;/span&gt; target and specifies some required parameters:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="53b51501765672cb4ccc60973b4068ce" data-gist-file="Build.cs" data-gist-line="96-102"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Next, we use NPM to build the site and update a placeholder in &lt;span class="Code"&gt;index.html&lt;/span&gt; for the version:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="53b51501765672cb4ccc60973b4068ce" data-gist-file="Build.cs" data-gist-line="104-108"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then we get the Base64 encoded authentication header value to deploy to Azure and zip the output into a Zip file:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="53b51501765672cb4ccc60973b4068ce" data-gist-file="Build.cs" data-gist-line="110-112"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, we POST this zip file to the Kudu API and wait for a response:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="53b51501765672cb4ccc60973b4068ce" data-gist-file="Build.cs" data-gist-line="114-120"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;That's it! Now you've deployed a small website to Azure. With build automation servers, &lt;a href="https://github.com/GeorgDangl/antlr-calculator/blob/develop/Jenkinsfile" title="GitHub - GeorgDangl - antlr-calculator - Jenkinsfile"&gt;like Jenkins&lt;/a&gt;, GitHub Actions or Azure Pipelines, you can configure your build scripts to run on every commit.&lt;/p&gt;
&lt;p&gt;Happy automating!&lt;/p&gt;</description>
      <pubDate>Mon, 12 Oct 2020 20:15:01 Z</pubDate>
      <a10:updated>2020-10-12T20:15:01Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1432</guid>
      <link>https://blog.dangl.me/archive/running-sql-server-integration-tests-in-net-core-projects-via-docker/</link>
      <category>Continuous Integration</category>
      <title>Running SQL Server Integration Tests in .NET Core Projects via Docker</title>
      <description>&lt;p&gt;I'm a huge fan of test driven development. It helps me deliver fewer defects in my apps and gives me confidence that my changes don't break any existing features.&lt;/p&gt;
&lt;p&gt;While I write mostly unit tests that run quickly, I try to also have lots of integration tests for all API endpoints in my web applications. This typically involves code that hits a database, so I had to find some way to easily spin up database instances for my tests. I'm a firm believer that running tests should only involve a single command, so relying on any existing database on the build agent was a hard no for me. Additionally, multiple instances might run in parallel on the same machine, e.g. when different branches are built at the same time.&lt;/p&gt;
&lt;p&gt;My initial approach has always involved using a SQLite in-memory database, actually a fresh one for every single test. This worked well, was fast enough and dead easy to set up. However, it had one huge problem: I'm not running SQLite in production, so I wasn't really testing the system under the same circumstances, and additionally, I could not test features that involved database-specific behavior.&lt;/p&gt;
&lt;p&gt;Typically in such a situation, most developers would use a containerized database, such as Microsoft SQL Server. But most approaches like this that I've seen were a bit ugly: the database usually had to be spun up manually before running tests, or at least via a build script. But I wanted a way that works independently of how the tests are run - whether via my build script, by running &lt;span class="Code"&gt;dotnet test&lt;/span&gt; or just by clicking &lt;em&gt;Run&lt;/em&gt; in the Visual Studio Test Explorer.&lt;/p&gt;
&lt;p&gt;Luckily, I've found the great &lt;a href="https://github.com/dotnet/Docker.DotNet" title="GitHub - Docker.DotNet"&gt;Docker.DotNet library&lt;/a&gt; which allows you to access the Docker daemon directly in your C# code, e.g. during test setup and teardown methods. Integrating and setting up such tests is then pretty straightforward, but requires a bit of code to get it reliably working. Just follow the snippets below and you should be able to set up something similar!&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="20ecb3873e78053abfc31c0d0458dfb2" data-gist-file="DockerSqlDatabaseUtilities.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The class &lt;span class="Code"&gt;DockerSqlDatabaseUtilities&lt;/span&gt; is where Docker.DotNet is referenced - its purpose is to ensure that a Microsoft SQL Server Docker image is running and ready for connections.&lt;/p&gt;
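&lt;p&gt;The "ready for connections" part boils down to a polling loop. Sketched in TypeScript just to illustrate the pattern (the gist does this in C# against the SQL Server port; the names here are made up):&lt;/p&gt;

```typescript
import * as net from 'node:net';

// Polls a TCP port until it accepts connections - the same kind of readiness
// check a test fixture performs before handing the container to the tests.
// Note: for SQL Server specifically you'd additionally retry an actual login,
// since the port can open before the server is fully ready.
export async function waitForPort(host: string, port: number, timeoutMs = 60_000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const connected = await new Promise<boolean>((resolve) => {
      const socket = net.connect({ host, port }, () => {
        socket.end();
        resolve(true);
      });
      socket.on('error', () => resolve(false));
    });
    if (connected) {
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // retry until the container is up
  }
  throw new Error(`Port ${host}:${port} did not become ready within ${timeoutMs} ms`);
}
```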
&lt;p&gt;&lt;code data-gist-id="20ecb3873e78053abfc31c0d0458dfb2" data-gist-file="SqlServerDockerCollectionFixture.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Next, &lt;span class="Code"&gt;SqlServerDockerCollectionFixture&lt;/span&gt; is a test fixture that essentially just orchestrates the Docker setup and implements XUnit's &lt;span class="Code"&gt;IAsyncLifetime&lt;/span&gt; interface.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="20ecb3873e78053abfc31c0d0458dfb2" data-gist-file="AssemblyInfo.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Further down, we're specifying a custom &lt;span class="Code"&gt;TestFramework&lt;/span&gt; in &lt;span class="Code"&gt;AssemblyInfo.cs&lt;/span&gt;, since we'll be running the test setup from the previous fixture class only once per assembly, and that's not supported out of the box in XUnit. You could refactor the code to internally track if the setup has already been performed, or simply use the &lt;a href="https://github.com/tomaszeman/Xunit.Extensions.Ordering" title="GitHub - Xunit.Extensions.Ordering"&gt;Xunit.Extensions.Ordering&lt;/a&gt; package as the test framework😀&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="20ecb3873e78053abfc31c0d0458dfb2" data-gist-file="TestHelper.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Another class I'm using is &lt;span class="Code"&gt;TestHelper&lt;/span&gt;. You can guess from the name that it's offering supporting methods during testing, such as providing an in memory instance of our &lt;a href="https://www.dangl-it.com/products/danglidentity/" title="Dangl IT GmbH - Dangl.Identity Product Site"&gt;OpenID Service Dangl.Identity&lt;/a&gt; and performing the actual setup of our in memory ASP.NET Core backend, which is what we actually want to test.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="20ecb3873e78053abfc31c0d0458dfb2" data-gist-file="IntegrationTestBase.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, we've got an abstract base class &lt;span class="Code"&gt;IntegrationTestBase&lt;/span&gt; that all our tests inherit from. &lt;span class="Code"&gt;TestHelper&lt;/span&gt; is actually provided as a dedicated instance per test, which means that we can run all tests in parallel yet still fully isolated from each other.&lt;/p&gt;
&lt;p&gt;Does this approach work for you? Tell me in the comments!&lt;/p&gt;
&lt;p&gt;Happy testing!&lt;/p&gt;</description>
      <pubDate>Tue, 22 Sep 2020 19:51:36 Z</pubDate>
      <a10:updated>2020-09-22T19:51:36Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1431</guid>
      <link>https://blog.dangl.me/archive/simple-and-quick-way-to-backup-jenkins-to-azure-blob-storage/</link>
      <category>Continuous Integration</category>
      <title>Simple and Quick Way to Backup Jenkins to Azure Blob Storage</title>
      <description>&lt;p&gt;For years, I've been a happy user of &lt;a href="https://www.jenkins.io/" title="Jenkins"&gt;Jenkins&lt;/a&gt; to automate all our Continuous Integration &amp;amp; Continuous Delivery CI/CD steps. Just recently, I've been evaluating some other, more modern platforms. While trying out GitHub Actions and Azure DevOps, I've been generally satisfied, but found that both don't fit perfectly for what I want. GitHub Actions is still in it's early stages and lacks many features, and with Azure DevOps I've felt the setup to be a bit too complicated and tightly integrated with Azure services. Jenkins just could do everything quite well while giving you lots of freedom.&lt;/p&gt;
&lt;p&gt;These findings resulted in me not changing the existing setup too much - two servers, one running Windows and one on Linux, worked fine so far. However, I've been a bit unsatisfied with the servers' performance characteristics. The instances were on some virtual servers that were approaching their 4 year mark, so I decided to switch to two up-to-date, dedicated machines.&lt;/p&gt;
&lt;p&gt;This led me to rethink the backup process. Previously, a cronjob just backed up all the Jenkins data daily to a network share that was provided by the server hosting company, which felt a bit outdated and made accessing backups more complicated than necessary. So I decided to just back up the data to Azure Blob Storage. However, there was no ready-made solution that I could find, so I had to roll my own.&lt;/p&gt;
&lt;p&gt;Luckily, Jenkins uses just file storage in its &lt;span class="Code"&gt;JENKINS_HOME&lt;/span&gt; directory for all configurations, from users to jobs to plugins, so backing up the configuration is actually pretty easy - just copy the parts you want to be backed up. For this, I decided to directly leverage Jenkins itself to run the backup jobs, with a simple script that backs up the data to a Zip file and uploads it to the cloud.&lt;/p&gt;
&lt;p&gt;Essentially, it boils down to two parts, all of them are &lt;a href="https://github.com/GeorgDangl/JenkinsBackup" title="GitHub - GeorgDangl - JenkinsBackup"&gt;available directly on GitHub&lt;/a&gt;. First, the &lt;span class="Code"&gt;Jenkinsfile&lt;/span&gt; which configures the job itself:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="aa7b522d0d889e99cd45e77448b04a6e" data-gist-file="Jenkinsfile"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;It's really just doing two things: specifying a trigger that runs once a month and then invoking the actual backup script. If your master is not a Windows machine, just call &lt;span class="Code"&gt;bash build.sh BackupInstance&lt;/span&gt; instead of the PowerShell version.&lt;/p&gt;
&lt;p&gt;The script itself leverages the &lt;a href="http://www.nuke.build/" title="nuke.build"&gt;NUKE build automation tool&lt;/a&gt;, which itself is an awesome asset that &lt;a data-udi="umb://document/6ecb73fad5484466b0fc73c1a80f79c3" href="/archive/escalating-automation-the-nuclear-option/" title="Escalating Automation - The Nuclear Option"&gt;I've blogged about previously&lt;/a&gt;. It's an engine that lets you write build scripts in .NET and execute them anywhere. This backup script is a great example - it doesn't require anything preinstalled, you just run &lt;span class="Code"&gt;build.cmd&lt;/span&gt; and it works! &lt;a href="https://github.com/GeorgDangl/JenkinsBackup/blob/master/build/Build.cs#L116" title="GitHub - GeorgDangl - JenkinsBackup - Build.cs" data-anchor="#L116"&gt;You can view the full script here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy Automating!&lt;/p&gt;</description>
      <pubDate>Sun, 14 Jun 2020 14:52:13 Z</pubDate>
      <a10:updated>2020-06-14T14:52:13Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1430</guid>
      <link>https://blog.dangl.me/archive/improving-aspnet-core-end-to-end-tests-with-selenium-docker-images/</link>
      <category>Web Development</category>
      <title>Improving ASP.NET Core End-to-End Tests with Selenium Docker Images</title>
      <description>&lt;p&gt;Last year, &lt;a data-udi="umb://document/399075438e934a2ebc7c10226d8cb86c" href="/archive/performing-end-to-end-tests-for-a-net-core-web-api-with-an-angular-frontend-in-docker-with-jenkins/" title="Performing End-to-End Tests for a .NET Core Web API with an Angular Frontend in Docker with Jenkins"&gt;I've blogged about automating end-to-end (E2E) tests&lt;/a&gt; with an ASP.NET Core application. Quickly summarized, you're setting up your environment locally via Docker and then use Seleniums ChromeDriver to automate UI tests for your web application. This worked well, but had one big drawback - when you're executing the tests, you need to have Chrome installed on the build agent or on your local machine. While most who do web development probably have Chrome available, it's a bit of a hassle with testing - you need to make sure that your Selenium version matches the current Chrome version.&lt;/p&gt;
&lt;p&gt;Luckily, &lt;a href="https://github.com/SeleniumHQ/docker-selenium" title="GitHub - SeleniumHQ - docker-selenium"&gt;Selenium provides Docker containers&lt;/a&gt; with an embedded Chrome and their own RemoteWebDriver. This means that to access an automated browser via Selenium, you just have to spin up a container and connect to it. That's pretty great and makes versioning really easy!&lt;/p&gt;
&lt;p&gt;Building on &lt;a data-udi="umb://document/399075438e934a2ebc7c10226d8cb86c" href="/archive/performing-end-to-end-tests-for-a-net-core-web-api-with-an-angular-frontend-in-docker-with-jenkins/" title="Performing End-to-End Tests for a .NET Core Web API with an Angular Frontend in Docker with Jenkins"&gt;the previous post&lt;/a&gt;, you just have to follow these steps:&lt;/p&gt;
&lt;h2&gt;Docker Configuration&lt;/h2&gt;
&lt;p&gt;&lt;code data-gist-id="7f33008d2fffdef4eee932eb2a8dde90" data-gist-file="docker-compose.yml" data-gist-highlight-line="43-47"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I'm using the &lt;span class="Code"&gt;selenium/standalone-chrome:3.141.59&lt;/span&gt; image and make sure to assign it enough memory, as per the official recommendation.&lt;/p&gt;
&lt;h2&gt;Test Setup&lt;/h2&gt;
&lt;p&gt;&lt;code data-gist-id="7f33008d2fffdef4eee932eb2a8dde90" data-gist-file="E2eTestsBase.cs" data-gist-highlight-line="32-37"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;In the test project, I simply check if the containerized Selenium driver should be used and return a &lt;span class="Code"&gt;RemoteWebDriver&lt;/span&gt; instead of a &lt;span class="Code"&gt;ChromeDriver&lt;/span&gt;. For context, I do sometimes let the tests run with my local version of Chrome and have the debugger attached, so I'm not using the headless mode there but instead can track the actions.&lt;/p&gt;
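&lt;p&gt;The decision itself is trivial and works the same in any language. As a sketch in TypeScript - the environment variable names here are hypothetical placeholders, the real check lives in the C# test base class above:&lt;/p&gt;

```typescript
// Returns the Selenium hub URL when the containerized driver should be used,
// or null to fall back to a local, visible ChromeDriver for debugging.
// Both environment variable names are made-up placeholders.
export function getSeleniumEndpoint(env: Record<string, string | undefined>): string | null {
  if (env.USE_DOCKERIZED_SELENIUM !== 'true') {
    return null;
  }
  // Default to the service name from the Docker compose network
  return env.SELENIUM_REMOTE_URL ?? 'http://selenium:4444/wd/hub';
}
```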
&lt;h2&gt;Browser Automation&lt;/h2&gt;
&lt;p&gt;&lt;code data-gist-id="7f33008d2fffdef4eee932eb2a8dde90" data-gist-file="SignupAsync.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Finally, here's a simple method that automates the sign up process. You'll note that writing E2E tests is quite cumbersome, because you're really just describing step-by-step what's happening. But you'll be able to test your actual project in real world conditions and have it run automatically, which is pretty much worth it!&lt;br /&gt;You're now able to fully test your app in any environment, all you need is Docker.&lt;/p&gt;
&lt;p&gt;Happy testing!&lt;/p&gt;</description>
      <pubDate>Wed, 20 May 2020 13:28:10 Z</pubDate>
      <a10:updated>2020-05-20T13:28:10Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1426</guid>
      <link>https://blog.dangl.me/archive/executing-nuke-build-scripts-on-linux-machines-with-correct-file-permissions/</link>
      <category>Linux</category>
      <title>Executing NUKE Build Scripts on Linux Machines with Correct File Permissions</title>
      <description>&lt;p&gt;&lt;a data-udi="umb://document/6ecb73fad5484466b0fc73c1a80f79c3" href="/archive/escalating-automation-the-nuclear-option/" title="Escalating Automation - The Nuclear Option"&gt;I've blogged before&lt;/a&gt; about the &lt;a href="http://www.nuke.build/" title="NUKE Build - Homepage"&gt;NUKE build system&lt;/a&gt;, a great tool written by Matthias Koch that allows you to define all your build steps in a C# project, complete with full IDE integration and lots of helpers to get you started quickly. Since work on NUKE has begun almost three years ago, it's grown to be a fully featured build system with a great community. We're using it in all of our projects at Dangl&lt;strong&gt;IT&lt;/strong&gt;, and the experience so far has been great!&lt;/p&gt;
&lt;p&gt;One thing, though, that's always bugged us was that when we invoked the build on Linux machines, we always had to explicitly prefix it with the shell command, like &lt;span class="Code"&gt;bash build.sh Test -Configuration Debug&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;We could never directly invoke the build scripts comfortably on Linux, no matter whether the project had NUKE added over two years ago or was a fresh install. First, we thought it might have been related to line ending differences, since all our development happens on Windows machines. That could quickly be checked, and it wasn't the case - checkout worked as expected on Linux.&lt;/p&gt;
&lt;p&gt;After some digging to find out what we were doing wrong, it turned out that the culprit was the executable permissions of the scripts. If you're on Linux or Mac, git will recognize scripts as executable files and mark them as such. If you're on Windows, however, they're simply added as regular files. The answer from &lt;a href="https://stackoverflow.com/questions/40978921/how-to-add-chmod-permissions-to-file-in-git" title="StackOverflow - How to add chmod permissions to file in GIT?"&gt;Antwane at StackOverflow&lt;/a&gt; explains it quite nicely, but in short, you just set the executable permission via git: &lt;span class="Code"&gt;git update-index --chmod=+x build.cmd&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Before, we saw that the file permissions were 644, or read/write:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://blog.dangl.me/media/1193/permissions_wrong_small.png" alt="Wrong file permissions in git for executable scripts" data-udi="umb://media/840ef114ac9f47159b65596a73f9d2b7" /&gt;&lt;/p&gt;
&lt;p&gt;After updating the index, we get the correct permissions, including executable 755:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://blog.dangl.me/media/1194/permissions_correct_small.png" alt="Build script permissions in git with executable enabled" data-udi="umb://media/6c1788f780384872b7b48697572d1ad0" /&gt;&lt;/p&gt;
&lt;p&gt;After you've done that, commit your changes. Now, your scripts should also work on Linux as expected. At least, &lt;a href="https://github.com/GeorgDangl/Dangl.SevDeskExport/runs/449039079?check_suite_focus=true" title="GitHub - Dangl.SevDeskExport - Actions Output" data-anchor="?check_suite_focus=true"&gt;it worked for us&lt;/a&gt;!&lt;/p&gt;
&lt;p&gt;Happy building!&lt;/p&gt;</description>
      <pubDate>Sun, 16 Feb 2020 16:04:04 Z</pubDate>
      <a10:updated>2020-02-16T16:04:04Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1425</guid>
      <link>https://blog.dangl.me/archive/cancel-obsolete-http-requests-in-rxnet-with-the-switch-operator/</link>
      <category>DotNet</category>
      <title>Cancel Obsolete Http Requests in Rx.NET with the Switch Operator</title>
      <description>&lt;p&gt;Using Observables in Rx.NET is a super fun and efficient way to organize data flows in your applications. While it's widely used for front end development - it's basically a core part of Angular - it's actually a great library to use in any projects where you're using loosely coupled services and state management.&lt;/p&gt;
&lt;p&gt;One of these use cases is having a filtered list. Imagine you've got a small app that shows you some table, along with a single text field for filtering. Imagine also that this list is fed from some asynchronous source, so it's not an in memory filter operation but maybe the response from a Http call.&lt;/p&gt;
&lt;p&gt;Now, when you start typing in this box, what should happen? In the simplest case, without delays or checks, every letter typed in that filter box sends a Http request. That's probably a bit of an unoptimized approach, but it should work. This could look something like the following:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="3e87c41e0abf3b9ec79b99d7ad8db8e0" data-gist-file="first_approach.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;What's actually happening is that you're creating a stream of events in your code. Whenever you change the url, you're sending a request. When that request finishes, you're updating the list. The problem comes when the requests don't complete in the order they've been sent. Your user might have typed &lt;em&gt;pizza&lt;/em&gt;, deleted that and then typed &lt;em&gt;pasta&lt;/em&gt;. If the first request was, for really any reason, delayed, you might end up showing &lt;em&gt;pizza&lt;/em&gt; results for a &lt;em&gt;pasta&lt;/em&gt; query!&lt;/p&gt;
&lt;p&gt;To resolve this, you want to track which requests have been sent and discard all but the latest one. In rxjs, you'd be using the &lt;span class="Code"&gt;switchMap&lt;/span&gt; operator. In Rx.NET, it's simply called &lt;span class="Code"&gt;Switch&lt;/span&gt; and expects you to either provide a &lt;span class="Code"&gt;Task&lt;/span&gt; or another &lt;span class="Code"&gt;IObservable&lt;/span&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="3e87c41e0abf3b9ec79b99d7ad8db8e0" data-gist-file="with_cancellation.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;What's happening here is that only the latest call will come through. So when you've got more than one request running at the same time, you're never again getting stale data.&lt;/p&gt;
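&lt;p&gt;The core idea behind &lt;span class="Code"&gt;Switch&lt;/span&gt; can also be sketched without any Rx library at all - every new request simply invalidates all in-flight ones. Here's a minimal TypeScript illustration (the names are made up):&lt;/p&gt;

```typescript
// Wraps an async "send" so that only the most recent call may publish a result.
// This mirrors what Switch / switchMap does: late responses from superseded
// requests are simply discarded.
function createLatestSwitch<TArg, TResult>(
  send: (arg: TArg) => Promise<TResult>,
  onResult: (result: TResult) => void
): (arg: TArg) => Promise<void> {
  let latestCallId = 0;
  return async (arg: TArg) => {
    const callId = ++latestCallId;
    const result = await send(arg);
    if (callId === latestCallId) {
      onResult(result); // still the newest request - publish the result
    }
    // otherwise: a newer request was started in the meantime, drop this result
  };
}
```

&lt;p&gt;With that wrapper, a delayed &lt;em&gt;pizza&lt;/em&gt; response is silently dropped once &lt;em&gt;pasta&lt;/em&gt; has been requested - which is exactly the behavior &lt;span class="Code"&gt;Switch&lt;/span&gt; gives you.&lt;/p&gt;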
&lt;p&gt;If you want to dive deeper into this, you can take a look at &lt;a href="https://github.com/GeorgDangl/LightQuery/blob/dev/src/LightQuery.Client/PaginationBaseService.cs#L104-L138" title="GitHub - GeorgDangl - LightQuery - PaginationBaseService" data-anchor="#L104-L138"&gt;the source code for LightQuery&lt;/a&gt;, which uses this approach in conjunction with &lt;span class="Code"&gt;CancellationToken&lt;/span&gt;s. I'd also like to thank Brandon for &lt;a href="https://stackoverflow.com/questions/17836743/how-to-cancel-a-select-in-rx-if-it-is-not-finished-before-the-next-event-arrives#answer-17837998" title="StackOverflow - How to cancel a Select in RX if it is not finished before the next event arrives" data-anchor="#answer-17837998"&gt;providing a great answer at StackOverflow&lt;/a&gt;, which really goes into detail!&lt;/p&gt;
&lt;p&gt;Hope you're going to have fun with Rx.NET!&lt;/p&gt;</description>
      <pubDate>Thu, 30 Jan 2020 19:51:54 Z</pubDate>
      <a10:updated>2020-01-30T19:51:54Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1424</guid>
      <link>https://blog.dangl.me/archive/accessing-the-revit-api-in-callbacks-from-other-threads/</link>
      <category>DotNet</category>
      <title>Accessing the Revit API in Callbacks from Other Threads</title>
      <description>&lt;p&gt;Just this week, I've been doing some work with a plugin for Autodesk Revit. The general idea of the plugin is to show a window which basically just wraps a Chromium embedded browser to display a web UI and interact with the Revit API. The interaction itself is pretty straightforward - with &lt;a href="https://github.com/cefsharp/CefSharp" title="GitHub - cefsharp - CefSharp"&gt;CefSharp&lt;/a&gt;, you can bind JavaScript objects in the browser to .NET objects in the plugin and thus access Revit features.&lt;/p&gt;
&lt;p&gt;We've implemented that by having a singleton on the .NET side that listens to commands from the browser. The first, possibly naive approach, was to simply call our code whenever we got a message:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="b40a23b441be1575ac169d99d11e9f44" data-gist-file="RevitApi_Throws.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;However, this didn't work. While all the data was transferred correctly, whenever we accessed certain features of the Revit API, we got a &lt;span class="Code"&gt;SEHException&lt;/span&gt; from unmanaged code in Revit. A quick Google search yielded this bit of info: &lt;em&gt;"SEHExceptions are always indications of a bug in Revit."&lt;/em&gt;. Ok, we thought, but that's super basic stuff we're accessing, there's probably not an undiscovered bug with Revit here.&lt;/p&gt;
&lt;p&gt;Luckily, debugging this gave us a bit more info: The API is working correctly when accessed from the main thread, but throws when accessed by another one. So even though we weren't doing any UI operations, this seemed to be the root cause of our issue. Since you also can't invoke something on the main thread in Revit, with &lt;span class="Code"&gt;Application.Current.Dispatcher&lt;/span&gt; not being set, we fell back to using the &lt;span class="Code"&gt;UIApplication.Idling&lt;/span&gt; event and a stack to invoke our methods on the main thread:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="b40a23b441be1575ac169d99d11e9f44" data-gist-file="RevitApi_Correct.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This cost us a few hours, so I hope it's helpful to others!&lt;/p&gt;
&lt;p&gt;Happy integrating!&lt;/p&gt;</description>
      <pubDate>Tue, 21 Jan 2020 11:41:25 Z</pubDate>
      <a10:updated>2020-01-21T11:41:25Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1423</guid>
      <link>https://blog.dangl.me/archive/handling-datetimeoffset-in-sqlite-with-entity-framework-core/</link>
      <category>DotNet</category>
      <title>Handling DateTimeOffset in SQLite with Entity Framework Core</title>
      <description>&lt;p&gt;Recently, ASP.NET Core 3.0 was released, along with all it's supporting libraries like Entity Framework Core. In the process of &lt;a href="https://www.dangl-it.com/products/danglidentity/" title="Dangl IT GmbH - Dangl.Identity"&gt;migrating Dangl.Identity&lt;/a&gt; over to the new version, I discovered that some integration tests failed with this message:&lt;/p&gt;
&lt;p&gt;&lt;span class="Code"&gt;System.NotSupportedException : SQLite cannot order by expressions of type 'DateTimeOffset'. Convert the values to a supported type or use LINQ to Objects to order the results.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;The error message is pretty clear - SQLite with Entity Framework Core 3.0 no longer supports some operations when using &lt;span class="Code"&gt;DateTimeOffset&lt;/span&gt; properties in database models, as specified in the &lt;a href="https://docs.microsoft.com/en-us/ef/core/providers/sqlite/limitations#query-limitations" title="Microsoft Docs - SQLite Limitations" data-anchor="#query-limitations"&gt;official Microsoft Guidelines on limitations with SQLite&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The recommendation to switch to a supported type is great, but what should you use? Falling back to a regular &lt;span class="Code"&gt;DateTime&lt;/span&gt;, you'll lose the time zone information. And even if you store only UTC dates and never make a mistake anywhere you touch them, Entity Framework will always return &lt;span class="Code"&gt;DateTimeKind.Unspecified&lt;/span&gt; when retrieving values from the database. While you can work around that with some conversion via an &lt;span class="Code"&gt;EntityMaterializerSource&lt;/span&gt;, this feels awkward and error-prone.&lt;/p&gt;
&lt;p&gt;Luckily, the aptly named &lt;a href="https://github.com/aspnet/EntityFrameworkCore/issues/10784#issuecomment-415769754" title="GitHub - EntityFrameworkCore - DateTimeOffset issue" data-anchor="#issuecomment-415769754"&gt;user bugproof on GitHub posted a great snippet&lt;/a&gt; that attaches a built-in converter to all &lt;span class="Code"&gt;DateTimeOffset&lt;/span&gt; properties in your database model. Here's how I've implemented it in my database context class:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="b90370124720ed8fed9539509aafd155" data-gist-file="DatabaseContext.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The only drawback is that the conversion only supports up to millisecond precision, but for most use cases this is likely not a problem. In case you're comparing values, simply trim the last three digits from your values in your test code and you're good to go:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="b90370124720ed8fed9539509aafd155" data-gist-file="DateTimeOffsetTickSubtraction.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Happy modelling!&lt;/p&gt;</description>
      <pubDate>Tue, 15 Oct 2019 12:07:18 Z</pubDate>
      <a10:updated>2019-10-15T12:07:18Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1420</guid>
      <link>https://blog.dangl.me/archive/performing-end-to-end-tests-for-a-net-core-web-api-with-an-angular-frontend-in-docker-with-jenkins/</link>
      <category>Web Development</category>
      <title>Performing End-to-End Tests for a .NET Core Web API with an Angular Frontend in Docker with Jenkins</title>
      <description>&lt;p&gt;If you've been following my blog in the last few years, you might have noticed that I kind of like testing &amp;amp; automation. Mostly, I'm working with .NET and TypeScript in cloud and backend services, so plain old integration testing gets me pretty far. Recently, however, I've done a bit more on the frontend side and I wasn't happy with the way I setup my testing.&lt;/p&gt;
&lt;p&gt;Basically, I had separate tests for the frontend and backend, with just very rudimentary end-to-end tests that were run on a pre-production slot in Azure. If these passed, the slot was switched live and a new version was deployed.&lt;/p&gt;
&lt;p&gt;This approach worked, but had three big drawbacks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It was slow. The app had to be built and deployed to the cloud, so it could take up to 10 minutes until I had feedback&lt;/li&gt;
&lt;li&gt;It couldn't be run locally, since it really was more of a pre-deployment check than an actual test. This also meant I didn't get feedback on all code changes, only on those in a branch that got deployed&lt;/li&gt;
&lt;li&gt;Testing was limited to operations that didn't change any production data, such as creating new users or assigning roles&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It was a really uncomfortable situation to be in, and the solution was clear: Automate it, dockerize everything and have full end to end tests run on every single commit in a headless Chrome browser. It took me a bit to get started with all this, so in this article, I'll describe in detail how I did it.&lt;/p&gt;
&lt;h2&gt;What are end-to-end tests?&lt;/h2&gt;
&lt;p&gt;In automated testing, there are different types or categories of tests you write. The definitions are always a bit fuzzy, and everyone has a different opinion on them. The most fundamental tests are called unit tests, checking the behavior of a small, independent unit. Integration tests verify that multiple components in conjunction work as expected, usually with a longer run time than unit tests. Finally, automated end-to-end or e2e tests are performed on the whole application, from a user's perspective. Think browser or UI automation.&lt;/p&gt;
&lt;p&gt;You should call them what's most appropriate in your situation. For example, is an automated test checking the REST API endpoints of your backend, hooked up to a local database, still an integration test or already end-to-end? I'd say it's an integration test, since your users typically don't access the API directly. Except when they're developers themselves and do... So, don't get caught up in semantics😊&lt;/p&gt;
&lt;p&gt;For this article, end-to-end tests are tests that simulate a user's behavior in a browser. They exercise the frontend code and the backend together, simulating real user actions against a mirror of your production system.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;This post won't go into some details that are out of the focus here. I'll assume you have &lt;a data-udi="umb://document/8e45943c37d9451894adcf73e2449f5c" href="/archive/installing-and-configuring-jenkins-on-windows-server-with-iis/" title="Installing and Configuring Jenkins on Windows Server with IIS"&gt;some understanding of using Jenkins&lt;/a&gt; to automate your tests (or any other CI service, like Azure DevOps or AppVeyor), are familiar with Docker as well as basic unit testing approaches and frameworks such as xUnit. Additionally, I'm using &lt;a data-udi="umb://document/6ecb73fad5484466b0fc73c1a80f79c3" href="/archive/escalating-automation-the-nuclear-option/" title="Escalating Automation - The Nuclear Option"&gt;the NUKE build system&lt;/a&gt; to orchestrate the setup, but simple bash or PowerShell commands will be perfectly fine for you.&lt;/p&gt;
&lt;h2&gt;Docker Setup&lt;/h2&gt;
&lt;p&gt;The easiest way for me was to run all required services in Docker Compose. Let's take a quick look at the following script:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="docker-compose.yml"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;My setup consists of three images:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Dangl.Identity, which is the app under test. It's &lt;a href="https://identity.dangl-it.com" title="Dangl.Identity"&gt;an OpenID server with user, rights &amp;amp; role management and related services&lt;/a&gt;, built on IdentityServer4, that we're using at Dangl&lt;strong&gt;IT&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Dangl.Icons, which is a really small service that's similar to Gravatar, it essentially generates user icons&lt;/li&gt;
&lt;li&gt;SQL Server, to match the setup of the production environment&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The setup is app specific and likely quite different in your case, but the gist is that we'll have a full featured app with all required services, mirroring the production environment, available at &lt;em&gt;http://localhost:44848&lt;/em&gt; when running this composition.&lt;/p&gt;
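&lt;p&gt;For orientation, a composition along these lines could look as follows - the image names, internal ports and environment variables are made up for illustration and are not the actual file from the gist:&lt;/p&gt;

```yaml
# Illustrative sketch only - image names, ports and environment variables
# are made up and will differ from the actual compose file
version: '3'
services:
  dangl-identity:
    image: dangl/identity:latest              # the app under test
    ports:
      - "44848:80"                            # reachable at http://localhost:44848
    depends_on:
      - dangl-icons
      - sql-server
  dangl-icons:
    image: dangl/icons:latest                 # Gravatar-like user icon service
  sql-server:
    image: mcr.microsoft.com/mssql/server:2017-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=yourStrong(!)Password
```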
&lt;h2&gt;&lt;span&gt;Test Configuration&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span&gt;To actually perform the test, a mix of technologies is used: While xUnit is the test framework itself, I'm using Selenium for browser automation. A minimal .csproj looks like this:&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="Dangl.Identity.E2E.Tests.csproj"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;The main object you'll be using for e2e tests is the &lt;span class="Code"&gt;ChromeDriver&lt;/span&gt;. It's your context for interacting with the browser and really easy to set up:&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="E2eTestsBase.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;You'll notice that I check whether a debugger is attached to determine if I'm running locally in Visual Studio; otherwise, I set the browser to headless and simulate a Full HD resolution. This example class makes use of xUnit's &lt;em&gt;Fixture&lt;/em&gt;, but that's just for performing some initialization for the test environment - you can skip that.&lt;/span&gt;&lt;/p&gt;
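&lt;p&gt;A sketch of that setup logic (assuming the &lt;span class="Code"&gt;Selenium.WebDriver&lt;/span&gt; and &lt;span class="Code"&gt;Selenium.WebDriver.ChromeDriver&lt;/span&gt; NuGet packages; the class and method names are illustrative, not the actual test base class):&lt;/p&gt;

```csharp
using System.Diagnostics;
using OpenQA.Selenium.Chrome;

public static class DriverFactory
{
    public static ChromeOptions BuildChromeOptions()
    {
        var options = new ChromeOptions();
        if (!Debugger.IsAttached)
        {
            // Headless on the CI agent, simulating a Full HD viewport
            options.AddArgument("--headless");
            options.AddArgument("--window-size=1920,1080");
        }
        return options;
    }

    // With a debugger attached, this opens a visible browser window locally
    public static ChromeDriver CreateDriver() => new ChromeDriver(BuildChromeOptions());
}
```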
&lt;h2&gt;&lt;span&gt;Test Implementation&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span&gt;Now you're ready to write tests! I'm usually separating them again at this point, one category represents simple, minimal tests while the other covers &lt;em&gt;user stories&lt;/em&gt;, e.g. complex tasks from registration to login to performing some tasks on the site.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Let's take a look at a simple test, checking if the UI disables the &lt;em&gt;Register&lt;/em&gt; button when the user missed filling in her username:&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="RegistrationPage.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;The code should be simple to follow, I hope😊 More complex tests aren't much different, it's just more code. You'll generally notice a trend here - while unit tests require very little code, e2e tests quickly grow into dozens of lines if you're doing more complex actions.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span&gt;Running your Tests&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;Depending on how you're automating your pipeline, you're using bash or PowerShell or something similar. I'm a huge fan of the &lt;a data-udi="umb://document/6ecb73fad5484466b0fc73c1a80f79c3" href="/archive/escalating-automation-the-nuclear-option/" title="Escalating Automation - The Nuclear Option"&gt;NUKE build system&lt;/a&gt;, so that's what I naturally use for all my automation needs. Let's take a look at the implementation:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="Build.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Examining it closely, it does just two things: It runs &lt;span class="Code"&gt;docker-compose up&lt;/span&gt; to start the Docker environment in a non-blocking way and then executes &lt;span class="Code"&gt;dotnet test&lt;/span&gt;. xUnit test results are both printed to the console and saved as &lt;span class="Code"&gt;testresults.xml&lt;/span&gt; for further analysis.&lt;/p&gt;
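&lt;p&gt;Conceptually, the target boils down to something like this (illustrative paths, not the exact NUKE implementation):&lt;/p&gt;

```shell
docker-compose -f docker-compose.yml up -d   # start the environment, non-blocking
dotnet test tests/Dangl.Identity.E2E.Tests   # run the xUnit e2e tests against http://localhost:44848
docker-compose -f docker-compose.yml down    # tear everything down again
```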
&lt;h2&gt;&lt;span&gt;Running Tests in Jenkins&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;The Jenkins pipeline is configured via a &lt;em&gt;Docker E2E Tests&lt;/em&gt; build step. It's configured to run on a Linux node, executes NUKE with the &lt;span class="Code"&gt;E2ETests&lt;/span&gt; target and then reports the test results:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="99d4649000834d825a5f717834a4dca4" data-gist-file="Jenkinsfile"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You'll see it's worth the effort when a nice, all-green stage view greets you in Jenkins:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1192/jenkins_e2e_pipeline_view.png" alt="Jenkins Pipeline View for End to End Tests" data-udi="umb://media/6e81b6d1482c4c91853abd9861166a5e" /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span&gt;Summary&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;While there are quite a few steps involved in performing proper end-to-end testing, the payoff is worth it: You're able to validate a mirror of your production setup in exactly the same way your users are using the app. While this post has just scratched the surface of what you can do, it hopefully gave you a good starting point to implement your own strategy.&lt;/p&gt;
&lt;p&gt;Happy testing!&lt;/p&gt;</description>
      <pubDate>Fri, 26 Jul 2019 18:12:04 Z</pubDate>
      <a10:updated>2019-07-26T18:12:04Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1419</guid>
      <link>https://blog.dangl.me/archive/impact-of-o-n-runtime-in-practice-with-before-after-results/</link>
      <category>DotNet</category>
      <title>Impact of O(n²) Runtime in Practice - With Before &amp; After Results</title>
      <description>&lt;p&gt;With &lt;a href="https://www.dangl-it.com/products/avacloud-gaeb-saas/" title="Dangl IT GmbH - AVACloud"&gt;AVACloud&lt;/a&gt;, we're providing a hosted SaaS service for our &lt;a href="https://www.dangl-it.com/products/gaeb-ava-net-library/" title="Dangl IT GmbH - GAEB &amp;amp; AVA .Net Libraries"&gt;GAEB &amp;amp; AVA .Net Libraries&lt;/a&gt;, a project calculation module &amp;amp; data exchange format for the construction industry.&lt;/p&gt;
&lt;p&gt;Alongside the paid service, there are some free offerings for AVACloud for limited use cases. With these, users often convert from the various data formats to Excel. In our monitoring, we noticed some irregularities in the free service. Conversions usually take just tens or a few hundred milliseconds, but we had some that took about half an hour, way too long for any user to be happy.&lt;/p&gt;
&lt;p&gt;Luckily, we were contacted by the user and asked for assistance. After a bit of investigation and profiling, we quickly found the culprit - we had &lt;strong&gt;a part in the conversion process that had O(n²) runtime:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1187/folie-01.png" alt="O(n²) algorithmic runtime in real life scenarios" data-udi="umb://media/2a8e82224a894533bb3f8adcaee30bd9" /&gt;&lt;/p&gt;
&lt;p&gt;The graph above shows the run time of the complete conversion process by the number of elements within a parent container. It's a pretty nice O(n²) graph, at least visually, not so much for actual users😊&lt;/p&gt;
&lt;p&gt;The thing is, this wasn't really noticed anytime before. In practice, service specifications for construction projects look like this:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1190/project_structure.png" alt="Project structure of a service specification" data-udi="umb://media/35fa001899464104b06505354892f6cf" /&gt;&lt;/p&gt;
&lt;p&gt;So while real construction projects often have up to a few hundred positions, sometimes even a few thousand, they're structured in a tree so that each container itself only has at most tens of child elements.&lt;/p&gt;
&lt;p&gt;The offending part of the code actually worked on the level of a single container. It performed a validity check when adding a new element by checking against all siblings. This wasn't a problem until one day a customer had a single container with over 40,000 elements that really triggered this...&lt;/p&gt;
&lt;p&gt;The fix was rather easy, it basically just amounted to introducing a cache. And thus, the performance quickly became linear:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1188/folie-02.png" alt="Linear performance after switching from O(n²) to O(n) algorithm" data-udi="umb://media/90d69af56d96464d88e53fc38089f10c" /&gt;&lt;/p&gt;
&lt;p&gt;Here, the dotted red line is the previous O(n²) runtime, while blue represents the situation after the fix. Even more impressive is a zoomed view, which shows just how massive the difference from such an oversight can be:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1189/folie-03.png" alt="Zoomed view of new runtime behavior" data-udi="umb://media/8d89502da76e410ba38686a5905fc2f8" /&gt;&lt;/p&gt;
&lt;p&gt;In hindsight, the error was very obvious. It wasn't noticed because typical data sets were always small enough so this never really was a problem, until it one day was... Luckily, the fix could be implemented very quickly, and I got a satisfied user and some material for a blog post out of it😊&lt;/p&gt;
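&lt;p&gt;The offending pattern, and the kind of cache that fixes it, can be sketched like this (illustrative names only - the actual AVACloud code is not shown here):&lt;/p&gt;

```csharp
using System;
using System.Collections.Generic;

// Checking each new element against all existing siblings is an O(n) scan
// per insert, so filling a container with n elements is O(n²) overall
public class SlowContainer
{
    private readonly List<string> _elementIds = new List<string>();

    public void Add(string elementId)
    {
        if (_elementIds.Contains(elementId))
            throw new InvalidOperationException("Duplicate id: " + elementId);
        _elementIds.Add(elementId);
    }
}

// Caching the ids in a HashSet makes each validity check O(1),
// so filling the container becomes O(n) overall
public class FastContainer
{
    private readonly List<string> _elementIds = new List<string>(); // keeps insertion order
    private readonly HashSet<string> _idCache = new HashSet<string>();

    public void Add(string elementId)
    {
        if (!_idCache.Add(elementId))
            throw new InvalidOperationException("Duplicate id: " + elementId);
        _elementIds.Add(elementId);
    }
}
```

&lt;p&gt;Both containers behave identically; only the cost of the check per insert changes.&lt;/p&gt;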
&lt;p&gt;Happy optimizing!&lt;/p&gt;</description>
      <pubDate>Fri, 14 Jun 2019 13:44:33 Z</pubDate>
      <a10:updated>2019-06-14T13:44:33Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1411</guid>
      <link>https://blog.dangl.me/archive/recap-from-the-bcf-hackathon-in-helsinki-and-formation-of-the-cde-group/</link>
      <category>BIM</category>
      <title>Recap from the BCF Hackathon in Helsinki and Formation of the CDE Group</title>
      <description>&lt;p&gt;This past week, the BCF implementers group came together at the &lt;a href="https://www.solibri.com/" title="Solibri"&gt;Solibri HQ in Helsinki, Finland&lt;/a&gt;, for their bi-annually Hackathon. Besides the regular conference calls, this is a great time to discuss all things related to the &lt;strong&gt;B&lt;/strong&gt;IM &lt;strong&gt;C&lt;/strong&gt;ollaboration &lt;strong&gt;F&lt;/strong&gt;ormat, a standard that should make it easier for users to collaborate on planning &amp;amp; building construction projects across teams. It's solving a key issue in &lt;a data-udi="umb://document/2d15ad2bf53749ad90d7ec471cf93854" href="/archive/what-is-bim/" title="What is BIM?"&gt;the whole BIM process&lt;/a&gt; - the integration of various professionals and tools, often across companies and disciplines.&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1186/bcf-hackathon-group-foto-helsinki-2019_sm.jpg" alt="" data-udi="umb://media/cededa7329df4bac92ed19bb9acf2f2d" /&gt;&lt;/p&gt;
&lt;blockquote&gt;Photo © &lt;a href="http://www.oliverguenther.de/" title="Oliver Günter"&gt;Oliver Günter&lt;/a&gt;, from left to right: Georg Dangl, Oliver Günter, Pieter Buts, Josephus Meulenkamp, Andrea Dallera, Eduard Mrazek, Pasi Paasiala, Henning Kongsgård, Jari Juntunen, Kristof Kerekes, Veni Lillkåll, Simon Daum, Nick Tindall, Rahul Sule&lt;/blockquote&gt;
&lt;h2&gt;Past &amp;amp; Future of BCF&lt;/h2&gt;
&lt;p&gt;BCF currently stands at version 2.1, and it's available both &lt;a href="https://github.com/buildingSMART/BCF-XML" title="GitHub - buildingSMART - BCF-XML"&gt;as a file-based standard&lt;/a&gt; as well as an &lt;a href="https://github.com/buildingSMART/BCF-API" title="GitHub - buildingSMART - BCF-API"&gt;Http REST API specification&lt;/a&gt;. It's quite stable now, and feedback from production shows few problems, except for some minor issues every now and then.&lt;/p&gt;
&lt;p&gt;Support by applications is steadily increasing, but still mostly focused on the file based Xml exchange format. One of the big goals of the group is to support implementations of an API-based workflow. While there are many options already on the server side, there are still way too few client applications available that use the BCF REST API. For this year, at least, &lt;a href="https://github.com/opf/BCFier/releases" title="GitHub - opf - BCFier"&gt;the open source tool BCFier&lt;/a&gt; is planned to get some BCF API features implemented.&lt;/p&gt;
&lt;p&gt;Another point is to get more end users involved. The BCF Group has always been a very technical group, and most of the standardization efforts were driven by software vendors. This caused long feedback cycles between the group and end users. Now that the standard is more stable, we're actively trying to include more designers, contractors and users into the decision processes. Starting with regional buildingSMART clusters, we're always open to any new members or input.&lt;/p&gt;
&lt;h2&gt;Formation of the CDE Group&lt;/h2&gt;
&lt;p&gt;Something that's been discussed lately is the formation of a group that focuses on standardizing APIs meant for &lt;strong&gt;C&lt;/strong&gt;ommon &lt;strong&gt;D&lt;/strong&gt;ata &lt;strong&gt;E&lt;/strong&gt;nvironments. Similar to what's being done with BCF, successfully utilizing a CDE in construction projects requires that the users - designers, contractors, stakeholders - are able to integrate their own tools in the collaborative workflow.&lt;/p&gt;
&lt;p&gt;Common Data Environments are mostly cloud based applications that aggregate and manage data from various sources. Simply put, they're tools that support all or most digital aspects of the construction process in one central place. For this to work, a CDE needs to integrate with existing workflows and applications.&lt;/p&gt;
&lt;p&gt;The very basic roadmap for this group is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The standard for authenticating API requests, part of the BCF API, will be extracted and made its own specification&lt;/li&gt;
&lt;li&gt;A basic API for managing documents between applications and CDEs&lt;/li&gt;
&lt;li&gt;Further down, some kind of &lt;em&gt;directory service&lt;/em&gt; is needed, to discover projects, manage users and roles across systems&lt;/li&gt;
&lt;li&gt;In the long term, this group might also extend the web based APIs to support object and attribute level access in the context of digital building models (BIM)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The decision to form the group was made by Catenda, Graphisoft, Oracle Aconex, Solibri and think project! at or before the Hackathon in Helsinki. It will be part of the Technical Room at the buildingSMART International Standards Program, and there have already been some early discussions about the document management part, &lt;a href="https://github.com/buildingSMART/OpenCDE-API" title="GitHub - buildingSMART - OpenCDE-API"&gt;published on GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Sun, 19 May 2019 10:29:31 Z</pubDate>
      <a10:updated>2019-05-19T10:29:31Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1407</guid>
      <link>https://blog.dangl.me/archive/transform-your-aspnet-core-website-into-a-single-file-executable-desktop-app/</link>
      <category>DotNet</category>
      <title>Transform your ASP.NET Core Website into a Single File Executable Desktop App</title>
      <description>&lt;p&gt;Using web technologies to create desktop apps has been around for quite some time now, with frameworks like &lt;a href="https://electronjs.org/" title="Electron"&gt;Electron&lt;/a&gt; being mature and used in lots of products, such as Spotify and Slack. I've only ever used it for side projects, until finally this week a customer asked me if I could provide a desktop application of our popular &lt;a data-udi="umb://document/83feb7e86c454aeab940a924425cc23a" href="/archive/what-is-gaeb/" title="What is GAEB?"&gt;Web&lt;strong&gt;GAEB&lt;/strong&gt; converter&lt;/a&gt;. Of course I could! So I sat down at my computer and made an ASP.NET Core MVC app into a desktop app, bundled as a single executable file.&lt;/p&gt;
&lt;p&gt;This approach works by bundling everything together - your frontend, sometimes even your ASP.NET Core backend server and its own embedded browser. This has been a bit polarizing in the last years among various communities. It's great for developer productivity, but it's also a very wasteful way to use a computer's resources.&lt;/p&gt;
&lt;p&gt;However, I'm leaning heavily towards productivity to be able to quickly build features our users care about. I could have built a dedicated .NET WPF app from scratch, or I could just use Electron and be done in two hours. That's what I'd like to share with you in this blog post.&lt;/p&gt;
&lt;h2&gt;Build Script&lt;/h2&gt;
&lt;p&gt;The following is the complete build target for &lt;a data-udi="umb://document/091dd97b312a4a0a88fe8dc04c00a0f6" href="/archive/using-webdeploy-via-nuke-build-scripts/" title="Using WebDeploy via Nuke Build Scripts"&gt;Nuke Build&lt;/a&gt;. You're free to use something else to orchestrate your build, but even if you're unfamiliar with this tool, you'll see that it's quite straightforward:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="13b4f4674529772f3bf0d3ced5a6677e" data-gist-file="PublishElectronApp.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I'll explain the script step-by-step, but I will skip the trivial parts, like copying the files to the output directory. If anything is unclear, please leave a comment!&lt;/p&gt;
&lt;h2&gt;Configure Electron.NET&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/ElectronNET/Electron.NET" title="Electron.NET"&gt;Electron.NET&lt;/a&gt; is an integration for the the Electron package with ASP.NET Core. It's got a great documentation at it's project site, so I'm not going to deeply into the details.&lt;/p&gt;
&lt;p&gt;When you set up your ASP.NET Core project, you're using the Electron.NET package to instrument the interaction with the Electron shell. The result will be a self contained deployment that bundles the Kestrel server into your app.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="13b4f4674529772f3bf0d3ced5a6677e" data-gist-file="PublishElectronApp.cs" data-gist-line="11"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The Dangl.WebGAEB solution uses a &lt;span class="Code"&gt;Standalone&lt;/span&gt; configuration which sets up some things differently in contrast to the web deployment. For example, I'm using embedded services instead of calling a REST API. This heavily depends on your app as a whole and is not covered in this post.&lt;/p&gt;
&lt;h2&gt;Build a Single Executable File with Warp&lt;/h2&gt;
&lt;p&gt;While Electron itself is cool and all, its Node.js heritage really shows: After the app is created, you'll have a folder with tons of files. This isn't a problem if you're doing some big business app with an installer, but I simply want a single executable file, &lt;span class="Code"&gt;.exe&lt;/span&gt; for Windows in my case. I remembered the &lt;a href="https://github.com/dgiagio/warp" title="GitHub - Warp"&gt;Warp project&lt;/a&gt;, which I've read about some time ago. This tool takes a folder, bundles it into a single executable file and lets you specify an entry point.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="13b4f4674529772f3bf0d3ced5a6677e" data-gist-file="PublishElectronApp.cs" data-gist-line="24-36"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Change the Generated Exe to Hide its Console Window&lt;/h2&gt;
&lt;p&gt;One drawback to this approach is that warp internally starts your ASP.NET Core application, which itself is a console host. The final Electron window is then again launched from your ASP.NET Core server. This means there will be an additional console window present, besides your app. This won't fly with users, so it has to be disabled.&lt;/p&gt;
&lt;p&gt;Truth is, I'm not really familiar with how this works. But there's some metadata in executable files on Windows, and one entry determines whether or not an &lt;span class="Code"&gt;.exe&lt;/span&gt; should show its console host. Unfortunately, this isn't configurable until .NET Core 3.0, so we have to find another way. Fortunately, there's a great wiki page on the &lt;a href="https://github.com/AvaloniaUI/Avalonia/wiki/Hide-console-window-for-self-contained-.NET-Core-application" title="GitHub - AvaloniaUI"&gt;AvaloniaUI GitHub page&lt;/a&gt;. It explains how the &lt;span class="Code"&gt;editbin.exe&lt;/span&gt; utility, which is included in Visual Studio, can be used to patch your executable. So just go find &lt;span class="Code"&gt;editbin.exe&lt;/span&gt; in your Visual Studio installation folder and you're good to go!&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="13b4f4674529772f3bf0d3ced5a6677e" data-gist-file="PublishElectronApp.cs" data-gist-line="38-46"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Upload your Artifacts&lt;/h2&gt;
&lt;p&gt;Your final result will be a single executable file, which contains your ASP.NET Core backend, the Kestrel web server, your frontend and, last but not least, a full web browser. I see where the critics saying this is wasteful are coming from, but it just works amazingly well😊&lt;/p&gt;
&lt;p&gt;This example is pushing the artifacts, along with some documentation, to &lt;a href="https://docs.dangl-it.com/" title="DanglDocu - Dangl IT GmbH"&gt;Dangl&lt;strong&gt;Docu&lt;/strong&gt;&lt;/a&gt;. There, my customers get notified about new releases and can download it. For you, that's probably different!&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="13b4f4674529772f3bf0d3ced5a6677e" data-gist-file="PublishElectronApp.cs" data-gist-line="51-57"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;The Result&lt;/h2&gt;
&lt;p&gt;You can check out the &lt;a href="https://www.web-gaeb.de/" title="WebGAEB - Dangl IT GmbH"&gt;online version of Web&lt;strong&gt;GAEB&lt;/strong&gt;&lt;/a&gt;. It's basically the same as the &lt;a data-udi="umb://document/26371a3f96164b2eb78ef953e792764a" href="/gaeb-converter/" title="GAEB Converter"&gt;GAEB converter&lt;/a&gt; on this site, except it's in German. The cleverly named "Desktop Edition" looks like this and runs 100% locally:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1184/webgaebdesktop_sm.png" alt="" data-udi="umb://media/f7d170f5460c4c0b88f7cdba2060f7f3" /&gt;&lt;/p&gt;
&lt;p&gt;Happy packaging!&lt;/p&gt;</description>
      <pubDate>Thu, 21 Mar 2019 16:34:07 Z</pubDate>
      <a10:updated>2019-03-21T16:34:07Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1404</guid>
      <link>https://blog.dangl.me/archive/dangl-it-at-the-gruenderpreis-rosenheim-2019-founders-price-finals/</link>
      <category>Business</category>
      <title>Dangl IT at the Gründerpreis Rosenheim 2019 (Founders Prize) Finals</title>
      <description>&lt;p&gt;I've grown up and lived ever since in &lt;a href="https://www.rosenheim.de/" title="Rosenheim.de"&gt;Rosenheim&lt;/a&gt;, a beautiful, small city right at the foot of the alps. It's always been an economic center for the region, and in recent years, digital industries and services have increased a lot.&lt;/p&gt;
&lt;p&gt;This is supported by the city and county. Among many offerings for businesses, they host the bi-annual &lt;a href="https://www.gruenderpreis-rosenheim.de/" title="Gründerpreis Rosenheim"&gt;Gründerpreis Rosenheim (Founders Prize)&lt;/a&gt;. The competition aims to help local startups and small businesses gain traction. Although it's not strictly limited to digital businesses, over half of the participants in the current round come from that group.&lt;/p&gt;
&lt;p&gt;Over the last months, many events have taken place where founders could learn the basics of starting and operating a successful business. Courses covered topics such as finance and legal matters, but also marketing, corporate design and how to write a convincing business plan.&lt;/p&gt;
&lt;p&gt;My company, &lt;a href="https://www.dangl-it.com" title="Dangl IT GmbH"&gt;Dangl&lt;strong&gt;IT&lt;/strong&gt; GmbH&lt;/a&gt;, which was founded just last year, is currently taking part. Dangl&lt;strong&gt;IT&lt;/strong&gt; focuses on providing solutions for the building industry, such as &lt;a href="https://www.dangl-it.com/products/bim-solutions/" title="Dangl IT GmbH - BIM Solutions"&gt;working with digital building models&lt;/a&gt;, &lt;a href="https://www.dangl-it.com/products/gaeb-ava-net-library/" title="Dangl IT GmbH - GAEB &amp;amp; AVA .Net Library"&gt;data exchange and interoperability scenarios&lt;/a&gt; and &lt;a href="https://www.dangl-it.com/products/avacloud-gaeb-saas/" title="Dangl IT GmbH - AVACloud GAEB SaaS"&gt;cloud services&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1182/danglit_optimized_cropped.jpg" alt="Presentation of the Dangl IT GmbH at Gründerpreis Rosenheim 2019" data-udi="umb://media/54d8964f99784ebca7007dba95d7cd33" /&gt;&lt;/p&gt;
&lt;p&gt;Last Friday, the final company presentations of the business plans took place. There, I had the opportunity to present my company to a group of experienced professionals from the region. The slide in the picture shows the weekly active &lt;a href="https://www.dangl-it.com/products/webgaeb/" title="Dangl IT GmbH - WebGAEB"&gt;users of Web&lt;strong&gt;GAEB&lt;/strong&gt;&lt;/a&gt;, which I managed to grow from around 200 per week to almost 500 in January (with a small dip around Christmas, when there's little activity for a B2B app).&lt;/p&gt;
&lt;p&gt;Personally, I've learned a lot in the last weeks and made great, new contacts. I've gotten valuable feedback and insights into running a successful business.&lt;/p&gt;
&lt;p&gt;Thanks to everyone involved for making such a great platform possible!&lt;/p&gt;</description>
      <pubDate>Wed, 13 Feb 2019 10:28:00 Z</pubDate>
      <a10:updated>2019-02-13T10:28:00Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1400</guid>
      <link>https://blog.dangl.me/archive/integrating-bim-ifc-references-with-bills-of-quantities-and-gaeb-files/</link>
      <category>BIM</category>
      <title>Integrating BIM &amp; IFC References with Bills of Quantities and GAEB Files</title>
      <description>&lt;p&gt;The creation of fully integrated and connected representations of construction projects with &lt;a data-udi="umb://document/2d15ad2bf53749ad90d7ec471cf93854" href="/archive/what-is-bim/" title="What is BIM?"&gt;Building Information Modelling (BIM)&lt;/a&gt; is becoming ever more important. One key point of BIM is that data should no longer be split into individual, separate &lt;em&gt;data islands&lt;/em&gt; but that everything should be connected. This leads to real, measurable benefits by avoiding duplication, transfer errors and a much better understanding of the building that's designed and operated.&lt;/p&gt;
&lt;p&gt;The BIM process relies greatly on open data formats and a solid, technological foundation in data exchange. Unfortunately, we're still at a very early stage of this, so there are many problems in today's industry when different applications &lt;em&gt;talk to each other&lt;/em&gt;. At &lt;a href="https://www.dangl-it.com" title="Dangl IT GmbH"&gt;Dangl&lt;strong&gt;IT&lt;/strong&gt;&lt;/a&gt;, we're tackling this by providing individual software and finished products to deal with that. One of our specialties is the &lt;a data-udi="umb://document/83feb7e86c454aeab940a924425cc23a" href="/archive/what-is-gaeb/" title="What is GAEB?"&gt;German GAEB data standard&lt;/a&gt;, which is a container format used to describe bills of quantities. For building models, &lt;a href="https://en.wikipedia.org/wiki/Industry_Foundation_Classes" title="Wikipedia - Industry Foundation Classes IFC"&gt;Industry Foundation Classes (IFC)&lt;/a&gt; files are often used to represent buildings in an open and well-known data format.&lt;/p&gt;
&lt;p&gt;Many of our customers ask about connecting these GAEB files with the actual building models. To showcase this with a practical example, let's say you've got multiple concrete walls in a building. Often, these walls are represented by different positions in the bill of quantities. An outer wall alone can be represented by positions for &lt;em&gt;Concrete&lt;/em&gt;, &lt;em&gt;Casing&lt;/em&gt; and &lt;em&gt;Reinforcing Steel&lt;/em&gt;. On the other hand, you will often have only a single position for all the concrete in all the walls of a project, so there's a mismatch between the data in the building model and the data in the bill of quantities - but both represent the same, real entity!&lt;/p&gt;
&lt;p&gt;Linking such cost elements between GAEB and IFC is actually quite easy, and already fully supported with our &lt;a href="https://www.dangl-it.com/products/gaeb-ava-net-library/" title="Dangl IT GmbH - GAEB &amp;amp; AVA .Net Library"&gt;GAEB &amp;amp; AVA .Net Libraries&lt;/a&gt; and &lt;a href="https://www.dangl-it.com/products/avacloud-gaeb-saas/" title="Dangl IT GmbH - AVACloud - GAEB SaaS"&gt;SaaS offering&lt;/a&gt;. In GAEB, there's the concept of &lt;em&gt;CatalogueReferences&lt;/em&gt;, which are typically used to reference some kind of classification but can also hold user defined data. In the example below, the GAEB file (&lt;a data-udi="umb://media/28a095eee1aa4146bd2d10c4a030c0d6" href="https://blog.dangl.me/media/1180/gaebxml.x86" title="GAEBXML.X86"&gt;click here to download&lt;/a&gt;) defines a custom catalog that references the IFC file which contains the building model:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="9b528cf08a2ca3977166115081a48b4e" data-gist-file="bim_definition.xml"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now, a single position element in the GAEB file can reference this catalog. The example below contains all the concrete for the project, but a &lt;em&gt;Quantity Split&lt;/em&gt; links a sub quantity of 10 m³ to an element in the building model with the Id &lt;span&gt;&lt;span class="Code"&gt;04BUoi9FT31BWlyfND2MS2&lt;/span&gt; (&lt;a href="http://www.buildingsmart-tech.org/implementation/get-started/ifc-guid" title="buildingSMART - IFC GUID"&gt;yes, that's how IfcGuids look!&lt;/a&gt;):&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="9b528cf08a2ca3977166115081a48b4e" data-gist-file="bim_reference.xml"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener" href="https://gist.github.com/GeorgDangl/9b528cf08a2ca3977166115081a48b4e#file-gaeb-xml" target="_blank" title="GitHub Gist - GeorgDangl - GAEB XML with IFC reference" data-anchor="#file-gaeb-xml"&gt;For reference, you can view the full GAEB XML file directly in the browser&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Such simple connections between GAEB and IFC give you real value. For example, here's a super simple web app we've put together - you can load the building model and the GAEB file to visualize the connections between them:&lt;/p&gt;
&lt;p&gt;&lt;img style="display: block; margin-left: auto; margin-right: auto;" src="https://blog.dangl.me/media/1181/gaeb_to_bim_example.png" alt="Connecting BIM models with GAEB files in the browser" data-udi="umb://media/cbb159b66e4748fb980851bd9241911a" /&gt;&lt;/p&gt;
&lt;p&gt;To create the GAEB XML file above with our GAEB &amp;amp; AVA .Net Libraries, here's some example code you can use. That's actually the complete code - you don't need a single line more than what's shown to produce the output.&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="9b528cf08a2ca3977166115081a48b4e" data-gist-file="GAEBCreation.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If you have any questions about working with GAEB, IFC, BIM or other, related technologies we're always happy to help. Just contact us!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Oh, and there's one big bonus to this approach&lt;/strong&gt;: The next version 3.3 of the GAEB XML standard is scheduled to be published in the second half of 2019. Among the new features is native support for connecting IFC with GAEB. It will work in basically the same way as the example shown above.&lt;/p&gt;
&lt;p&gt;Happy Connecting!&lt;/p&gt;</description>
      <pubDate>Wed, 16 Jan 2019 20:09:49 Z</pubDate>
      <a10:updated>2019-01-16T20:09:49Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1399</guid>
      <link>https://blog.dangl.me/archive/caching-bundled-and-minified-css-and-javascript-with-the-wrong-file-extension-in-the-free-tier-on-cloudflare-with-umbraco/</link>
      <category>Web Development</category>
      <title>Caching Bundled and Minified CSS and JavaScript with the Wrong File Extension in the Free Tier on CloudFlare with Umbraco</title>
      <description>&lt;p&gt;I've been using &lt;a href="https://www.cloudflare.com" title="CloudFlare"&gt;CloudFlare&lt;/a&gt; for quite some time now, on this blog and on many of my other websites. It's a great service that offers lots of value even in it's free tier. Most useful, for me, is it's built-in CDN (&lt;em&gt;Content Delivery Network&lt;/em&gt;) functionality. This means basically the following:&lt;/p&gt;
&lt;p&gt;This blog is hosted on an Azure server somewhere in Europe. That's great for people living nearby, but bad for all the visitors outside of Europe, or even just a few hundred kilometers away from my server - bad in the sense that web requests take a lot of time when the physical locations of client and server are continents apart. Even worse, when the first request to my main page is finished, a second round of requests retrieves images, CSS and JavaScript files. That's where CloudFlare comes into play: they cache all my assets on their servers, of which they have many around the world. So they're kind of a proxy between this blog and its visitors, and they optimize delivery by caching my content, meaning typical page loads from North America, for example, no longer take 8 seconds to complete. Most web requests don't even hit my server at all.&lt;/p&gt;
&lt;p&gt;I'm using the free tier on this site, which works well enough but has some limitations. For example, they're only caching &lt;a href="https://support.cloudflare.com/hc/en-us/articles/200172516-Which-file-extensions-does-Cloudflare-cache-for-static-content-" title="CloudFlare Support - Which file extensions does Cloudflare cache for static content?"&gt;a handful of file extensions&lt;/a&gt;, so &lt;span class="Code"&gt;.js&lt;/span&gt; and &lt;span class="Code"&gt;.css&lt;/span&gt; files are cached, but &lt;span class="Code"&gt;.axd&lt;/span&gt; files are not. Since I'm using &lt;a href="https://umbraco.com" title="Umbraco"&gt;Umbraco&lt;/a&gt; on this blog, the built-in &lt;a href="https://github.com/Shazwazza/ClientDependency" title="GitHub - Shazwazza - ClientDependency"&gt;ClientDependency package&lt;/a&gt; makes sure that all my JavaScript and CSS files are bundled and minified in production builds. The package internally works by routing requests through a custom handler via a route similar to &lt;span class="Code"&gt;/DependencyHandler.axd?s=h4sh&amp;amp;t=JavaScript&lt;/span&gt;. Bundling and minification is great and what we want, but we also want CloudFlare to cache the requests. We'll have to get Umbraco to serve them via URLs with the correct file extensions, then!&lt;/p&gt;
&lt;p&gt;First, you need to create a custom renderer that outputs the desired file endings in your website links to the bundles:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="a3782d74e339f7ed46666776ccf30def" data-gist-file="NonAxdDependenciesRenderer.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This new renderer should be registered in your &lt;span class="Code"&gt;ClientDependency.config&lt;/span&gt; file. Simply replace the old one:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="a3782d74e339f7ed46666776ccf30def" data-gist-file="ClientDependency.config"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now, for the magic to work, we're using a rewrite rule in IIS' &lt;span class="Code"&gt;web.config&lt;/span&gt; file to map requests that come to either &lt;span class="Code"&gt;DependencyHandler.js&lt;/span&gt; or &lt;span class="Code"&gt;DependencyHandler.css&lt;/span&gt; internally to the route &lt;span class="Code"&gt;DependencyHandler.axd&lt;/span&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="a3782d74e339f7ed46666776ccf30def" data-gist-file="web.config"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Happy caching!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, one week after the original post:&lt;/strong&gt; With this approach, I managed to increase the cached bandwidth via CloudFlare from about 20% to over 60%! Most of the uncached content now is the &lt;a data-udi="umb://document/26371a3f96164b2eb78ef953e792764a" href="/gaeb-converter/" title="GAEB Converter"&gt;dynamic content generated by the GAEB converter&lt;/a&gt;. Virtually everything else no longer costs me any bandwidth!&lt;/p&gt;
      <pubDate>Fri, 11 Jan 2019 19:47:44 Z</pubDate>
      <a10:updated>2019-01-11T19:47:44Z</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">1398</guid>
      <link>https://blog.dangl.me/archive/remove-duplicate-enum-entries-in-swagger-documents-with-nswag-in-aspnet-core/</link>
      <category>DotNet</category>
      <title>Remove Duplicate Enum Entries in Swagger Documents with NSwag in ASP.NET Core</title>
      <description>&lt;p&gt;&lt;a href="https://github.com/RSuter/NSwag" title="GitHub - RSuter - NSwag"&gt;NSwag&lt;/a&gt; and &lt;a href="https://github.com/RSuter/NJsonSchema" title="GitHub - RSuter - NJsonSchema"&gt;NJsonSchema&lt;/a&gt; are great tools that make it super easy to integrate &lt;a href="https://swagger.io" title="Swagger"&gt;Swagger / OpenAPI&lt;/a&gt; specification documents in your ASP.NET Core apps.&lt;/p&gt;
&lt;p&gt;Basically, you define your web server backend, and all your API endpoints and models are then automatically available for anyone to generate clients in their favorite language.&lt;/p&gt;
&lt;p&gt;In such scenarios, NSwag is often configured to use a &lt;span class="Code"&gt;StringEnumConverter&lt;/span&gt;, meaning that enum values are meant to be serialized as strings when they're sent over the wire. However, there's one nasty thing that's allowed by the &lt;a href="https://stackoverflow.com/questions/15458101/can-you-have-multiple-enum-values-for-the-same-integer" title="StackOverflow - Multiple Enum Values for the same Integer in C#"&gt;C# specification&lt;/a&gt; when it comes to this:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="af167dd657b8f8b2879f8208341304a4" data-gist-file="HttpStatusCode.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You might actually have two different string representations that both match an internally used integer identifier. That becomes a problem when you're using string serialization for enumerations but track them internally by their integer id.&lt;/p&gt;
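&lt;p&gt;To illustrate, here's a minimal example of such an enum. Both members are legal C# and share the same underlying value - &lt;span class="Code"&gt;System.Net.HttpStatusCode&lt;/span&gt; really does contain this pair (the enum name here is made up for the example):&lt;/p&gt;

```csharp
// Two enum members with the same underlying value - the C# compiler allows this,
// but string serialization now has two names for the value 307.
public enum RedirectStatusCode
{
    RedirectKeepVerb = 307,
    TemporaryRedirect = 307
}
```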
&lt;p&gt;In NJsonSchema, for example, &lt;a href="https://github.com/RSuter/NJsonSchema/issues/800" title="GitHub - RSuter - NJsonSchema Issue with StringEnumConverter"&gt;there's a small bug&lt;/a&gt; that would lead to &lt;span class="Code"&gt;TemporaryRedirect&lt;/span&gt; being serialized twice in this case. That's sometimes not a problem, but strongly typed languages like C# or Java will fail to compile the generated client code.&lt;/p&gt;
&lt;p&gt;To solve this, you can simply remove these duplicate values in a &lt;span class="Code"&gt;PostProcess&lt;/span&gt; action when you're setting up NSwag for your ASP.NET Core application:&lt;/p&gt;
&lt;p&gt;&lt;code data-gist-id="af167dd657b8f8b2879f8208341304a4" data-gist-file="NSwagPostProcess.cs"&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Happy serializing!&lt;/p&gt;</description>
      <pubDate>Thu, 03 Jan 2019 20:18:36 Z</pubDate>
      <a10:updated>2019-01-03T20:18:36Z</a10:updated>
    </item>
  </channel>
</rss>