If you've read some of my previous articles, you probably know that I'm somewhat obsessed with rigorous testing - I advocate for good, automated quality control in every project I encounter. It's one of the biggest time savers in software development, whether because you catch regressions early or because the infrastructure you've set up makes reproducing bugs a matter of minutes instead of hours.
For one of our apps, we've got a bit of a complex setup. The backend is a traditional ASP.NET Core application with the usual moving parts, a relational database, some blob storage and the compute part of the app itself. That's easy to test on its own.
The frontend part deviates a bit from our regular architectures. Due to the nature of the app, it's built and deployed as an Electron application. Electron is a great tool that I've blogged about previously, but it comes at a cost. It gets the most flak for being inefficient, whether due to its large footprint (both on disk and in memory) or for being slower than native applications. The upsides are great, though - since Electron is really just a wrapper around Chromium, it lets you build desktop applications with tried-and-tested web technologies. That makes it a natural choice when your team usually builds web applications and can therefore transition seamlessly to an Electron project.
However, Electron sometimes feels a bit like building on sand - it works really well, but there are a lot of moving parts under the hood. One obstacle we encountered was testing the full application in an end-to-end (E2E) way. To ensure such tests run in a controlled environment, independent of whatever host is currently executing them, we use a small Docker setup to spin up the entire infrastructure, test it and then tear it down again. That works pretty well with regular web apps, but not so much with Electron.
When we first started setting this up, we ran into lots of problems. Maybe the biggest was the lack of community information around this topic - it seemed like not many people were running such tests, so there was little to be found except small bits here and there. We managed to get it working in the end, but even an external consultant specializing in that area struggled with the setup. One of the major hurdles: while there are plenty of tools for automating browsers that work fine in Docker, there are few options when it comes to Electron.
Then came last week, when we updated the Electron version - and immediately broke our build. It turned out that Spectron, the testing framework for Electron we were using, no longer has any active maintainers. And it also looked like we weren't the only ones affected by this.
Luckily, there's a new player around that aims to offer a modern and stable way to automate browsers: Playwright by Microsoft. It has official Electron support, is actively maintained and, being just a bit over a year old, was designed around a modern, easy-to-use API.
After spending some hours trying to fix the original test setup, we decided to give Playwright a try. To our amazement, it just worked! It took about an hour to set everything up and migrate the tests - and our E2E tests were green again 🚀
I'll try to give you a condensed summary of what we did to get it running, and how it's set up. In reality, we also spin up the other services - backend, database and blob storage - put all the Docker containers in the same network and thereby simulate the production environment as closely as possible. But let's take a look at it file by file:
We start with the Dockerfile. It's a rather simple one; the few things worth mentioning are:
- We're building from the node image, since a lot of dependencies are already present there
- Some display-related tools need to be installed, most importantly xvfb, which acts as a virtual display inside the Docker container
- A custom entrypoint for the container is provided to execute a script at container start
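A sketch of such a Dockerfile might look like the following - the image tag and the exact package list are assumptions, not our literal setup, and the libraries Electron needs can vary by version:

```dockerfile
# Build from the official node image, so node/npm are already present
FROM node:16

# Install xvfb plus shared libraries Electron typically needs for rendering
RUN apt-get update && apt-get install -y \
    xvfb \
    libgtk-3-0 \
    libnss3 \
    libasound2 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Custom entrypoint that brings up the virtual display before the test command
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
```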
The entrypoint itself isn't complicated, either. When looking around for how to run xvfb in Docker, you'll often find similar snippets; ours was, in fact, also mostly copied together from various sources. It's really just making sure that a virtual display is available before the actual command runs.
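A minimal version of such an entrypoint, assuming Xvfb is installed in the image and display `:99` is free:

```shell
#!/bin/bash
set -e

# Start the virtual framebuffer in the background on display :99
Xvfb :99 -screen 0 1280x1024x24 &

# Point all X11 clients (including Electron) at the virtual display
export DISPLAY=:99

# Give Xvfb a moment to come up before anything tries to connect
sleep 1

# Hand control over to the command the container was started with, e.g. npm test
exec "$@"
```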
The common-setup.ts file is the base for all our tests. Here we use Playwright to launch a new instance of the app for every test. As you can see, the API could hardly be simpler, yet it works flawlessly.
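In broad strokes, such a setup file can be sketched with Playwright's `_electron` API - the entry point path `dist/main.js` is an assumption, adjust it to your build output:

```typescript
import { _electron as electron, ElectronApplication, Page } from 'playwright';

export let electronApp: ElectronApplication;
export let window: Page;

// Launch a fresh instance of the app before every test
beforeEach(async () => {
  // Path to the compiled Electron main process - an assumption, not our real path
  electronApp = await electron.launch({ args: ['dist/main.js'] });

  // Wait for the first BrowserWindow the app opens
  window = await electronApp.firstWindow();
});

// Tear the app down again after each test, so tests stay isolated
afterEach(async () => {
  await electronApp.close();
});
```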
Finally, some tests. They're shortened, but give you a small indication of what's possible with Playwright, or with automated E2E tests in general.
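As an illustration (not our actual tests), a smoke test and a login flow could look roughly like this - the entry point path and the selectors are hypothetical:

```typescript
import { test, expect } from '@playwright/test';
import { _electron as electron } from 'playwright';

test('app starts and renders its main window', async () => {
  const app = await electron.launch({ args: ['dist/main.js'] }); // path is an assumption
  const window = await app.firstWindow();

  // Surface renderer console errors so they fail the test instead of hiding
  window.on('console', (msg) => {
    if (msg.type() === 'error') throw new Error(msg.text());
  });

  // Smoke check: the window came up with a title
  expect(await window.title()).not.toBe('');

  await app.close();
});

test('users can log in', async () => {
  const app = await electron.launch({ args: ['dist/main.js'] });
  const window = await app.firstWindow();

  // Selectors are hypothetical - adapt them to your app's markup
  await window.fill('#email', 'user@example.com');
  await window.fill('#password', 'secret');
  await window.click('button[type=submit]');
  await expect(window.locator('.dashboard')).toBeVisible();

  await app.close();
});
```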
We do usually aim for two things with such tests:
- We want quick feedback on whether the app is generally working: does it start, do we get any script errors in the console, can users log in, and so on
- Additionally, we usually have full E2E tests for the critical paths in our applications. For example: can new users register, confirm their email, log in, start a trial and then upgrade to a paid subscription? Generally, things a QA department would check, but that are easy to automate
It's sometimes tedious to write such tests, and you most likely won't ever have your full app covered with them. But they're a huge confidence boost, especially when your pipeline automatically deploys right into production.
Finally, to actually run the tests, we spin up the Docker container with a small script. The important part is that we now call docker run to execute the end-to-end tests inside the container. We apply some more optimizations, like using tmpfs for the node_modules folder and mounting the app directory into the container. In the end, the tests run and we get a test results file out that CI systems can process.
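A runner script in that spirit might look like this - the image name, mount paths and npm script are assumptions for illustration:

```shell
#!/bin/bash
set -e

# Build the test image from the Dockerfile
docker build -t electron-e2e .

# Run the E2E tests inside the container:
# - mount the app directory, so the image doesn't need a rebuild on every change
# - use tmpfs for node_modules, keeping installs fast and independent of the host
docker run --rm \
  -v "$(pwd)":/app \
  --tmpfs /app/node_modules \
  electron-e2e \
  npm run test:e2e

# Afterwards, a results file (e.g. JUnit XML) can be picked up by the CI system
```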
So, happy testing!