
Refactoring CSS

Published on April 8, 2018 · 448 words · about 2 min reading time

One shortcoming of this permanently-work-in-progress blog of mine was the rendering on mobile devices. The experience of browsing the blog on a phone or tablet was less than ideal: text touching the borders of the screen, images overflowing the main section, social media links out of place, and more. It was a long-standing issue, so I set off to fix it. While digging straight into the first CSS changes and fiddling in Chrome's developer console, I remembered what I wrote in the Snapshot TDD post:

Things of visual nature are not unit-tested easily, which is why they are often simply untested. We usually don't test stylesheets, colors, images etc. However we can't say those things are unimportant.

Unhappy with the workflow I had just started, and taking the above thought into account, I typed some words into Google and emerged with this awesome tool: BackstopJS. It's headlined with "Visual regression testing for web apps" and was exactly what I was looking for. It provides a safety net for changing the visual appearance of something rendered in a browser by taking and comparing screenshots. Basically exactly what I manually and crudely set up for the e-ink dashboard, just so much more awesome.

Setup and usage

Both setting up and using BackstopJS are dead simple and can be explained in this short snippet:
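A minimal sketch of that workflow, using the standard BackstopJS CLI (your project layout and scenarios will differ):

```shell
# install BackstopJS and generate a default backstop.json config
npm install -g backstopjs
backstop init

# screenshot the configured scenarios and compare against the
# reference images (none exist yet, so this first run fails)
backstop test

# bless the current screenshots as the new reference images
backstop approve
```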

You will get a really nice page telling you that the tests failed. Why? Because you have not approved any reference images yet. Once you look at those images and assert that this is the way things currently look, go ahead and backstop approve them. Now you are golden and able to make changes to your page's CSS a) without having to fear breaking things while refactoring, and b) with a super quick way to get an overview of the changes you made across multiple pages of your blog in different viewports.


With my page being a blog, the content will obviously change continuously, making the comparison of the current look against reference images infeasible. Luckily this is an easy fix: I created a new database containing exactly one sample post, which uses all the usual tags like h1-h6, lists, and blockquotes to showcase most of the styles coming into play.
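For illustration, the relevant part of a backstop.json could look roughly like this (the URL, labels, and viewport sizes here are made up, not copied from my actual config):

```json
{
  "id": "blog",
  "viewports": [
    { "label": "phone", "width": 375, "height": 667 },
    { "label": "tablet", "width": 768, "height": 1024 }
  ],
  "scenarios": [
    {
      "label": "sample post",
      "url": "http://localhost:8000/sample-post",
      "misMatchThreshold": 0.1
    }
  ]
}
```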

This is how the current reference image looks in tablet mode:

Tablet reference image for sample post


As always, you can follow along with the PR on GitHub. One example of a change to the reference image can be seen here. I must say I am mega impressed with BackstopJS and will try to use it again in the future. One open task is to get it running on CircleCI as well.

Snapshot TDD

Published on February 12, 2018 · 657 words · about 3 min reading time

One of my recent weekend side projects, an e-ink / Raspberry Pi-driven build status dashboard, was a great playground for doing TDD powered by visual snapshots. But let's rewind a bit.


What I actually wanted to achieve was the following: build a semi-decent Python class to draw a dashboard-type interface that I can feed to my e-ink display. I had already prototyped such a script, but it was a "make it work in the quickest possible way in 1 hour" mess. Nothing I wanted to maintain or even look at for five more minutes. I also didn't want to start completely from scratch regarding the output, because I was happy enough with the result this script produced, which is shown here:


So how could I develop the code from scratch while making sure I got the exact same output in the end? Right: by creating a feedback loop that quickly compares the reference image to the current output. To quote the Jest docs:

Snapshot tests are a very useful tool whenever you want to make sure your UI does not change unexpectedly. A typical snapshot test case for a mobile app renders a UI component, takes a screenshot, then compares it to a reference image stored alongside the test.

This is powerful, because how else would I test this? Things of visual nature are not unit-tested easily, which is why they are often simply untested. We usually don't test stylesheets, colors, images etc. However we can't say those things are unimportant. So I set out to do TDD with snapshots and iterate myself toward the reference result.


Based on my prototype I already had a reference image to compare against. But simply putting two images side by side is barbaric, and we can do better. I grabbed myself a copy of pixelmatch, a JavaScript image comparison library, copied the sample code, and boom, there was an image diff clear as day. With the full result compared against a plain white image, it looks like this:
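The core idea can be sketched in a few lines. This is a toy version of what pixelmatch does, not its actual implementation (the real library also handles anti-aliasing detection and perceptual color distance): walk two equally-sized RGBA buffers, count differing pixels, and paint them into a diff buffer.

```javascript
// Toy pixel diff: compare two RGBA buffers of the same dimensions,
// mark differing pixels red in `diff`, fade matching ones to gray,
// and return the number of mismatched pixels.
function naiveDiff(img1, img2, diff, width, height, threshold = 0) {
  let mismatched = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4; // 4 bytes per pixel: R, G, B, A
      const delta = Math.max(
        Math.abs(img1[i] - img2[i]),
        Math.abs(img1[i + 1] - img2[i + 1]),
        Math.abs(img1[i + 2] - img2[i + 2]),
        Math.abs(img1[i + 3] - img2[i + 3])
      );
      if (delta > threshold) {
        mismatched++;
        diff[i] = 255; diff[i + 1] = 0; diff[i + 2] = 0; diff[i + 3] = 255;
      } else {
        diff[i] = diff[i + 1] = diff[i + 2] = 128; diff[i + 3] = 255;
      }
    }
  }
  return mismatched;
}
```

With pixelmatch itself you feed it the decoded pixel data of the reference and the actual screenshot (e.g. via pngjs) and write the diff buffer back out as a PNG.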

First diff

Lots of work left to do, sure, but that set me up with an about two-second feedback cycle. The process, which I packaged into a simple npm test bound to <leader>t in Vim so I can invoke it in one keystroke, is this:

  1. Run unit tests in Python (this is just one dumb test for the constructor; I should remove it)
  2. Render current image to actual.png in an "integration" test
  3. Create image diff with pixelmatch
  4. Open this diff in Preview so it jumps into my face

See the process encoded here, and yes, the irony of having a Node-based test invocation for a Python script is not lost on me. Computers 🤷🏼‍♂️
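Sketched as an npm script, that chain might look roughly like this (the script contents and file names are my guesses at the setup, not copied from the repo):

```json
{
  "scripts": {
    "test": "python -m pytest tests/ && python render.py actual.png && node diff.js && open diff.png"
  }
}
```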


Let's walk through one of my commits together. I really enjoyed working like this. A few minutes in, I had the rendering of the header, the header title, and the project text on the left all fleshed out, with minimal differences to the reference. I assume something regarding the font rendering on the Raspberry Pi/Debian vs. my Mac is to blame for the tiny deviations around the text. No clue though. So here I was:

Diff dffaea0

Let's add some code to render the badge text on the right:

Hit <leader>t, and see this:

Diff 864d54a

So obviously I got the alignment wrong. Let's fix it:

Re-run the tests, see this:

Diff 93cf85d

Less red! That's basically what I did over and over again. Feel free to have a look at the commits for more examples.

My takeaways

Danger-todoist celebrates 200k downloads

Published on January 17, 2018 · 251 words · about 1 min reading time

Danger-todoist is a plugin for the excellent Danger ecosystem, more specifically for the Ruby variant of Danger. What does Danger do? It is basically a kind of automated code / pull request review system. You create a pull request on GitHub, and a bot account will recommend changes to it. The changes it suggests are based on a freely configurable set of rules and suggestions, as codified in your Dangerfile. The beauty, as always, comes from the flexibility of plain Ruby code and a set of plugins.

One of those plugins is danger-todoist, which I first published in September 2016. A thing that makes me cringe is leaving TODO: fix me comments all over our code, and of course then never fixing them. Makes one wonder if there really was something to do ... 🤓.

Danger-todoist helps you with this! It will duly notify you if you leave an unaddressed todo comment in your changes. You can decide whether this is a show stopper (YES!) or whether you want to leave it as a warning. Either way, this makes it much harder to let those pesky comments sneak into your codebase.
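For illustration, a minimal Dangerfile using the plugin might look like this (method names are from my recollection of the danger-todoist README; treat this as a sketch and check the gem's docs):

```ruby
# Dangerfile
# block the pull request on any TODO/FIXME left in the diff ...
todoist.fail_for_todos

# ... or be lenient and only warn:
# todoist.warn_for_todos
```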

Since its first release more than a year ago, it has now amassed 200,000 downloads as shown on rubygems. This likely makes it my most successful piece of open-source software to date 🖖🏽, which I hereby celebrate.

Hack on, keep that code clean, and check it out on GitHub.


My 2017 podcast winners

Published on December 22, 2017 · 342 words · about 1 min reading time

In 2017 I have likely listened to hundreds of hours of podcasts. Out of interest, let's do the math real quick: 50 weeks so far * 5 podcasts * 1h average length = 250h. So yeah, hundreds of hours it is. But I definitely don't consider that wasted time; it was sometimes great entertainment, time spent learning, or a soothing tone to fall asleep to. Without much further ado, here's what I have been listening to in 2017, in no particular order:

Living off of open-source

Published on December 19, 2017 · 193 words · less than a minute reading time

For the second year in a row I have participated in Hacktoberfest, an open-source initiative by DigitalOcean, a cloud infrastructure provider. What's the deal? You, fellow open-source contributor, just have to open a handful of pull requests during the timeframe of October 1st to 31st. DigitalOcean will be generous and send you a limited-edition t-shirt for free (well, for your time spent on those 5 pull requests, that is). Here are the two shirts I got for my 2016 and 2017 efforts:

Needless to say, I find that an awesome initiative, seeing that the world builds upon open-source software. The five pull requests that got me my t-shirt this year were:

To finish this off: I can't recommend participating in Hacktoberfest enough, and thanks to DigitalOcean for showing their appreciation with a t-shirt.
