Ditherpunk 2 — beyond 1-bit

Feb 12th, 2021
by makeworld

Last updated: Apr. 30, 2021

This post contains some images that need to be viewed at 100% size to be seen correctly. If you normally browse at a higher zoom level than 100%, please zoom out when you get to any of the images.

Just yesterday, I released my dithering library, written in Go. It’s the product of many hours of research, experimentation, and refactoring. I’m excited that it’s finally out, and to see what people create with it. Personally I’d like to create a CLI dithering tool at some point that uses it.

Creating this library required research into the different algorithms for dithering, and how to implement them. Unfortunately, not all of that knowledge is easily found. It’s spread across blog posts, Wikipedia pages without citations, old books, and code comments. Recently, Surma wrote an article called Ditherpunk — The article I wish I had about monochrome image dithering. It is an invaluable resource that combines lots of knowledge about dithering into one place. The main thing missing from the post, however, is going beyond “monochrome”. Surma shows us how to dither with a 1-bit palette of black and white, but what about more shades of gray? What about colour images? With this blog post I’d like to explore those techniques, taking them out of my code and into English, so you can easily apply them elsewhere.

First off, please read Surma’s post. It explains what dithering is, how it works, and many different algorithms. I don’t feel the need to explain these again, but merely add on what I’ve learned.

Before we start

An HN commenter informed me that you cannot accurately represent linear RGB with just 8 bits (0-255); you need at least 12. Because of this, I have updated the blog post to use 16-bit color (0-65535), and will be updating the library shortly. Make sure you do this in your code too!


Dithering operations consist of at least two steps, applied to each pixel. Sometimes there are further steps, like “modify these nearby pixels”, but these are the basic ones.

  1. Modify the pixel’s colour value.
  2. Find the colour in the palette that is “closest” to that modified value and use that on the output image.

Now, we know from Surma’s post that step 1 must be done with linear RGB values. Otherwise, different values will be affected differently – for example adding 5 to each colour won’t affect all colours the same way.

But what about step 2? How do we find the closest colour in the palette? In 1-bit dithering we don’t have to worry about this, because anything above 0.5 is white, and anything below is black. But when your palette colours can be anywhere in a 3D space, it is something that needs to be figured out.

Perhaps surprisingly, we’re not looking for a way that matches human perception. In fact, we are using Euclidean distance with linear RGB values, which doesn’t match human perception at all! Why is this? Thomas Mansencal (@KelSolaar) explains it best:

You can factor out the observer [the human] because what you are interested in here is basically energy conservation. The idea being that for a given pool of radiant power emitters, if you remove a certain number of them, by how much must the radiant power of the remaining ones be increased to be the same as that of the full pool. It is really a ratio and doing those operations in a linear space is totally appropriate!

This helped it click for me. Dithering can be thought of as trying to reconstruct the “radiant power” of the original pixel colours, while restricted to a certain set of “emitters”, aka the palette colours. It is only with linearized RGB values that we can properly measure the radiant power.
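As a concrete sketch of step 2, here’s what a closest-colour search looks like using squared Euclidean distance over linear RGB values. This is illustrative Go, not my library’s actual API — the function name and types here are my own:

```go
package main

import "fmt"

// closestColour returns the index of the palette entry nearest to (r, g, b),
// comparing squared Euclidean distance in linear RGB. All values are 16-bit
// linear channel values in the range [0, 65535].
func closestColour(r, g, b uint16, palette [][3]uint16) int {
	best, bestDist := 0, int64(-1)
	for i, p := range palette {
		dr := int64(r) - int64(p[0])
		dg := int64(g) - int64(p[1])
		db := int64(b) - int64(p[2])
		dist := dr*dr + dg*dg + db*db
		if bestDist < 0 || dist < bestDist {
			best, bestDist = i, dist
		}
	}
	return best
}

func main() {
	palette := [][3]uint16{{0, 0, 0}, {65535, 65535, 65535}}
	fmt.Println(closestColour(40000, 40000, 40000, palette)) // 1 (white is closer)
}
```

Note there’s no square root: since we only compare distances against each other, the squared distance gives the same winner and avoids the extra work.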

Random Noise (grayscale)

Random noise is a good one to start with, to show the differences between 1-bit dithering and larger palettes.

Surma does this by basically just adding a random number from -0.5 to 0.5, and then thresholding it.

grayscaleImage.mapSelf(brightness =>
  brightness + (Math.random() - 0.5) > 0.5
    ? 1.0
    : 0.0
);

In my library, there are two separate random noise functions. One is for grayscale, and one is for RGB. The grayscale one looks like this:

roundClamp(float32(gray) + 65535.0*(rand.Float32()*(max-min)+min))

The math with min and max is just to put the random value in the desired range. If we clean that up it’s more understandable:

roundClamp(float32(gray) + 65535.0*rand())

rand() just represents a function that returns a value in whatever range we want, -0.5 to 0.5 in this case.

So, obviously this is quite similar to what Surma does. The heavy lifting of the dithering in this case is done by the other code, the part that finds the best palette colour.

The main thing that’s worth noting here is how the rounding works. roundClamp rounds the value to an integer, and then clamps it to the range 0-65535. But how is the rounding done?

Another HN commenter shared this link which discusses different rounding methods, and the problems with the way rounding is often done. The best solution is to use a rounding function that does this:

Given a number exactly halfway between two values, round to the even value (zero is considered even here).

round(1.7) -> 2    round(2.7) -> 3
round(1.5) -> 2    round(2.5) -> 2
round(1.3) -> 1    round(2.3) -> 2

This is not specific to dithering, but it’s an important point for mathematical correctness. Make sure you do it! Otherwise your outputs will be biased.
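Go’s standard library already implements round-half-to-even as math.RoundToEven, so a roundClamp like the one used in this post could be sketched like this (the clamping range assumes 16-bit channels; the implementation is mine, not necessarily the library’s):

```go
package main

import (
	"fmt"
	"math"
)

// roundClamp rounds a channel value half-to-even, then clamps it into the
// valid 16-bit range [0, 65535].
func roundClamp(v float32) uint16 {
	r := math.RoundToEven(float64(v))
	if r < 0 {
		return 0
	}
	if r > 65535 {
		return 65535
	}
	return uint16(r)
}

func main() {
	// Both halfway cases round to the even value 2, and out-of-range
	// values are clamped.
	fmt.Println(roundClamp(1.5), roundClamp(2.5), roundClamp(-10), roundClamp(70000))
	// 2 2 0 65535
}
```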

Random Noise (RGB)

Back to random noise, but for colour this time.

r = roundClamp(float32(r) + 65535.0*(rand.Float32()*(maxR-minR)+minR))
g = roundClamp(float32(g) + 65535.0*(rand.Float32()*(maxG-minG)+minG))
b = roundClamp(float32(b) + 65535.0*(rand.Float32()*(maxB-minB)+minB))

And simplified:

r = roundClamp(float32(r) + 65535.0*randR())
g = roundClamp(float32(g) + 65535.0*randG())
b = roundClamp(float32(b) + 65535.0*randB())

Pretty simple, it just adds randomness in each channel. This can be done with grayscale images too, but it won’t work very well: grayscale colours only have one dimension, so it won’t add as much randomness as you would expect.

Also note that usually you’ll want the random ranges to be the same for R, G, and B.

Here’s an example:

Ordered Dithering

Modifying threshold matrices

Most of the resources online that talk about ordered dithering talk about a “threshold matrix”. “Thresholding” is how these matrices are applied for 1-bit dithering. You divide the matrix by whatever number is specified, scale it to the colour value range, and compare it to each pixel in the image. If it’s less than the matrix value, make it black, otherwise white. Obviously this doesn’t work with any other kind of palette. So what’s the solution?

Wikipedia offers an answer. Unfortunately there’s no citation, but I’ve confirmed independently that it works. Here it is with some of my own math added as well.

(65535 \times strength) \times \left( \frac{cell + 1}{max} - 0.5 \right)

cell is a single cell of the matrix.

strength is a percentage, usually a float from 0 to 1.0. It’s the amount the matrix will be applied to the image. The closer to zero it is, the smaller the range of input colors that will be dithered. Colors outside that range will be quantized. Usually you’ll want a strength of 1.0, to apply the matrix and dither fully, but sometimes reducing it can be useful, to reduce noise in the output image. It is inversely proportional to contrast – that is, when you reduce the strength, it is visually similar to increasing the contrast.

strength can also be negative, from 0 to -1.0. This is useful in certain cases where the matrix usually makes things bright, like what Surma describes with Bayer matrices.

Note that 65535 is multiplied by strength because the colours in my code are in the range [0, 65535]. If yours are different you can change that number.

max is the value the entire matrix is divided by. It represents the maximum value of the matrix, and normalizes it by dividing. Usually this is the product of the dimensions of the matrix. It can also be the largest value in the matrix plus one.

The result of applying this operation to each cell of the matrix is a new, precalculated matrix, which can be added to a pixel’s colour value for dithering. Adding 0.5 does not need to happen in this case. In my library, I call the function that does this convThresholdToAddition, because that’s essentially the purpose of this – converting a threshold matrix into one that can be used for addition.

Note: This is designed for matrices that range from 0 to max - 1. If you’re using a matrix you found that starts at 1, you’ll usually just want to subtract one from each value when you program it, then apply the operation I described above.
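Putting the formula into code, a conversion function might be sketched like this. I’ve reused the name convThresholdToAddition from above, but the implementation here is illustrative, not my library’s exact code:

```go
package main

import "fmt"

// convThresholdToAddition converts a threshold matrix (values 0..max-1) into
// a precalculated matrix that can be added directly to pixel values, using
// (65535 * strength) * ((cell + 1) / max - 0.5) for each cell.
func convThresholdToAddition(strength float32, matrix [][]uint, max uint) [][]float32 {
	out := make([][]float32, len(matrix))
	for y, row := range matrix {
		out[y] = make([]float32, len(row))
		for x, cell := range row {
			out[y][x] = (65535.0 * strength) * ((float32(cell)+1.0)/float32(max) - 0.5)
		}
	}
	return out
}

func main() {
	// The classic 2x2 Bayer matrix, so max is 2*2 = 4.
	bayer := [][]uint{{0, 2}, {3, 1}}
	fmt.Println(convThresholdToAddition(1.0, bayer, 4))
	// [[-16383.75 16383.75] [32767.5 0]]
}
```

Note how the output is centered around zero: some cells push a pixel darker, others brighter, which is exactly what lets the matrix be used additively.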

Using a modified matrix

Now that you’ve modified the matrix so it can be used for addition, it needs to be applied to the image. This is pretty simple. Use modulus so the matrix values are tiled across the image, and add the same value in the R, G, and B channels. If you’re using a grayscale image you can just apply it in the one channel, or still use RGB. Since the same value is being added it doesn’t make much of a difference. Like always, make sure to clamp the values to the proper colour range.
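A sketch of that application step, with modulus tiling and clamping, might look like this (the names are mine, and rounding is omitted for brevity):

```go
package main

import "fmt"

// applyCell adds the tiled matrix value for pixel (x, y) to a channel value,
// clamping to [0, 65535]. precomputed is a threshold matrix already
// converted for addition. The same value would be added to R, G, and B.
func applyCell(c uint16, x, y int, precomputed [][]float32) uint16 {
	v := float32(c) + precomputed[y%len(precomputed)][x%len(precomputed[0])]
	if v < 0 {
		return 0
	}
	if v > 65535 {
		return 65535
	}
	return uint16(v)
}

func main() {
	// A 2x2 Bayer matrix after conversion for addition.
	precomputed := [][]float32{{-16383.75, 16383.75}, {32767.5, 0}}
	fmt.Println(applyCell(30000, 0, 0, precomputed), applyCell(30000, 1, 0, precomputed))
	// 13616 46383
}
```

The modulus indexing is what tiles the small matrix across an image of any size.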

Doing all of this definitely works with colour images, but it’s not the greatest. Here’s an example, where the palette is red, green, and black.

Here it is where the palette is red, green, black, and yellow.

As you can see, it doesn’t really emulate any of the yellow in the first example, while Floyd-Steinberg can. Once yellow is added to the palette it looks pretty good though.

Error diffusion dithering

Error diffusion dithering is done pretty much the same way as Surma described. The main difference is that the quantization error is measured for R, G, and B, instead of a single gray value. The final bit of (pseudo) code looks like this:

roundClamp(r + er * matrix[y][x])
roundClamp(g + eg * matrix[y][x])
roundClamp(b + eb * matrix[y][x])

The variables starting with e represent the current quantization error.


This is unrelated to color dithering, but it’s something my library does that Surma didn’t mention, and something I couldn’t really find information on online – serpentine dithering.

This is when the horizontal direction of error diffusion dithering (usually left-to-right) reverses for every other line, going right-to-left. This is simple to implement, and improves results by removing line artifacts.

The one thing that was hard to find info on was that you also need to reflect the error diffusion matrix horizontally every time you go backwards. Otherwise you’d modify pixels to your right that have already been finalized.

Here’s an example of the difference serpentine dithering makes, using the simple 2D error diffusion matrix, as it has the worst artifacts.

matrix: \left(\begin{array}{cc} * & 0.5 \\ 0.5 & 0 \end{array}\right)

In my experience, it makes sense to always do serpentine dithering for the best results.
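To make the matrix-mirroring concrete, here’s a minimal serpentine error diffusion sketch using that simple 2D matrix. For brevity it dithers grayscale to a two-level palette; the mirroring works the same way for colour. This is illustrative, not my library’s implementation:

```go
package main

import "fmt"

// serpentine dithers a grayscale image (16-bit values as float32) to
// black/white with the simple 2D matrix (* 0.5 / 0.5 0), reversing
// direction on every other row and mirroring the matrix horizontally
// when moving right to left. It modifies img in place as it diffuses.
func serpentine(img [][]float32) [][]uint16 {
	h, w := len(img), len(img[0])
	out := make([][]uint16, h)
	for y := 0; y < h; y++ {
		out[y] = make([]uint16, w)
		ltr := y%2 == 0 // left-to-right on even rows
		for i := 0; i < w; i++ {
			x := i
			if !ltr {
				x = w - 1 - i
			}
			// Quantize to the closer of black (0) and white (65535).
			var q uint16
			if img[y][x] >= 32767.5 {
				q = 65535
			}
			out[y][x] = q
			err := img[y][x] - float32(q)
			// Diffuse half the error to the next pixel in the travel
			// direction (this is the horizontal mirror), half downward.
			dx := 1
			if !ltr {
				dx = -1
			}
			if nx := x + dx; nx >= 0 && nx < w {
				img[y][nx] += err * 0.5
			}
			if y+1 < h {
				img[y+1][x] += err * 0.5
			}
		}
	}
	return out
}

func main() {
	img := [][]float32{{30000, 30000}, {30000, 30000}}
	fmt.Println(serpentine(img))
	// [[0 65535] [65535 0]]
}
```

Without the dx flip, the odd rows would push error onto already-finalized pixels and the diffusion would silently be lost at the row edge.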


In the ordered dithering section I talked about how a strength variable could be used to control how much dithering was applied. This can also be done for error diffusion, and it appears some graphical suites like ImageMagick even have this built-in, as is described here.

Here is the modified pseudocode for making use of a strength value:

roundClamp(r + er * strength * matrix[y][x])
roundClamp(g + eg * strength * matrix[y][x])
roundClamp(b + eb * strength * matrix[y][x])

And here’s an example of different strength values with Floyd-Steinberg dithering, with a palette of red, green, yellow, and black.

As you can see, the strength variable is pretty much inversely proportional with contrast. A value like 0.8 might be good to reduce some noise in the image, but go too low and the colours will just be inaccurate. Sticking to 1.0 is a reasonable default.

However this does not apply to Bayer dithering of color images. As Surma mentions, Bayer matrices are fundamentally biased to making images brighter. For color images, the bias occurs in all three RGB channels, and can distort image colors overall, making the final dithered image look inaccurate. Several sites recommend a value like 0.64 (256/4), and in my experience it’s a good default for color image dithering, with Bayer as well as ordered dithering in general.

Other things

This post is pretty long now, but I do have a few other things to share.

One is about clustered-dot dithering. It looks like this:

Currently my library just does clustered-dot dithering with some preprogrammed matrices I’ve found around the Internet and in a couple books. I was unable to find any proper instructions on how to generate a clustered-dot dither matrix, so that they can be created on the fly, and in any size. The books I read hinted that it was possible, but I couldn’t find anything. If you know how to do this, please contact me! I will update my library and this post if I figure anything out.

The other thing is that I used Joel Yliluoma’s per-cell algorithm for creating Bayer matrices, as written here (the second code block). Since I had to rewrite it in Go, it’s cleaner to read and understand than the original C. If you’re interested in implementing this yourself, you might prefer reading it. My implementation is here. It’s been tested to make sure it works the exact same as Yliluoma’s.


Thanks for reading! I hope this is helpful to anyone else who is interested in dithering. And if you know Go, check out my library! Make something cool with it. 😄

Looks like this is getting some traction on Hacker News! Feel free to comment there.