
I have a copy of "Digital Halftoning" from Robert Ulichney at home, if it's anywhere it would be there. I'll try to remember to look at that tonight.

https://mitpress.mit.edu/books/digital-halftoning



Thank you! If it's not in there then that narrows it down to innovations since 1987 ;)


Section 8.3 on page 265 has the following:

In 1983, Billotet-Hoffman and Bryngdahl [11] suggested using an ordered dither threshold array in place of the fixed threshold used in error diffusion. However, the resulting halftoned output differs little from conventional ordered dither.

Later in the chapter they explore the idea of using a random threshold instead of an ordered pattern.

[11] Billotet-Hoffman, C. and O. Bryngdahl (1983) "On the error diffusion technique for electronic halftoning", Proc. SID, vol. 24, pp. 253-258


Great, that's a perfect start for Google Scholar! Now I can see which papers cited it, and so on.

I disagree with their conclusions, though; look at this crosshatching dither that emerges from totally different parents:

https://mobile.twitter.com/JobvdZwan/status/1201613029013671...

The two-bit version looks pretty neat with its pseudo-crosshatching effect, no?

Although I have to add that I had to rework the error-diffusion implementation to minimize rounding error; before I did that the results weren't so great. So if we make the relatively safe assumption that they used a "classic" eight-bit in-place optimized implementation, it might be true that it did not produce good results. And I'm sure that in their day they did not have the computing resources to quickly experiment with different combinations of dithering and ordering like I did.

Anyway, thank you so much for taking the time to look this up!


I'm not sure what you mean by rounding error. The whole point of error diffusion is that any errors are propagated so that they average out. I've coded Floyd-Steinberg many times myself, and the only problem I ever had was when my outputs didn't include 0% and 100%, leaving the error to build up without correction.
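
The core loop, from memory, looks roughly like this (a TypeScript sketch; the buffer layout and names are just illustrative):

  // 1-bit Floyd-Steinberg on a row-major grayscale image, values 0..255.
  // The error lives in a float buffer, so nothing is ever clipped.
  function floydSteinberg(pixels: Float32Array, width: number, height: number): Uint8Array {
    const out = new Uint8Array(width * height);
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        const i = y * width + x;
        const old = pixels[i];
        const q = old < 128 ? 0 : 255; // outputs include 0% and 100%
        out[i] = q;
        const err = old - q;
        // Diffuse the full error to unvisited neighbours; weights sum to 16/16.
        if (x + 1 < width) pixels[i + 1] += err * 7 / 16;
        if (x > 0 && y + 1 < height) pixels[i + width - 1] += err * 3 / 16;
        if (y + 1 < height) pixels[i + width] += err * 5 / 16;
        if (x + 1 < width && y + 1 < height) pixels[i + width + 1] += err * 1 / 16;
      }
    }
    return out;
  }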


I basically mean clipping the diffused error - for example, if I propagate a positive error to a pixel that is already white, the pixel saturates at full white and the clipped-off part of the error never gets propagated to the next pixels.
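
A tiny hypothetical snippet (TypeScript, not from the notebook) shows where the error goes missing:

  // A clamped byte buffer saturates at 255, and the overflow just vanishes.
  const row = new Uint8ClampedArray([250, 128]);
  row[0] += 40; // 290 clamps to 255; the excess 35 is silently lost
  // An in-place pass over such a buffer can never pass that 35 along.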

In the linked notebook, I use a separate error array made out of Int32 values (which is probably overkill; Int16 would suffice) and postpone division until the last moment. It's a small detail but it makes a significant difference in the hybrid output.
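
In spirit it's something like this simplified TypeScript sketch (not the notebook's exact code; the error is kept scaled by 16 and only divided when it's consumed):

  // Error diffusion with a separate integer error buffer. `err` holds the
  // accumulated error scaled by 16 (an Int16Array would also do), and the
  // division by 16 is postponed until the error is actually applied.
  function ditherSeparateError(src: Uint8Array, width: number, height: number): Uint8Array {
    const out = new Uint8Array(src.length);
    const err = new Int32Array(src.length);
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        const i = y * width + x;
        const value = src[i] + (err[i] >> 4); // divide at the last moment
        const q = value < 128 ? 0 : 255;
        out[i] = q;
        const e = value - q; // the full error survives, nothing is clipped
        if (x + 1 < width) err[i + 1] += e * 7;
        if (x > 0 && y + 1 < height) err[i + width - 1] += e * 3;
        if (y + 1 < height) err[i + width] += e * 5;
        if (x + 1 < width && y + 1 < height) err[i + width + 1] += e * 1;
      }
    }
    return out;
  }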


Error is error; it would never have occurred to me to clip it. I can see how that would make a big difference.


Me neither, but it doesn't even have to be a conscious choice! Let's say you want to implement an in-place algorithm to save memory (which I presume was a realistic trade-off to consider back in the early eighties), and your inputs are single bytes for each channel; then your only option is to do saturating arithmetic. Clipping just falls out of that as a side-effect.
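
Something like this hypothetical in-place variant (TypeScript standing in for the tight eight-bit code of the era):

  // In-place error diffusion: the image buffer itself carries the error.
  // With one clamped byte per pixel, every += saturates at 0 or 255, so
  // clipping the error happens implicitly rather than as a deliberate choice.
  function inPlaceDither(img: Uint8ClampedArray, width: number, height: number): void {
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        const i = y * width + x;
        const old = img[i];
        const q = old < 128 ? 0 : 255;
        img[i] = q;
        const e = old - q;
        // Any overflow past [0, 255] in these adds is silently discarded.
        if (x + 1 < width) img[i + 1] += e * 7 / 16;
        if (x > 0 && y + 1 < height) img[i + width - 1] += e * 3 / 16;
        if (y + 1 < height) img[i + width] += e * 5 / 16;
        if (x + 1 < width && y + 1 < height) img[i + width + 1] += e * 1 / 16;
      }
    }
  }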



