
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
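A quick sketch of that 2x2 tile (assuming the RGGB ordering shown above) makes the "twice as many green" point concrete — tile it across a small sensor and count the photosites per color:

```python
import numpy as np

# The 2x2 Bayer tile from the example (assumed RGGB order):
#   R G
#   G B
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Tile it into a small 8x8 mosaic and count photosites per color.
sensor = np.tile(tile, (4, 4))
counts = {c: int((sensor == c).sum()) for c in "RGB"}
print(counts)  # half the sites sample green, a quarter each red and blue
```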


And this is important because our perception is more sensitive to luminance changes than to color, and since our eyes are most sensitive to green, luminance effectively tracks green. So you get higher perceived spatial resolution by using more green pixels [1]. This is also why JPEG stores the chroma (color) channels at lower resolution than luminance, and why modern OLED panels usually use a PenTile layout, with only green at full resolution [2].
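The "luminance is mostly green" claim can be checked against standard luma coefficients. A minimal sketch, using the Rec. 709 weights (an assumption on my part — other standards like Rec. 601 use slightly different values, but green dominates in all of them):

```python
# Rec. 709 luma: Y = 0.2126*R + 0.7152*G + 0.0722*B
# Green dominates perceived brightness, which is why sensors and
# displays spend their extra samples on green.
r, g, b = 0.2126, 0.7152, 0.0722
assert abs(r + g + b - 1.0) < 1e-9  # weights sum to 1

print(g > r + b)  # True: green alone outweighs red and blue combined
```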

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family


Funny that display subpixels and camera sensors don't use the same layouts.


It would only matter if you viewed the image at one-to-one resolution, which is extremely rare.

PenTile displays are acceptable for photos and videos, but look really horrible displaying text and fine detail --- almost like what you'd see on an old triad-shadow-mask colour CRT.



