
If you take a look at the xyY diagram:

> https://ciechanow.ski/images/color_gamut_srgb.svg

the gamut is defined by the triangle connecting the RGB primaries. If you could add a primary at a purer green, you'd get a bigger triangle and a wider gamut, but still not the entirety of the chromaticity space. If you start adding other components that lie outside that triangle, you expand the polygon further, but there will always be parts of the diagram it doesn't cover.
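To make the "bigger triangle" concrete, here's a minimal sketch (plain Python) using the published sRGB primary chromaticities plus a hypothetical cyan emitter at (0.06, 0.30), which lies outside the sRGB triangle. The shoelace formula gives a polygon's area in xy space:

    # Gamut area in CIE xy via the shoelace formula.
    # Vertices must be listed in hull order.
    def shoelace_area(pts):
        s = 0.0
        n = len(pts)
        for i in range(n):
            x1, y1 = pts[i]
            x2, y2 = pts[(i + 1) % n]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    # Published sRGB (BT.709) primary chromaticities: R, G, B.
    srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
    # Same, with a hypothetical cyan primary added between G and B.
    with_cyan = [(0.64, 0.33), (0.30, 0.60), (0.06, 0.30), (0.15, 0.06)]

    print(shoelace_area(srgb))       # ~0.112
    print(shoelace_area(with_cyan))  # ~0.154 -- bigger, still a polygon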

In order to encompass the entire chromaticity space, you'd have to have a large (technically infinite) number of components. You can never have a component outside the diagram, because the curved hull (the spectral locus) is the pure chromaticity of monochromatic light at each wavelength; light outside the curve doesn't exist, and anything below the straight "line of purples" would require wavelengths outside the visible range (UV/IR).
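You can make the "technically infinite" part quantitative with a toy model: the spectral locus is a convex curve, so stand in a circle for it. A regular N-gon inscribed in a circle covers a fraction (N/2π)·sin(2π/N) of its area, which climbs toward 1 as you add vertices (components) but never reaches it:

    import math

    # Toy model: inscribe a regular N-gon in a circle and compute the
    # fraction of the disc it covers. Any finite polygon inscribed in
    # a strictly convex curve misses some area.
    for n in (3, 4, 6, 12, 100):
        covered = (n / 2) * math.sin(2 * math.pi / n) / math.pi
        print(n, f"{covered:.4f}")  # 3 -> 0.4135, 4 -> 0.6366, ...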

Theoretically you could get much more coverage by adding more components as new types of emitters are developed. This is happening in high-end theatrical lighting, where you can cram a ton of different LEDs into a fixture, but it's not (yet) practical to cram them all into an on-screen pixel.

Things get even more mind-bending when you start illuminating objects with light instead of looking at the light directly, because then there's a benefit to adding components even if they don't expand the gamut. Once you have four or more components, you can make many colors an infinite number of ways (metamers), and two mixes might look absolutely indistinguishable when lighting a white wall, yet make skin tones look completely different depending on whether you built them from just RGB or from, say, amber and blue. That's not really related to your question, but I've just started delving into this area in my day job, and it's exciting and trippy. :)
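If you want to see the metamer freedom in the math, here's a small numpy sketch. The four primary XYZ columns below are made up for illustration (real values come from each emitter's measured spectrum); the point is that a 3x4 mixing matrix has a one-dimensional null space, so every reachable color has a whole line of drive-weight solutions:

    import numpy as np

    # Columns: hypothetical XYZ tristimulus vectors of four emitters
    # (R, G, B, A for amber) at full drive. Illustrative numbers only.
    P = np.array([
        [0.41, 0.36, 0.18, 0.50],   # X
        [0.21, 0.72, 0.07, 0.45],   # Y
        [0.02, 0.12, 0.95, 0.03],   # Z
    ])

    target = np.array([0.5, 0.5, 0.4])  # some target color in XYZ

    # Minimum-norm particular solution: 3 equations, 4 unknowns.
    w0, *_ = np.linalg.lstsq(P, target, rcond=None)

    # Null space of P: the last right-singular vector (P @ n ~= 0).
    n = np.linalg.svd(P)[2][-1]

    # Every w0 + t*n yields the same XYZ: a one-parameter metamer
    # family. (Physically realizable mixes also need weights >= 0.)
    for t in (0.0, 0.1, -0.1):
        w = w0 + t * n
        print(w, P @ w)  # different weights, identical color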



> it's not (yet) practical to cram them all into an on-screen pixel

At the pixel density (>500,000 subpixels per square inch) of current flagship smartphones, I think it would actually work just fine to have 6 or 9 (or whatever) different primaries, lay the subpixels out in some scattered but regular arrangement, and then use clever digital signal processing to figure out how to convert an RGB image into an RR'GG'BB' (or whatever) image. I would expect it to look visually identical at typical viewing distances for existing images, while potentially allowing someone with low-level hardware access to make a wider gamut / choose their preferred metamer / etc.
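A hedged sketch of that DSP step, assuming you've measured each subpixel type's XYZ contribution (the six-column matrix below is hypothetical): convert the incoming linear sRGB to XYZ with the standard BT.709 matrix, then pick one metamer, e.g. the minimum-norm drive levels via the pseudo-inverse:

    import numpy as np

    # Standard linear-sRGB -> XYZ (D65) matrix.
    RGB2XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    # Hypothetical XYZ contribution of six subpixel types at full
    # drive (R, R', G, G', B, B'); real values would be measured
    # per panel.
    P = np.array([
        [0.41, 0.30, 0.36, 0.20, 0.18, 0.14],
        [0.21, 0.15, 0.72, 0.55, 0.07, 0.05],
        [0.02, 0.01, 0.12, 0.05, 0.95, 0.70],
    ])

    P_pinv = np.linalg.pinv(P)  # 6x3 minimum-norm right inverse

    def rgb_to_six(rgb_linear):
        """Map one linear-RGB pixel to six subpixel drive levels."""
        xyz = RGB2XYZ @ rgb_linear
        drive = P_pinv @ xyz             # minimum-norm metamer
        return np.clip(drive, 0.0, 1.0)  # hardware can't go negative

    print(rgb_to_six(np.array([1.0, 0.5, 0.25])))

Clipping is a cop-out: a real pipeline would solve a small constrained least-squares per pixel (or bake the whole mapping into a 3D LUT), and that's exactly where the "choose your preferred metamer" freedom lives.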

I would expect it to be entirely achievable with current technology and not inordinately more expensive than current displays.

But the engineering effort and additional complexity might not provide enough benefit for display vendors to invest in that, vs. just continuing to optimize their current display concepts. Or they might not even be considering radical changes in strategy.



