
Because the specks of grain aren't at the exact same coordinates? What differences are we talking about here exactly?


The differences are actual film grain vs some atrocious RGB noise artificially added by the streamer. How is that unclear? What else could we be talking about?


Right, the current implementation is bad.

In theory though, I don't see any reason why client-side grain that looks identical to the real thing shouldn't be achievable, with massive bandwidth savings in the process.

It won't be pixel-for-pixel identical, but that's why I said no director is placing individual grain specks anyway.
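
For the "identical-looking but not pixel-identical" part, something along these lines is all a client would need. This is a toy sketch with made-up parameter names, not any shipping decoder's implementation:

    import numpy as np

    def synthesize_grain(decoded_frame, seed, strength):
        """Overlay synthetic grain on a decoded frame.

        decoded_frame: 2D float array in [0, 1] (the heavily denoised stream)
        seed:          lets the client regenerate the same grain pattern on every playback
        strength:      grain amplitude chosen on the mastering side
        """
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(decoded_frame.shape)
        # Real grain is more visible in mid-tones than in crushed blacks or
        # blown-out whites, so scale the noise by a simple luma-dependent curve.
        scale = strength * (0.25 + decoded_frame * (1.0 - decoded_frame))
        return np.clip(decoded_frame + scale * noise, 0.0, 1.0)

Same seed and strength gives the same-looking grain on every playback, and none of it has to survive the encoder.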


> with massive bandwidth savings in the process

Let's be clear. The alternative isn't "higher bandwidth"; it's "aggressive denoising during stream encode". If the studio is adding grain in post, then describing that grain as a set of parameters will result in a higher-quality experience for the vast majority of viewers in this day and age.
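
To put a number on "a set of parameters": the whole grain description can be a handful of floats per frame or scene, versus the bitrate it takes to faithfully code high-frequency noise. Here is a toy sketch of the encoder side, and only a sketch: as I understand it, AV1's film grain synthesis works in this spirit but with an autoregressive grain model and piecewise scaling functions, so treat the function below as illustrative rather than how any real encoder does it.

    import numpy as np

    def estimate_grain_params(source, denoised, bins=8):
        """Describe the grain as parameters instead of encoding the noise itself.

        source:   2D float frame in [0, 1] with the grain baked in
        denoised: the same frame after aggressive denoising (what gets encoded)
        bins:     how many brightness bands get their own grain strength

        Returns a per-brightness-band grain standard deviation; the client can
        feed something like this into a grain synthesizer at playback time.
        """
        residual = source - denoised                 # the grain we stripped out
        band = np.clip((denoised * bins).astype(int), 0, bins - 1)
        strengths = np.zeros(bins)
        for b in range(bins):
            mask = band == b
            if mask.any():
                strengths[b] = residual[mask].std()  # grain amplitude in this band
        return strengths                             # a handful of floats vs. megabits of noise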


If the original is an actual production shot on film, the grain is naturally part of it, and removing it never looks good. If it was shot on a digital camera and had grain added in post, then you can go back to before the grain was added and do it client side without degradation. But you can never get an identical result when it originated on film. That's like saying you can take someone's freckles away, put them back rearranged, and call it the same.


Sorry, I'm talking about the case where grain was added in post. You originally said, and I quote:

> If a director/producer wants film grain added to their digital content, that's where it should be done in post.

To me, this philosophy seems like a patent waste of bandwidth.



