One piece I'd like to see more clarification on: is he doing multiple samples per pixel (like with ray tracing)? For his 1280x720 video, that's around 900k pixels, so at 30 kHz it would take around 30 s to record one of these videos if he were doing one sample per pixel. But in theory he could run this for much longer and get a less noisy image.
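For what it's worth, the back-of-the-envelope math as a quick sketch (the 30 kHz figure and the one-trigger-per-pixel assumption are from this comment, not confirmed by the video):

```python
# Rough scan-time estimate, assuming one scope trigger per pixel
# at a 30 kHz repetition rate (both numbers assumed, not confirmed).
pixels = 1280 * 720          # 921,600 pixels, ~900k
rate_hz = 30_000             # 30 kHz trigger rate
seconds = pixels / rate_hz   # ~30.7 s for a single pass
print(f"{pixels:,} pixels / {rate_hz:,} Hz = {seconds:.1f} s per pass")
```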
I find it interesting that a project like this would easily be a PhD paper, but nowadays Youtubers do it just for the fun of it.
It's humbling how well-rounded Brian is (along with other YouTubers such as Applied Science, StuffMadeHere, and HuygensOptics), on top of clearly being a skillful physicist: electronics, coding, manufacturing... and the guy is _young_ compared to the seasoned professionals mentioned in the parentheses.
If he randomized the sample positions with blue noise or something, he could use compressed sensing or AI denoising to get away with far fewer samples. The raw image wouldn't be as good, but after reconstruction it should be much better for the same sample count (rough sketch of the idea below). It might not be as easy to move the mirror in both axes between each sample, though.
edit: saw below that he's using a continuous scan, so randomizing it probably wouldn't be workable
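A toy sketch of the random-subsampling idea anyway, in case it's useful: the mask, noise level, and the interpolation step are all stand-ins I made up; a real compressed-sensing pipeline would swap the interpolation for a sparsity-prior solver or a learned denoiser.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
H, W = 72, 128  # scaled-down frame so the demo runs quickly
truth = np.fromfunction(lambda y, x: np.sin(x / 40) * np.cos(y / 25), (H, W))

frac = 0.25                                  # keep 25% of the pixels
mask = rng.random((H, W)) < frac             # white-noise mask; blue noise
                                             # would space samples more evenly
ys, xs = np.nonzero(mask)
samples = truth[ys, xs] + rng.normal(0, 0.05, ys.size)  # noisy measurements

# Reconstruct the unsampled pixels from the scattered measurements.
grid_y, grid_x = np.mgrid[0:H, 0:W]
recon = griddata((ys, xs), samples, (grid_y, grid_x), method="linear")

print("RMS error:", np.sqrt(np.nanmean((recon - truth) ** 2)))
```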
In one of the appendix videos he mentions that this would improve the noise; the issue is that the data rate when exporting from the scope is a bottleneck, and it would slow things down even more.