
A lot of the pain of GitHub Actions gets much better using tools like action-tmate: https://github.com/mxschmitt/action-tmate

As soon as I need more than two tries to get some workflow working, I set up a tmate session and debug things using a proper remote shell. It doesn't solve all the pain points, but it makes things a lot better.


Tmate is not available anymore, and will be fully decommissioned[0]. Use upterm[1] and action-upterm[2] instead.

Honestly, this should be built into GitHub Actions.

[0] https://github.com/tmate-io/tmate/issues/322

[1] https://upterm.dev/

[2] https://github.com/marketplace/actions/debug-with-ssh


The core idea of YCbCr - decoupling chrominance and luminance information - definitely has merit, but the fact that we are still using YCbCr specifically is for historical reasons. BT.601 comes directly from analog television. If you want to truly decouple chrominance from luminance, there are better color spaces to choose from (opponent color spaces or ICtCp, depending on your use case).
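
For reference, here's roughly what that BT.601 coupling looks like in practice - a minimal Python/numpy sketch of the full-range RGB-to-YCbCr conversion, skipping the limited-range scaling and offsets real video uses:

    import numpy as np

    def rgb_to_ycbcr_bt601(rgb):
        """Full-range BT.601: rgb is an (..., 3) float array in [0, 1]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
        cb = (b - y) / 1.772                    # scaled so Cb lands in [-0.5, 0.5]
        cr = (r - y) / 1.402                    # same for Cr
        return np.stack([y, cb, cr], axis=-1)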

Similarly, chroma subsampling is motivated by psychovisual considerations, but I truly believe that enforcing it at the format level is just no longer necessary. Modern video encoders are much better at encoding low-frequency content at high resolutions than they used to be, so keeping chroma at full resolution with a lower bitrate would get you very similar quality while giving the encoder much more freedom (not to mention getting rid of all the headaches around chroma location and having to up- and downscale chroma whenever something needs to be processed in RGB).
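
For anyone unfamiliar: 4:2:0 subsampling stores chroma at half resolution in both directions. A toy sketch of that downsampling step, assuming even dimensions and ignoring chroma siting and filtering:

    import numpy as np

    def subsample_chroma_420(plane):
        """Average each 2x2 block of a chroma plane (2D numpy array, even dims),
        i.e. halve its resolution in both directions as 4:2:0 does."""
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))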

Regarding the tone of the article, I address that in my top-level comment here.


This may just be because mpv has higher-quality default settings for scaling and tonemapping. Try mpv with profile=fast, maybe. To properly compare mpv's and VLC's performance you'd need to fully match all settings across both players.


It was with the fast profile, using both software and hardware decoding. An important detail I forgot is that the video was AV1. I don't have the link to it now, but it was from Jellyfin's test files.


Thanks for bringing this up, since I'm realizing that I did not explicitly spell this out in the post. I'll add a paragraph making this even clearer.


Thanks for the "encoder/decoder" correction.

But yes, as the other reply says, I am aware of this distinction, and I make a point not to use the word "codec" at any other point in the article, while explaining in a lot of detail how much the encoder matters when encoding to a given format. I mention the term only to make people aware that it exists.

But, you're right, I will clarify this a bit more.


libav{format,codec,...} are just libraries for demuxing and decoding video. There is huge variability in how those libraries are used, let alone how the video is displayed (which needs scaling, color space conversions, tonemapping, subtitle rendering, handling playback timing, etc. etc.). mpv also has its own demuxer for matroska files, since libavformat's is very limited [1].

[1] https://github.com/mpv-player/mpv/wiki/libavformat-mkv-check...
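
To make the division of labor concrete, here's a rough sketch using PyAV (Python bindings for the libav* libraries); the filename is just a placeholder:

    import av  # PyAV wraps libavformat (demuxing) and libavcodec (decoding)

    container = av.open("input.mkv")           # libavformat: open and demux
    for frame in container.decode(video=0):    # libavcodec: decode the first video stream
        # This is where the libav* libraries stop: you have raw decoded frames
        # (e.g. yuv420p). Scaling, color conversion, tonemapping, subtitle
        # blending and presentation timing are all still the player's job.
        print(frame.pts, frame.format.name, frame.width, frame.height)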


One difference I can immediately point to is that VLC always renders subtitles at the video's storage resolution and then up/downscales all bitmaps returned by libass individually before blending them. This can create ugly ringing artifacts on text.

I've also seen many reports of it lagging or choking on complex subtitles, though I haven't had the time to investigate that myself yet.

Either way, it's not as simple as "both players use libass." Libass handles the rasterization and layout of subtitles, but players need to handle the color space mangling and blending, and there can be big differences there.
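
As a very simplified illustration of that blending step: libass hands the player monochrome coverage bitmaps plus a fill color for each, and compositing those onto the video frame is entirely the player's job. A toy Python sketch, ignoring libass's actual data layout and all the color space handling:

    import numpy as np

    def blend_subtitle(frame, mask, color, x, y):
        """frame: (H, W, 3) float RGB image; mask: (h, w) coverage in [0, 1];
        color: (3,) RGB fill color. Plain alpha blend at offset (x, y)."""
        h, w = mask.shape
        a = mask[..., None]                     # coverage as per-pixel alpha
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = region * (1 - a) + np.asarray(color) * a
        return frame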


Note that IINA is known to have problems with color spaces (though I haven't had time to investigate them myself yet), and it also uses libmpv, which is quite limited at the moment and does not support mpv's new gpu-next renderer. Nowadays mpv has first-party builds for macOS, which work very well in my opinion, so I'd recommend using those directly.


Original post author here.

It seems like the main criticisms I am getting for this article are because it's escaped past its main target audience, so let me clarify a few things.

This post was born out of me hanging out in communities where people would make their own shortened edits of TV series and, in particular, anime, often to cut out filler or padding. Many people there were making the mistakes mentioned in the post, in particular reencoding at every step without knowing how to actually control efficiency/quality. I spent a lot of time helping out individual people one-on-one, but eventually wrote the linked article to collect all of my advice in one place. That way I (or other people I know) can just link to it like "Read the section on containers here," and then answer any follow-up questions, instead of having to explain everything from scratch each time.

> It seems really weirdly written. / ranty format

So, yes, it does. It was born out of one-on-one explanations on Discord. I wouldn't be surprised if it seems condescending to a more advanced reader, but if I rant about some point to hammer it home, it's because it's a mistake I've seen people make often enough that it needs to be reinforced that much. I wouldn't write a professional article this way.

The other point many people seem to get hung up on is the "hate" on VLC. Let me clarify that I do not "hate" VLC at all, I just don't recommend it. VLC is only mentioned once on the entire page, precisely because I didn't want to slot in an intermission purely to list a bunch of VLC issues. I felt like that would qualify more as "hate."

That said, yes, pretty much no one I know in the fansubbing or encoding communities recommends VLC, because of various assorted issues. The rentry post [1] is often shared to list them, though I don't like that it doesn't give sources or reproducible examples for the issues it lists. I really do want to go through it and make proper samples and bug reports for all of these issues, I just haven't had the time yet.

Let me also clarify that I have nothing against the VLC developers. VideoLAN does great work even outside of VLC, and every interaction I've had with their developers has been great. I just do not recommend the tool.

[1] https://rentry.co/vee-ell-cee


Remux vs. reencode is itself a big point for video noobs such as myself.

In the past, cropping out a part of a video meant reencoding it with some random preset, which would often take longer than it needed to. However, I accidentally realized the difference when trying out Avidemux [1] and clipping videos together blazing fast (provided they're in the same container and format)!

[1] http://fixounet.free.fr/avidemux/
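
For reference, the same no-reencode cutting works with ffmpeg's stream copy as well. A rough sketch (filenames and timestamps are placeholders; with stream copy, cut points snap to keyframes):

    import subprocess

    # "-c copy" copies the compressed streams as-is (a remux), so this is nearly
    # instant - but the cut can only start at a keyframe.
    subprocess.run([
        "ffmpeg", "-i", "input.mkv",
        "-ss", "00:01:00", "-to", "00:02:00",
        "-c", "copy", "output.mkv",
    ], check=True)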


HDR is nothing more than metadata about the color spaces. The way the underlying pixel data is encoded does not change. HDR consists of

1. A larger color space, allowing for more colors (through different color primaries) and a higher brightness range (through a different gamma function)

2. Metadata (either static or per-scene or per-frame), like a scene's peak brightness or concrete tonemapping settings, which can help players and displays map the video's colors to the set of colors the display can actually reproduce.
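
As an illustration of point 1's "different gamma function": this is a sketch of the PQ (SMPTE ST 2084) transfer function used by e.g. HDR10, which maps the encoded signal directly to absolute luminance; the constants are the ones from the spec, and any display tonemapping is ignored:

    def pq_eotf(signal):
        """Map a non-linear PQ signal in [0, 1] to luminance in cd/m^2 (nits)."""
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        e = signal ** (1 / m2)
        return 10000 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)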

I actually have a more advanced but more compact "list of resources" on video stuff in another gist; that has a section on color spaces and HDR:

https://gist.github.com/arch1t3cht/ef5ec3fe0e2e8ae58fcbae903...

