> I don't know about survival bias. LLMs are well suited to this task of taking in this cloud of soft data like a description of symptoms and spitting out a potential diagnosis.
And it will do so confidently and incorrectly. A single description of symptoms from a patient is very unlikely to be enough. This is why doctors are there to ask follow-up questions and do examinations. Symptoms alone can describe a dozen different illnesses.
I read a while back that bzip2 is named that way because the original bzip used arithmetic coding; its author then made bzip2 use Huffman coding instead, due to patent problems with arithmetic coding.
> I know this position is wrong, but it feels hard to spend my time on something that someone else might not have spent the time to create
I don't think that position is wrong. I felt similarly when tutoring a high-school student recently. They hadn't done any work themselves; they were forced by their parent to come to me two days before a test. I offered to help, but once I realized the student didn't care enough to study on their own, I basically lost all motivation to help them.
It feels the same with AI-generated content. If the "creator" didn't care to spend time on it, why should I spend mine?
ISO C99 actually defines several distinct categories of such behaviour. What you're describing is closer to implementation-defined behaviour than anything else.
The three behaviours relevant in this discussion, from section 3.4:
3.4.1 implementation-defined behavior
unspecified behavior where each implementation documents how the choice is made
EXAMPLE An example of implementation-defined behavior is the propagation of the high-order bit when a signed integer is shifted right.
3.4.3 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
EXAMPLE An example of undefined behavior is the behavior on integer overflow.
3.4.4 unspecified behavior
behavior where this International Standard provides two or more possibilities and imposes no further requirements on which is chosen in any instance
EXAMPLE An example of unspecified behavior is the order in which the arguments to a function are evaluated.
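To make those three concrete, here's a minimal sketch (the signed-overflow case is left as a comment, since actually executing it imposes no requirements at all):

    #include <stdio.h>

    static int side(const char *tag) { printf("%s ", tag); return 0; }

    int main(void) {
        /* Implementation-defined: each compiler documents whether the
           sign bit propagates on a signed right shift. */
        int a = -8 >> 1;
        printf("a = %d\n", a);

        /* Undefined: signed overflow, e.g. INT_MAX + 1; the standard
           imposes no requirements on what happens. */

        /* Unspecified: "x" and "y" may print in either order, because
           the evaluation order of function arguments isn't fixed. */
        printf("%d %d\n", side("x"), side("y"));
        return 0;
    }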
K&R also seems to mention "undefined" and "implementation-defined" behaviour on several occasions. It doesn't specify what is meant by undefined behaviour, but it does indeed seem to be "whatever happens, happens" rather than "the compiler can do whatever it wants." ISO C99's definition seems to be a lot looser.
Using integer overflow, as in your example, for optimization has been shown to be beneficial by Chandler Carruth in a talk he gave at CppCon 2016.[1] I think it would be best to have something similar to Zig's wrapping and saturating addition operators instead, but at that point I think it is better to just use Zig (which I personally am very willing to do once it reaches 1.0 and other compiler implementations are available).[2]
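To sketch the kind of thing that talk shows (my own minimal example, not Carruth's exact code): on a 64-bit target, a 32-bit signed index can be promoted to a full register precisely because its overflow is UB.

    /* Because signed overflow of 'i' is UB, the compiler may assume it
       never wraps and keep it in a 64-bit register on x86-64, avoiding
       a sign extension on every iteration. */
    void scale_signed(float *a, int n) {
        for (int i = 0; i < n; i++)
            a[i] *= 2.0f;
    }

    /* With 'unsigned', wrap-around is defined behaviour the compiler
       must preserve, which can block the same widening in less trivial
       loops (strides, offsets, etc.). */
    void scale_unsigned(float *a, unsigned n) {
        for (unsigned i = 0; i < n; i++)
            a[i] *= 2.0f;
    }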
[1] is probably the best counterpoint I've seen, but there are other ways to enable this optimization, the most obvious being to use a register-sized index, which is what's passed to the function anyway. I'd be fine with an intrinsic for this as well (I don't think you'd use it often enough to justify the +%= syntax).
It's also worth noting that even with the current very liberal handling of UB, the actual code sample in [1] was still missing this optimization, so it's not as if liberal UB handling automatically led to faster code; understanding the compiler was still needed.
The question is one of risk: if the compiler is conservative, you're risking slightly less optimized code. If the compiler is very liberal and assumes UB never happens, you're risking that it will wipe out your overflow check, like in my godbolt (I've seen an actual CVE caused by that, although I don't remember the project).
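To make that risk concrete, here's the general shape of such a check (a minimal sketch, not the code from my godbolt):

    #include <limits.h>

    /* Broken: by the time the check runs, the overflow (which is UB)
       has already happened, so an optimizer that assumes UB never
       occurs may legally delete the whole branch. */
    int add_checked_broken(int a, int b) {
        int sum = a + b;       /* UB on overflow */
        if (b > 0 && sum < a)  /* post-hoc check: may be removed */
            return -1;
        return sum;
    }

    /* Safe: test the precondition before adding, so no UB occurs. */
    int add_checked_safe(int a, int b) {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return -1;         /* would overflow */
        return a + b;
    }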
Recompiling the dependencies should only really happen if you change the file with the implementation include (usually done by defining <library>_IMPLEMENTATION or something like that).
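For reference, the single-header pattern looks like this ('mylib' is a placeholder name):

    /* mylib_impl.c: the ONE translation unit that compiles the
       implementation; only changes to this file (or the header itself)
       should trigger a rebuild of the library code. */
    #define MYLIB_IMPLEMENTATION
    #include "mylib.h"

    /* every other file: a plain include, which pulls in declarations
       only, so edits elsewhere don't recompile the library */
    #include "mylib.h"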
I'm not very familiar with Nix or its language, but why would interpreting Guile Scheme for package management be expensive? What are Guix and Nix doing that would require evaluating everything lazily for good-enough performance?
I checked the spec and Scheme R5RS does have lazy evaluation in the form of promises using "delay" and "force", but I can see why explicitly having to put those everywhere isn't a good solution.
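For anyone unfamiliar with the mechanism: a promise pairs a deferred computation with a memoized result. A rough sketch of those semantics in C (names are mine; real Scheme promises are of course more general):

    #include <stdio.h>

    /* A promise: a thunk plus a cached result. */
    typedef struct {
        int (*thunk)(void);  /* the delayed computation */
        int  value;          /* memoized result */
        int  forced;         /* nonzero once the thunk has run */
    } promise;

    /* force: run the thunk at most once, then reuse the cached value. */
    static int force(promise *p) {
        if (!p->forced) {
            p->value  = p->thunk();
            p->forced = 1;
        }
        return p->value;
    }

    static int expensive(void) { puts("computing..."); return 42; }

    int main(void) {
        promise p = { expensive, 0, 0 };  /* "delay": wrap, don't run */
        printf("%d\n", force(&p));        /* "computing..." then 42 */
        printf("%d\n", force(&p));        /* memoized: just 42 */
        return 0;
    }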
I made a table because I wanted to test more files, but almost all PDFs I had downloaded or stored locally were already compressed, and I couldn't quickly find a way to decompress them.
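(In case it helps anyone trying to reproduce this: qpdf can apparently expand the internal streams, e.g. 'qpdf --stream-data=uncompress in.pdf out.pdf'.)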
Brotli seemed to have a very slight edge over zstd, even on the larger PDF, which I did not expect.
EDIT: Something weird is going on here. When compressing with zstd in parallel it produces the garbage results seen here, but when compressing on a single core it produces results competitive with Brotli (37M). See: https://news.ycombinator.com/item?id=46723158
Turns out these numbers are caused by APFS weirdness. I used 'du' to get them, which reports the size on disk, and that is weirdly bloated for some reason when compressing in parallel. I should've used 'du -A', which reports the apparent size.
Here's a table with the correct sizes, as reported by 'du -A':
I wasn't sure. I just went in with the (probably faulty) assumption that if a file compresses to less than 90% of its original size, it has enough "non-randomness" to compare compression performance.
Ran the tests again with some more files, this time decompressing the PDF in advance. I picked some widely available PDFs to make the experiment reproducible.
For learnopengl.pdf I also tested the decompression performance, since it is such a large file, and got the following (less surprising) results using 'perf stat -r 5':
The conclusion seems to be consistent with what brotli's authors have said: brotli achieves slightly better compression, at the cost of decompressing at a little over half the speed.
Maybe that was too strongly worded, but there was an expectation for zstd to outperform, so the fact that it didn't makes the result unexpected. I generally find it helpful to understand why something performs better than expected.
Isn't zstd primarily designed to provide decent compression ratios at amazing speeds? The reason it's exciting is mainly that you can add compression to places where it didn't necessarily make sense before because it's almost free in terms of CPU and memory consumption. I don't think it has ever had a stated goal of beating compression ratio focused algorithms like brotli on compression ratio.
I actually thought zstd was supposed to be better than Brotli in most cases, but a bit of searching reveals you're right... Brotli, especially at the highest compression levels (10/11), often exceeds zstd at the highest compression levels (20-22). Both are very slow at those levels, although that's perfectly suitable for "compress once, decompress many" applications, of which the PDF spec is obviously one.
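For anyone who wants to compare the top levels themselves, the invocations should be roughly 'zstd --ultra -22 -k file' (levels above 19 need --ultra) and 'brotli -q 11 -k file'.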