> In the worst case - which is incredibly unlikely with a good hash, and you'll notice Rust supplies a good hash by default - we insert 623 items which all have the same hash. Ideally they'd all land in the same bucket, but instead they take up 623 contiguous buckets, all our searches become linear, and our performance is poor, with an average probe length of 312.
Yeah, this is the nub of the issue. You’re avoiding allocation by giving up amortized O(k) lookup. So there’s a trade-off here. Roughly, you have a choice between getting slow in unlikely scenarios or allocating in unlikely scenarios. In typical Go code, neither option is obviously better or worse in general. In the context of Rust, it makes sense to avoid allocations.
Preallocating hashtables of fixed size seems like a rare use case to me, though I don’t doubt that it might come up from time to time.
> You’re avoiding allocation by giving up amortized O(k) lookup
What's "k" here ?
Here's an experiment, I know how to write it in Rust, but not in Go, so let's make it a thought experiment. We have 623 custom Doodads which we've arranged (this is the part I know how to do in Rust but not Go) to have identical hashes despite being different. This clearly could happen (though it's fantastically unlikely) for real data but we're doing it artificially.
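Something like this sketch is what I have in mind in Rust (the Doodad type is made up for illustration; its Hash impl deliberately feeds the hasher the same bytes for every value, so the default hasher produces identical outputs for all of them):

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical Doodad: the values differ, but the Hash impl deliberately
// hashes a constant instead of the contents, so every Doodad collides.
#[derive(PartialEq, Eq)]
struct Doodad(u32);

impl Hash for Doodad {
    fn hash<H: Hasher>(&self, state: &mut H) {
        0u64.hash(state); // ignore self.0 on purpose: worst-case collisions
    }
}

fn main() {
    // Pre-size the table so none of these inserts trigger a resize.
    let mut table: HashMap<Doodad, usize> = HashMap::with_capacity(1024);
    for i in 0..623u32 {
        table.insert(Doodad(i), i as usize);
    }
    // Every lookup now walks a long run of colliding entries.
    assert_eq!(table.get(&Doodad(622)), Some(&622));
}
```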
With Swiss Tables, and several other techniques that come to mind, we end up with 623 contiguous entries and an average probe length of 312, which means our performance sucks, but we didn't allocate beyond the original 1024 single-item "buckets".
What happens with Go's approach? I think it ends up with all buckets empty except one; that one is full and has 77 overflow buckets chained on. In this case we've also got a 312 average probe length and the performance sucks even worse. We had to allocate those 77 overflow buckets, and if we make the mistake of trying to resize we allocate for that too and improve nothing at all.
ETA: Oh, and pre-allocating makes sense when we know up front how many things will go in the hash table, which happens really often when we're building a table once, using it, and then throwing it away. It will usually make sense to pre-allocate even if the table will grow later, because we can skip all the little sizes that we'd otherwise have to grow past.
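For instance (a minimal sketch; the input here is just a stand-in for whatever data we know the size of up front):

```rust
use std::collections::HashMap;

fn main() {
    let words = ["alpha", "beta", "gamma", "beta"]; // stand-in for known input
    // We know the final size up front, so reserve it once and skip every
    // intermediate capacity the table would otherwise have to grow through.
    let mut counts: HashMap<&str, u32> = HashMap::with_capacity(words.len());
    for w in words {
        *counts.entry(w).or_insert(0) += 1;
    }
    println!("{counts:?}");
}
```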
I think you’re looking at the truly pathological case where the entire value returned by the hash function is identical for all keys. I’m thinking of the more realistic case where it’s identical modulo the number of buckets (for a small number of buckets). In that case, rehashing is likely to help (since if you double the number of buckets, you can use one extra bit of the hash value). In this scenario I think Go will rehash whereas Rust won’t. At the very least, Go in principle has the option of rehashing in this scenario, as it makes no guarantee regarding allocations.
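To illustrate what I mean by "identical modulo the number of buckets" (a toy example, assuming the usual index = hash mod bucket-count mapping with power-of-two sizes):

```rust
fn main() {
    // Two hash values that share their low 3 bits but differ in the 4th bit.
    let (h1, h2): (u64, u64) = (0b0101, 0b1101); // 5 and 13

    let small = 8u64;  // bucket count before growth
    let large = 16u64; // bucket count after doubling

    // With 8 buckets they collide; with 16 buckets the extra hash bit
    // separates them, so rehashing into the bigger table actually helps.
    assert_eq!(h1 % small, h2 % small);
    assert_ne!(h1 % large, h2 % large);
}
```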
Preallocating makes sense, yes, but you can successfully preallocate in Go most of the time. Scenarios where you need guaranteed preallocation strike me as rare. At least, it’s not something I’ve ever found myself worrying about.
If it's happening because we genuinely got unlucky, the strategy you describe is no more likely to help than anything else. If it's happening because your hash function is rubbish, your strategy works, but using a hash function which isn't rubbish also works for the good hash table designs, and unlocks far better performance in the process.