I think you’re looking at the truly pathological case where the entire value returned by the hash function is identical for all keys. I’m thinking of the more realistic case where it’s identical modulo the number of buckets (for a small number of buckets). In that case, rehashing is likely to help (since if you double the number of buckets, you can use one extra bit of the hash value). In this scenario I think Go will rehash whereas Rust won’t. At the very least, Go in principle has the option of rehashing in this scenario, as it makes no guarantee regarding allocations.
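To make the scenario concrete, here's a toy sketch in Go (the identity hash and power-of-two bucket masking are illustrative assumptions, not how Go's runtime actually hashes or lays out its maps): the keys all collide modulo 8 buckets, but once the table doubles to 16 buckets, one extra hash bit comes into play and spreads them out.

```go
package main

import "fmt"

// toyHash is a stand-in hash function (not Go's real map hash), chosen only
// to illustrate keys whose hashes differ but collide modulo a small bucket count.
func toyHash(k uint64) uint64 {
	return k // identity hash, purely for illustration
}

// bucketCounts tallies how many keys land in each bucket when the bucket is
// picked by masking the low bits of the hash, as power-of-two tables typically do.
func bucketCounts(keys []uint64, nBuckets uint64) map[uint64]int {
	counts := make(map[uint64]int)
	for _, k := range keys {
		counts[toyHash(k)&(nBuckets-1)]++
	}
	return counts
}

func main() {
	// Keys whose hashes are all congruent mod 8: every one lands in bucket 0
	// when there are 8 buckets.
	var keys []uint64
	for i := uint64(0); i < 16; i++ {
		keys = append(keys, i*8)
	}

	fmt.Println(" 8 buckets:", bucketCounts(keys, 8))  // all 16 keys pile into bucket 0
	fmt.Println("16 buckets:", bucketCounts(keys, 16)) // one extra hash bit splits them across two buckets
}
```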
Preallocating makes sense, yes, but you can successfully preallocate in Go most of the time. Scenarios where you need guaranteed preallocation strike me as rare. At least, it’s not something I’ve ever found myself worrying about.
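Concretely, preallocation in Go is just a capacity hint passed to make (a minimal sketch; the hint lets the runtime size the map up front, but it's a hint rather than a guarantee of zero further allocations):

```go
package main

import "fmt"

func main() {
	const n = 10000

	// The second argument to make is a capacity hint: the map is sized to
	// hold roughly n entries, so inserting n keys shouldn't trigger
	// incremental growth along the way.
	m := make(map[string]int, n)

	for i := 0; i < n; i++ {
		m[fmt.Sprintf("key-%d", i)] = i
	}
	fmt.Println(len(m))
}
```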
If it's happening because we genuinely got unlucky, the strategy you describe is no more likely to help than anything else. If it's happening because your hash function is rubbish, your strategy works, but using a hash function that isn't rubbish also works for the good hash table designs and unlocks far better performance in the process.