This is a cool visualization, but the percentages for 140 bits don't really tell you much. If you need this in production, check the original papers (there was an improved analysis recently [0]) and test against your own production loads. Another point: with cuckoo filters you can greatly reduce false positives by increasing the fingerprint size, if your application doesn't tolerate them well.
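To make that concrete, here's a back-of-the-envelope in Python using the false-positive upper bound from the Fan et al. paper (fpp <= 2b / 2^f for b entries per bucket and f-bit fingerprints); bucket size 4 is just the paper's common choice, not something measured here:

    # Rough upper bound on cuckoo filter false-positive rate: fpp <= 2b / 2^f
    # (Fan et al., "Cuckoo Filter: Practically Better Than Bloom").
    def cuckoo_fpp_bound(fingerprint_bits, bucket_size=4):
        return 2 * bucket_size / 2 ** fingerprint_bits

    for f in (8, 12, 16):
        print(f"{f}-bit fingerprints: fpp <= {cuckoo_fpp_bound(f):.6f}")

Each extra fingerprint bit halves the bound, which is why bumping the fingerprint size is such a cheap lever.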
Also I notice this comparison isn't using easily comparable implementations -- e.g. this bloom filter does not support delete, but bloom filter implementations that support delete operations do exist.
Also the "Insertions may be rejected" thing would be a deal-breaker for a lot of applications.
If you can't guarantee that an insert will succeed, you have to be willing either to tolerate false negatives on top of the usual false positives (a silently dropped insert means the filter will later deny the item exists), or to throw away the cuckoo filter and rebuild it from scratch.
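For anyone wondering why an insert can fail at all, here's a minimal sketch of cuckoo insertion in Python; the 8-bit fingerprints, 4-slot buckets, and 500-kick limit are illustrative assumptions, not from any particular implementation:

    import random

    MAX_KICKS = 500
    NUM_BUCKETS = 1 << 16        # power of two so the xor trick below works
    BUCKET_SIZE = 4
    buckets = [[] for _ in range(NUM_BUCKETS)]

    def fingerprint(item):
        return hash(("fp", item)) & 0xFF or 1        # 8-bit, nonzero

    def index1(item):
        return hash(("ix", item)) % NUM_BUCKETS

    def alt_index(i, fp):
        # Partial-key cuckoo hashing: either candidate bucket can be
        # computed from the other using only the stored fingerprint.
        return (i ^ hash(("ix", fp))) % NUM_BUCKETS

    def insert(item):
        fp = fingerprint(item)
        i1 = index1(item)
        i2 = alt_index(i1, fp)
        for i in (i1, i2):
            if len(buckets[i]) < BUCKET_SIZE:
                buckets[i].append(fp)
                return True
        # Both candidate buckets are full: evict entries and relocate them.
        i = random.choice((i1, i2))
        for _ in range(MAX_KICKS):
            victim = random.randrange(BUCKET_SIZE)
            fp, buckets[i][victim] = buckets[i][victim], fp
            i = alt_index(i, fp)
            if len(buckets[i]) < BUCKET_SIZE:
                buckets[i].append(fp)
                return True
        return False        # gave up after MAX_KICKS: insertion rejected

That final return False is the deal-breaker: past the kick limit the structure can't promise anything, so the caller has to handle rejection.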
Yeah, absolutely. You get the same "feature" from Counting Blooms - i.e. if an insertion would overflow any of the counters, the filter can knowingly reject the insertion.
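In code the check is trivial; everything below (4-bit counters, 4 hash functions, table size) is made up for the example:

    NUM_COUNTERS = 1 << 20
    NUM_HASHES = 4
    MAX_COUNT = 15                      # 4-bit saturating counters
    counters = [0] * NUM_COUNTERS

    def positions(item):
        return [hash((k, item)) % NUM_COUNTERS for k in range(NUM_HASHES)]

    def insert(item):
        pos = positions(item)
        if any(counters[p] == MAX_COUNT for p in pos):
            return False        # would overflow a counter: knowingly reject
        for p in pos:
            counters[p] += 1
        return True

    def delete(item):
        # Only safe for items that were actually inserted.
        for p in positions(item):
            counters[p] -= 1

The price, of course, is 4x the memory of a plain Bloom filter for the same number of slots.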
This is both a feature and a limitation. In our use case, our filter grows predictably, and every few months a rejected insertion triggers a resize/rebuild from source data. Because we could tolerate a full rebuild, we chose, for now, not to implement the technique of growing the filter by adding successively larger filter segments with lower fpp guarantees, though that is an option (sketched below).
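For reference, the segment-growing idea looks roughly like this; FilterSegment is a toy stand-in (a set with a capacity cap) for a real cuckoo or Bloom segment, so only the shape of the technique is real here:

    # Toy stand-in for a real filter segment: fixed capacity, rejects when full.
    class FilterSegment:
        def __init__(self, capacity, fpp):
            self.capacity, self.fpp, self.items = capacity, fpp, set()

        def insert(self, item):
            if len(self.items) >= self.capacity:
                return False            # full: reject, like a real filter would
            self.items.add(item)
            return True

        def __contains__(self, item):
            return item in self.items

    class GrowableFilter:
        def __init__(self, capacity=1024, fpp=0.01):
            self.segments = [FilterSegment(capacity, fpp)]

        def insert(self, item):
            if not self.segments[-1].insert(item):
                last = self.segments[-1]
                # Double the capacity, halve the per-segment fpp budget.
                self.segments.append(FilterSegment(last.capacity * 2, last.fpp / 2))
                self.segments[-1].insert(item)

        def __contains__(self, item):
            return any(item in seg for seg in self.segments)

Halving the per-segment fpp keeps the total false-positive rate bounded by roughly twice the initial fpp (it's a geometric series), which is where the "lower fpp guarantees" come from.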
Regarding the second point on deletions, the cool thing about Bin Fan's work on the cuckoo filter is that you get the added benefit of deletion in almost exactly the same memory footprint as a standard Bloom filter. This was the critical feature that led us to implement the structure for our use case.
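Deletion falls out almost for free from the structure; reusing the fingerprint/index1/alt_index helpers and buckets from the insert sketch upthread, it's just:

    def delete(item):
        fp = fingerprint(item)
        i1 = index1(item)
        for i in (i1, alt_index(i1, fp)):
            if fp in buckets[i]:
                buckets[i].remove(fp)   # drop one copy of the fingerprint
                return True
        return False

    def contains(item):
        fp = fingerprint(item)
        i1 = index1(item)
        return fp in buckets[i1] or fp in buckets[alt_index(i1, fp)]

The caveat is that you must only delete items you actually inserted; otherwise you can remove a colliding fingerprint that belongs to a different item.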
[0] https://news.ycombinator.com/item?id=11795779