It's not about number definition issues. The argument is convincing that the difference of upper and lower range bounds should equal the number of contained elements, and that it's ugly to represent empty ranges with an upper bound that is lower than the lower bound. (I agree and I held that opinion before even reading this article).
Note that I rewrote his argument a little here, because it is not really about natural vs. unnatural, but more about looking at the difference between upper and lower bound. Now, if I may presume that we can agree that the difference of the bounds should equal the number of elements, leading to left-inclusive and right-exclusive bounds, the question is: would you rather have a lower bound of 0 (an entirely "natural" number that is already in play for subsequences of length 0), or would you introduce an entirely new number for the upper bound, size + 1, which also requires an addition to compute?
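To make that concrete, here is a minimal sketch of half-open ranges in C (the `range` type and names are my own illustration, not from the article):

```c
#include <assert.h>

/* A half-open range [lo, hi): left-inclusive, right-exclusive. */
typedef struct { int lo, hi; } range;

/* The element count is simply the difference of the bounds. */
int range_len(range r) { return r.hi - r.lo; }
```

With these bounds, an empty range is just lo == hi (e.g. {4, 4}), and a whole 10-element array is {0, 10}: the upper bound is the size itself, with no size + 1 to compute and no upper bound below the lower one.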
I don't care about definitions of natural numbers. As demonstrated, numbering elements as offsets makes a lot of sense for purposes of indexing, and if you insist on starting with 1, then you need to either lower your base pointer by 1 or subtract 1 at every indexing operation (neither of which is exactly simple), and you frequently need to add 1 to the size.
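A sketch of what that costs in C (the getter names are mine):

```c
#include <assert.h>

/* 0-based: element i lives at base + i, a plain offset. */
int get0(const int *base, int i) { return base[i]; }

/* 1-based on the same storage: subtract 1 on every access... */
int get1(const int *base, int i) { return base[i - 1]; }

/* ...or pre-lower the base pointer once; but note that base - 1
   points before the allocation, which is undefined behavior in C. */
```

Both getters read the same storage; the 1-based one just pays for the offset conversion on every call.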
I have no issues with zero-based indexing, and I don't think I've ever had to write quirky code because of it. The most "ugly" thing is that the last element must be indexed as (size - 1), which also makes some sense: the last element need not exist, and the subtraction makes clear that the access is dangerous.
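The point about the last element can be sketched like this (the helper is my own illustration):

```c
#include <assert.h>

/* The last element sits at size - 1, but it only exists when size > 0;
   the subtraction is a visible reminder to check for emptiness first. */
int last(const int *a, int size, int *out) {
    if (size == 0) return 0;   /* empty: there is no last element */
    *out = a[size - 1];
    return 1;
}
```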
> Which is more unnatural, 0, -1, NSNotFound?
I normally handle that as -1, which is absolutely fine, especially given that this value is special. I concede that you might prefer 0, since that aligns well with evaluating truthiness of integers. It also makes a lot of sense (mathematically/programmatically) to use "size" (i.e. the one-past-last index) as not-found, but that breaks when arrays are resized.
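As a sketch of the -1 convention (the search function is my own example, not how NSNotFound is actually defined):

```c
#include <assert.h>

/* Linear search: returns the 0-based offset of the first match, or -1,
   a value no valid index can take. Using size ("one past last") as the
   sentinel would also work, but that sentinel changes every time the
   array is resized, while -1 stays invalid forever. */
int index_of(const int *a, int size, int needle) {
    for (int i = 0; i < size; i++)
        if (a[i] == needle) return i;
    return -1;
}
```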