> In another example, 1 nanosecond and 1 millisecond, separated by six orders of magnitude, are both effectively instant to a human... at least, as long as it's only happening once. Start looping on that and the differences rapidly become, ah, effective.
Well, yeah. Looping over some operation multiplies its complexity by the trip count: n·O(inner operation). That turns an O(log(n)) operation into O(n·log(n)) overall.
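A toy sketch of that multiplication (the names and sizes here are mine, just for illustration): a single binary search over a sorted list of n items is O(log n), but calling it once per query in a loop of m queries makes the whole thing O(m·log n).

```python
import bisect

# One binary search over n sorted items is O(log n).
# Repeating it for each of m queries multiplies that inner cost
# by the trip count: the loop as a whole is O(m * log n).
haystack = list(range(1024))   # sorted, n = 1024
queries = [3, 500, 1000]       # m = 3

positions = [bisect.bisect_left(haystack, q) for q in queries]
print(positions)  # [3, 500, 1000]
```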
Also, anyone who's actually learned anything about time complexity analysis and big-O notation will know that the hidden constants can and often do swamp the asymptotic factors in practice, depending on the algorithm.
Finally, I think it's totally acceptable to take these analyses out of the theoretical realm of math and apply them to real-world limits. If there's a hard limit on n such that it's not unbounded, why not use that as an upper-bound constant where applicable?
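The hidden-constants point can be made concrete with a toy cost model (the constants 100 and 2 below are invented purely for illustration, not measured from any real algorithm): an O(log n) algorithm with a heavy per-step cost loses to an O(n) algorithm with a tiny per-step cost until n passes the crossover point.

```python
import math

# Hypothetical cost models -- the constants are made up for illustration.
def fancy_cost(n):
    """O(log n) asymptotics, but a large constant (e.g. heavy per-step setup)."""
    return 100 * math.log2(n)

def naive_cost(n):
    """O(n) asymptotics, but a tiny constant."""
    return 2 * n

# The "worse" algorithm wins until 2n exceeds 100*log2(n).
for n in (8, 64, 512, 4096):
    winner = "naive" if naive_cost(n) < fancy_cost(n) else "fancy"
    print(f"n={n:5d}  naive={naive_cost(n):7.0f}  fancy={fancy_cost(n):7.0f}  -> {winner}")
```

If the problem guarantees n stays below that crossover, the "asymptotically worse" choice is simply the faster one, which is the practical point about treating a hard cap on n as a constant.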