It feels obvious that that's where the term originated, but I've never seen it used as a definition. In a mathematical context, something is enumerable if it can be put into 1:1 correspondence with the integers, but it doesn't need to be defined by a canonical correspondence. This suggests that the defining feature is being a finite set of discrete values (finite because, in a programming context, the set of ints is finite), not the representation.
> In most applications of int enums, the particular integers can be chosen at random
I'm not sure the definition of "enum" enforces how things are identified. Random choice would be as good as any other, theoretically. In practice, as it relates to programming, random choice is harder to implement because of collision possibilities. Much simpler is to just increment an integer, which is how every language I've ever used does it; even Rust, whose implementation is very similar to Go's.
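For what it's worth, here's a minimal Go sketch of that incrementing-integer convention using iota (the names are just an illustration); Rust's fieldless enums default to the same 0, 1, 2, ... discriminants:

    package main

    import "fmt"

    // Color is the usual Go-style "enum": a named integer type whose
    // constants get their values by incrementing iota.
    type Color int

    const (
        Red   Color = iota // 0
        Green              // 1
        Blue               // 2
    )

    func main() {
        fmt.Println(Red, Green, Blue) // prints: 0 1 2
    }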
But the key takeaway remains that the enum is a value. The whole reason for using an enum is runtime comparison. It wouldn't even make sense for it to be a type, given how it is typically used; it is bizarre that it keeps getting called one.
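As a tiny sketch of what that runtime comparison looks like in practice (the names here are made up for illustration):

    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    // describe branches on the enum's value at runtime, which is the
    // typical reason for defining a small set of named constants at all.
    func describe(c Color) string {
        switch c {
        case Red:
            return "warm"
        case Green, Blue:
            return "cool"
        default:
            return "unknown"
        }
    }

    func main() {
        fmt.Println(describe(Green)) // prints: cool
    }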
Sum types can be put into 1:1 correspondence with the integers, barring a language specification that allows a non-enumerable type to appear inside a sum type. However, I would observe that this is generally a parlor trick, and it's fairly uncommon to simply iterate through a sum type.

As is so often the case, the exceptions will leap to mind and some people are already rushing to contradict me in the replies... but they are coming to mind precisely because they are the exceptions. Yes, you can sensibly iterate over "type Color = Red | Green | Blue"; I've written code to do the equivalent in various languages many times, and most complicated enumerations (in the old sense) I write come equipped with some array of all the legal values so people can iterate over them (if they are not contiguous for some reason), so I know it can be done and can be useful.

But the instant you have a general number type, or goodness help you, a generalized String type, as part of your sum type, you aren't going to be iterating over all possible values. And the way in which you can put the sum types into a 1:1 correspondence won't match your intuition either, since you'll need to diagonalize on the type; otherwise any unbounded array/string will get you "stuck" in the mapping and you'll never get past it.
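As a sketch of the "array of all the legal values" convention mentioned above (the AllColors name is my own, not anything standard):

    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    // AllColors is a hand-maintained list of every legal value, so that
    // callers can iterate over the enumeration when they need to.
    var AllColors = []Color{Red, Green, Blue}

    func main() {
        for _, c := range AllColors {
            fmt.Println(c)
        }
    }

Nothing like this is possible once a variant carries an unbounded payload such as a string, which is the point above.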
So while you can theoretically argue it makes sense to call them an "enum", I don't like it, precisely because "enumerating" the "enum" types (being sum types here) is not, in general, a sensible operation. It is sensible in specific cases, but that's not really all that special. We don't generally name types by what a small percentage of the instances can do or are; we name them by what all instances can do or are. A degenerate sum type "type Value = Value" is still a sum, albeit a degenerate one of "1", but nobody ever enumerates all values of "type Email = Email { username :: String, domain :: String }". (Or whatever more precise type you'd like to use there. Just the first example that came to mind.) There are also cases where you actively don't want users enumerating your sum type, e.g., some sort of token that grants secure access to some resource, which you shouldn't be able to obtain, even in principle, by simply enumerating across your enum.
If it's called an "enum" I want to be able to "enum"erate it.