It's actually not a typo. Our "real" internal code starts with integer bounds on the inputs (say 2^26) and then, for each subexpression, computes how many bits are needed to represent it exactly. That can even yield fractional bit counts (e.g. "a + b + c" with 26-bit inputs needs 26 + log2(3) ≈ 27.6 bits). The generated code then rounds up to the next 64-bit multiple.
It's basically: long* and long long* (the pointer types) are not compatible, and uint64_t is the "wrong" typedef on Linux, or at least inconsistent with how the intrinsics are declared.
We use them for exact predicates in our mesh booleans library. To really handle every degenerate case we even have to go well beyond 128 bits in 3D.
long and long long are convertible, that's not the issue.
They are distinct types though, so long* and long long* are NOT implicitly convertible.
And uint64_t is not consistently the correct type.