Hacker News | PhilipTrettner's comments

It's actually not a typo. Our "real" internal code starts with integer bounds on the inputs (say 2^26) and then computes, for each subexpression, how many bits are needed to represent it exactly. That can even yield fractional bit counts (as in "a + b + c"). The generated code then rounds up to the next multiple of 64 bits.
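The bound propagation described above can be sketched as follows. This is an illustrative toy, not the actual generator; the helper names `bits_needed` and `words_needed` are made up for this example:

```cpp
#include <cmath>
#include <cstdint>

// Given an upper bound on |x|, the exact bit width of a subexpression
// is log2 of its bound, which need not be an integer. For a + b + c
// with |a|,|b|,|c| <= 2^26 the bound is 3 * 2^26, i.e. about 27.58
// bits; the generated code rounds up to the next multiple of 64.
double bits_needed(double bound) { return std::log2(bound); }

int words_needed(double bound) {
    // round the (possibly fractional) bit count up to whole bits,
    // then up to whole 64-bit words
    return (int)std::ceil(std::ceil(bits_needed(bound)) / 64.0);
}
```

So "a + b + c" on 26-bit inputs still fits comfortably in a single 64-bit word.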


See https://godbolt.org/z/bYb7a38dG

It's basically: long* and long long* (the pointer types) are not compatible, and uint64_t is the "wrong" typedef on Linux, or at least inconsistent with the way the intrinsics are defined.
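A minimal sketch of the mismatch, assuming LP64 Linux where uint64_t is typically `unsigned long` while the carry intrinsics take `unsigned long long*` (the wrapper name `add_with_carry` is hypothetical):

```cpp
#include <immintrin.h>
#include <cstdint>

// On LP64 Linux glibc, uint64_t is usually unsigned long, but
// _addcarry_u64's out-parameter is unsigned long long*. The two
// pointee types have identical layout yet are distinct, so passing
// a uint64_t* directly is ill-formed; a local of the intrinsic's
// exact type (or a cast) is needed.
uint64_t add_with_carry(uint64_t a, uint64_t b, unsigned char* carry_out) {
    unsigned long long out;  // matches the intrinsic's signature
    *carry_out = _addcarry_u64(0, a, b, &out);
    return static_cast<uint64_t>(out);
}
```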


We use them for exact predicates in our mesh booleans library. To really handle every degenerate case we even have to go quite a bit higher than 128 bits in 3D.


Sorry I'm a bit late to the party.

long and long long are convertible, that's not the issue. They are distinct types though, so long* and long long* are NOT implicitly convertible. And uint64_t is not consistently the correct type.

See: https://godbolt.org/z/bYb7a38dG

I'd prefer it if the intrinsics used the same uint64_t, but they don't.


