Yes, although for use in an assertion like the one in TFA, you want SIGTRAP to cause a core dump when no debugger is attached. That's the default action, so you must not SIG_IGN it.
In priority order:
- msvc: __debugbreak()
- clang: __builtin_debugtrap()
- linux (and probably some other posix-ish systems): raise(SIGTRAP)
- gcc x86: asm("int3; nop")
- otherwise: you're out of luck, just abort()/__builtin_trap()
Sadly it does not work at all, as it was a joke :/.
But I did think about how it could work: by periodically sampling the instructions the current thread is executing and detecting trivial infinite loops that way.
With more effort, more complicated infinite loops could be detected, but never all of them, due to the halting problem.
edit: Actually, maybe the halting problem is not (theoretically) involved if you have a lot of memory: take a snapshot of the full machine state after every instruction. Once you take a snapshot identical to one you have already taken, you have found a loop. However, you would still need to somehow account for (emulate and include in the snapshot?) or disallow external I/O, such as the time of day.
Are you sure? The article makes the point that the nop is actually required for this to work in GDB because the instruction pointer might otherwise point at an entirely different scope.
I have to admit I didn't try it out though. Maybe this changed in the meantime and it is not needed anymore.
// Q: Why is there a __nop() before __debugbreak()?
// A: VS' debug engine has a bug where it will silently swallow explicit
// breakpoint interrupts when single-step debugging either line-by-line or
// over call instructions. This can hide legitimate reasons to trap. Asserts
// for example, which can appear as if they did not fire, leaving a programmer
// unknowingly debugging an undefined process.
(This comment has been there for at least a couple of years, and I don't know if it still applies to the newest version of Visual Studio.)
Regular POSIX programmers might scoff at the very idea that this is an issue, but I always found it rather tedious having to piss about just to get back to the actual place where the break occurred so that you can inspect the state. The whole point of an assert is that you don't expect the condition to happen, so the last thing I want is to make things any more fiddly than necessary when it all goes wrong.
If you're about to fail an assertion (which is semantically similar to a SEGV), you probably shouldn't care about messing up signal handling anymore ;)
FWIW the actual answer is that libraries should be good at reporting / returning errors to the calling application in a reasonable manner instead of tripping assert()s or crashing the entire process. Which makes the question moot because ideally the library has few asserts to begin with.
Asserts are there to make your code more sensitive to defects, for example by checking function invariants. Ideally the library has more of them, not fewer, since they show that care and thought have gone in. They should never be used to handle errors.
Another feature of asserts is that `-DNDEBUG` disables them.
Sorry, yeah, I'm just living in a world where asserts being misused to "handle errors" is unfortunately rather common. My argument is specifically against that kind of assertion, I should've been more clear.
When inside a debugger, requests to ignore `SIGTRAP` have no effect, unlike other signals. That's because this is literally what `SIGTRAP` is for.