I think exceptions are bad not in principle, but because code is written by humans.
We use Go in our company and every intern learning it complains about the error handling at first.
Then they write a bunch of programs, and those basically never crash. I had an intern write a somewhat complex service processing large amounts of geodata, which was his first Go service ever. That thing ran for years without crashing once (it eventually died due to a hardware problem).
The reason is simple: Go's stupidly verbose error handling forces you to constantly think about the unhappy paths. Every function is littered with error handling, which makes you aware of possible problems and rigorously keeps you from forgetting them.
Exceptions are the opposite: people do whatever and have someone else deal with the fallout later.
This leads to people not thinking about errors, and eventually to catch(all) -> doNothing kinds of "solutions".
On top of that, exception-based error handling is what I call transparent (as in invisible): you can write a function that uses some other function without being aware that it can throw (unless your language uses checked exceptions, like Java).
One cannot possibly know every detail and keep it all in mind constantly. So when something goes wrong, you end up with super deep stack traces in unexpected places. This is unlike Go code, where the error is usually handled at the earliest possible point, with (hopefully) some reasoning and a contingency.
> The reason is simple: Go's stupidly verbose error handling forces you to constantly think about the unhappy paths.
> Exceptions are the opposite: people do whatever and have someone else deal with the fallout later. This leads to people not thinking about errors, and eventually to catch(all) -> doNothing kinds of "solutions".
Ha! One shop I worked at early in my career considered that kind of silent null check (in .NET, where the runtime throws instead of crashing if you attempt a null dereference) the height of software craftsmanship. They bragged about remembering to do it at critical junctures and made fun of other teams that didn't do it.
Brilliant! No exception, no problem! This was great - except it was slowly discovered that features were silently broken for months on end until users directly complained about them. Then things were still great because the manager declared that no error messages showed up anywhere, so nobody got scared, so nothing was wrong.
They were right, in a sense: the same manager had hand-built a SOAP-based logging solution with a bespoke UI that required us to explicitly log everything, instead of just dumping to the regular sink and having an aggregator consume it for later viewing in Splunk or the like. We only got to see stack traces by RDPing into the specific server and looking for them - we'd never have seen these exceptions regardless!
> Go's stupidly verbose error handling forces you to constantly think about the unhappy paths. Every function is littered with error handling, which makes you aware of possible problems and rigorously keeps you from forgetting them.
That reminds me of this [1]:
> However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing. When it gets to be too much to take, figure out how to factor blocks out into pure functions (and don't let them slide back into impurity!).
We had one case where a developer realized his algorithm had a severe reliability problem when he found he had no idea how to handle one specific error returned by one of the functions it called.
I remember us discussing that if it had been written in PHP, we'd have completely missed the problem (we were in the process of switching from PHP to Go), because with exceptions it wouldn't have been as obvious.
I originally only wanted to point out that the cited benchmark doesn't actually run the same thing for both cases (`do_fib_throws(...) + do_fib_throws(...)` has no defined call order), but then I looked at the assembly and noticed that the two versions are structured very differently. It turned out that GCC recognized only `do_fib_throws` as eligible for tail calls and did some extra inlining; adding `noinline` and `optimize("no-optimize-sibling-calls")` attributes reduced the gap to a more believable level (~50%). As tail calls are highly sensitive to the exact call sequence, this benchmark is not suitable for the claim without a more detailed analysis.
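To make that concrete, here is a minimal sketch (not the benchmark's actual code; the overflow check is my own assumption) of the two changes described above: GCC attributes that disable inlining and sibling/tail-call optimization for `do_fib_throws`, and named temporaries that force a defined call order instead of the unsequenced `do_fib_throws(...) + do_fib_throws(...)`:

```cpp
#include <cstdint>
#include <stdexcept>

// Keep the function out of line and forbid sibling/tail-call optimization so
// both benchmark variants are compared with the same call structure.
__attribute__((noinline, optimize("no-optimize-sibling-calls")))
std::uint64_t do_fib_throws(std::uint64_t n) {
    if (n > 93) throw std::overflow_error("fib(n) overflows uint64");  // assumed error case
    if (n < 2) return n;
    // Named temporaries give the two recursive calls a defined order.
    const std::uint64_t a = do_fib_throws(n - 1);
    const std::uint64_t b = do_fib_throws(n - 2);
    return a + b;
}

int main() {
    return static_cast<int>(do_fib_throws(20) % 256);
}
```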
Yes, result types may result in worse branch prediction, among other things. But that is rarely the primary performance issue they cause, since you'd expect the "unexpected" branch to be rarely taken anyway. The actual performance issue simply comes from the fact that they put more complex code on the typical path, which may confuse a less sophisticated optimizer and prevent otherwise possible optimizations, just like above.
As a counterargument, here's a good blog post (written for a Rust-centric audience) that gets into the technical downsides of exceptions (not the "error handling should/shouldn't be noisy" debate that's fundamentally subjective and unresolvable): https://smallcultfollowing.com/babysteps/blog/2024/05/02/unw...
This article feels like someone trying to find arguments and justifications in favour of an existing opinion/bias.
> Compare this to functional-style errors, where error handling is manual and super tedious. You have to explicitly check if the return value is an error and propagate it.
No, you don't. You can easily convert a functional-style error into an exception at the call site explicitly (e.g. `std::optional<T>::value` in C++, or `.unwrap()` in Rust), or propagate it via language features (e.g. `?` in Rust) or library abstractions such as monadic operations.
The fact that you have to explicitly check the return value is a massive win for readability and for ensuring that the "error case" was considered by the caller. That is paramount for the robustness of any codebase.
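As a minimal C++ sketch of that point (function names are hypothetical): a functional-style error can be checked explicitly, or explicitly converted into an exception at the call site when the caller decides that's appropriate.

```cpp
#include <charconv>
#include <optional>
#include <string>

// Returns the parsed port, or std::nullopt on any parse or range failure.
std::optional<int> parse_port(const std::string& s) {
    int v = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), v);
    if (ec != std::errc{} || ptr != s.data() + s.size() || v < 0 || v > 65535)
        return std::nullopt;
    return v;
}

int port_or_default(const std::string& s) {
    // Explicit check: the caller is forced to consider the error case.
    if (auto port = parse_port(s)) return *port;
    return 8080;
}

int port_or_throw(const std::string& s) {
    // Explicit opt-in to exception-style propagation:
    // std::optional<T>::value() throws std::bad_optional_access when empty.
    return parse_port(s).value();
}
```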
> But there’s much more: Allocations can fail, the stack can overflow, and arithmetic operations can overflow. Without throwing exceptions, these failure modes are often simply hidden.
Turns out there is a good use case for exceptions: exceptional and rare errors that cannot be reasonably handled in the immediate vicinity of the call site.
Exceptions and error types can and should coexist nicely.
> The classic example is syscalls, which usually follow C conventions.
Yes, C error handling from decades ago sucks at providing information and context for the source of the errors. This is not an intrinsic issue with error return types, it's just another thing that sucks about C and is the way it is because it's old.
> We parse an int somewhere, and an `IntErrorKind::InvalidDigit` bubbles up at the user.
How is that a bad thing? Either the user provided the string that was being parsed, in which case it's useful information for them to know why it failed to parse, or they don't care much, and they can explicitly decide to convert the error into an exception and propagate it upwards.
Again, it forces the user to think about the error case, which is excellent!
> The following examples use C++ code, which allows us to compare both versions like for like: [...]
Now show a benchmark where the error rate is 50% or more.
> C++ code ... Now show a benchmark where the error rate is 50% or more.
C++'s policy of "exceptions should be exceptional" isn't a good answer to this either, and it introduces a lot of ugly edge cases when writing code. For instance, you have to make a judgment call about whether to handle commonly expected errors as return values or as exceptions (e.g. checking whether something exists, and returning it if it does).
In many cases it is impossible to make the right call. For instance, a "file not found" exception is no big deal when your library is being used in a GUI app where user actions are relatively infrequent, but if your library is used in a high-traffic server or for batch processing, suddenly all those "file not found" exceptions are eating a lot of CPU.
In general, you simply can't predict when users of your C++ code might find a case where a code path spams exceptions, tanking performance.
This problem also exists in other exception-using languages like Python and C#[0], but these languages tend to be used for less performance-intensive purposes (especially Python) so it doesn't come up as much, and generally exceptions are used even for expected errors.
Some of the details in the linked SO post are 15 years old(!).
Performance today is a very different story :)
With that said, exceptions are still very expensive: they cost about 6-7µs in .NET 8 and 3-4µs in the upcoming .NET 9. The cost grows if catch blocks, catch-with-filter blocks, or multiple finally blocks are present and need to be executed up the call stack.
When it comes to I/O, the cost of a "file not found" exception will be minuscule compared to reaching into the OS I/O stack. Luckily, many codebases today adopt one of the community packages that expose potentially failing operations as Result<T, E> instead. For some reason, adoption is highest in line-of-business code; lower-level codebases often just use Try* or T? patterns instead.
That's still roughly 3,000-7,000 times slower than a plain return. I'm not convinced the extra overhead of an entirely separate control-flow mechanism will ever be worth it in any language that cares about performance, especially since I haven't heard many complaints about the way Rust does things. And there's still room for new idioms, syntactic sugar, and tweaks to the type system to optimize the convenience/rigor of that model.
> When it comes to I/O, the cost of "file not found" ...
That was just an example. Maybe that code is just a read from a (cached-in-memory) SQLite DB, or a "value out of range" error before some math operation, in which case the exception can easily be the most expensive part, especially if all the exception junk dirties up the CPU cache and branch predictor.
> Some of the details in the linked SO post are 15 years old(!).
Sigh. Thanks for the correction. You'd think I would have a habit of checking the post date by now!
> That's still roughly 3,000-7,000 times slower than a plain return. I'm not convinced the extra overhead of an entirely separate control-flow mechanism will ever be worth it in any language that cares about performance, especially since I haven't heard many complaints about the way Rust does things.
Ah, I did not mean to make a case for or against exception-based error handling. Only that it's not as expensive anymore, and that other operations do not necessarily use it: int.TryParse, dict.TryGetValue, encoding.TryGetBytes, etc.
I think the way Rust does it, with Result<T, E> and, most importantly, implicit returns, is error handling perfected. You can criticize the language for being on the more verbose side, especially if you are not writing an OS kernel or a driver, but in terms of error handling while preserving rich error state, it is by far the best per LOC.
One thing to note is that, as usual, Rust-style error handling is not free either: you pay with at least a single branch on the happy path for each call that may error out (provided it isn't inlined and the error check isn't optimized away), plus the additional codegen needed for the blocks that, e.g., construct a particular error and return it, something that exception handling does "outside" of the regularly executed code (yes, at a disproportionately higher cost, but still).
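Not Rust, but the same shape can be sketched with C++23's std::expected (names here are hypothetical): every fallible call on the happy path carries an explicit check/branch plus the codegen for constructing and returning the error, which is roughly what `?` expands to in Rust.

```cpp
#include <expected>
#include <string>

enum class Error { NotFound, Malformed };

std::expected<std::string, Error> read_config(const std::string& path) {
    if (path.empty()) return std::unexpected(Error::NotFound);  // placeholder logic
    return std::string("42");
}

std::expected<int, Error> parse_threshold(const std::string& text) {
    if (text.empty()) return std::unexpected(Error::Malformed);  // placeholder logic
    return 42;
}

std::expected<int, Error> load_threshold(const std::string& path) {
    auto text = read_config(path);
    if (!text) return std::unexpected(text.error());    // branch + error construction
    auto value = parse_threshold(*text);
    if (!value) return std::unexpected(value.error());  // branch + error construction
    return *value;                                       // happy path
}
```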
Go's errors were a response to exceptions in Java. Java exceptions failed in several key ways:
1) Exception catch sections have no lexical access to variables declared inside the try {} block. This was papered over a bit with the resource auto-closing (try-with-resources) syntax, but fundamentally an error-handling section of code should be able to introspect the state of the variables in the processing where the error occurred.
Yes, you can move the variable declarations out of the try {} block, but that is an annoyance: as you expand the error handling to be more robust, you must keep moving more and more local variables out of the block they lexically and semantically belong to (sketched below).
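The same limitation is easy to show in C++, whose try/catch scoping works the same way (file name and logic here are hypothetical): anything declared inside the try block is out of scope in the catch block, so any state you want the handler to report on has to be hoisted out first.

```cpp
#include <fstream>
#include <iostream>
#include <string>

void load(const std::string& path) {
    std::string last_line;  // hoisted out of the try block so the handler can see it
    try {
        std::ifstream in(path);
        in.exceptions(std::ios::failbit | std::ios::badbit);
        std::getline(in, last_line);
    } catch (const std::exception& e) {
        // `in` is not visible here; `last_line` is, only because it was hoisted.
        std::cerr << "failed to read " << path
                  << " (last line read: '" << last_line << "'): "
                  << e.what() << '\n';
    }
}
```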
2) The try {} block imposes a big cost on casual error handling around a method invocation: it forces an extra level of indentation (which consumes precious horizontal real estate and may force excessive breakup of what should be a compact piece of processing into more sub-functions), and it involves a visually polluting keyword along with the catch sections.
A lighter-weight, statement-level catch syntax would have been really nice in Java: it would have enabled more compact code that emphasizes the mainline processing, while visually placing the (hopefully less common) error processing off to the left or behind an indentation.
try {} should have been reserved (along with #1) for truly complicated exception flow handling.
Perhaps Visual Basic had good ideas with its error-handling blocks occurring after labels, although my recollection and experience with VB wasn't deep: I don't remember whether On Error blocks had lexical access to local variable state.
I think Roc has the perfect balance. It allows ignoring the errors but also allows you to structure the code in such a way that you're forced to handle all error cases.