As I am slowly acquiring the habits of thought that make it possible to read and write FP code at a reasonable speed, I'm horrified by how much idiomatic Scala FP code relies on projecting certain assumptions onto types and operations that seem, at first glance, to be neutral mathematical abstractions.
For example, I find myself hating the Either type, because I feel like there is a socially established convention that one half of the Either is the type that matters, the value that you want, the value that is the point of the computation you're doing, and the other half is a garbage value that should never be directly handled. So I really feel like I should conform to the convention and reserve Either for cases where one of the possible types doesn't matter. But how often is it true that one side of the Either doesn't matter? People want me to encode success/failure in an Either type, but if I do that, are they going to treat failure with the care it deserves?
I often handle Either (and Option) using pattern matching when I feel it's important to give both code paths equal importance and equal visibility in the code, but people change it because flatMap is supposedly more idiomatic, and they believe that eliminating pattern matching from their code is a sign of sophistication.
I feel like this stems from a strong desire among FP folks for the happy path to be the only one visible in the code, and the non-happy path to work by invisible magic. Maybe there are some brilliant programmers who achieve this by careful programming, but there are people mimicking them who seem to rely more on faith than logical analysis. They just flatMap their way through everything and trust that this results in correct behavior for the "less important" cases.
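To make that concrete, here is a toy sketch of the two styles I mean (the function and values are invented):

// A made-up parsing step, just for illustration (Scala 2.13+).
def parsePort(s: String): Either[String, Int] =
  s.toIntOption.toRight(s"not a number: '$s'")

// The style I reach for: both outcomes are equally visible in the code.
def describe(input: String): String =
  parsePort(input) match {
    case Right(port) => s"listening on $port"
    case Left(err)   => s"falling back to 8080 because: $err"
  }

// The style I get reviewed towards: only the happy path is spelled out,
// and by the end the error message has quietly disappeared.
def describeIdiomatic(input: String): String =
  parsePort(input)
    .map(port => s"listening on $port")
    .getOrElse("falling back to 8080")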
I'm sorry that this turned into a bit of a rant, but I'm entirely fed up with it, and it accounts for a lot of what I dislike about the code I work with on a daily basis.
This was great, actually.
I don't program in Scala, but it was very interesting to hear about the difference between types as abstractions vs types as they are used.
For unfamiliar topics or when presented with uncommon insight, I believe rants, monologues, even diatribes are actually some of the best things to read.
> For example, I find myself hating the Either type, because I feel like there is a socially established convention that one half of the Either is the type that matters, the value that you want, the value that is the point of the computation you're doing, and the other half is a garbage value that should never be directly handled. So I really feel like I should conform to the convention and reserve Either for cases where one of the possible types doesn't matter. But how often is it true that one side of the Either doesn't matter? People want me to encode success/failure in an Either type, but if I do that, are they going to treat failure with the care it deserves?
There's always a tradeoff between making the happy path clear and making the error handling explicit. The whole point of Either is to be a middle ground between "both cases are of equal weight and you handle them by pattern matching" (custom ADTs) and "only the happy path is visible, the error path is completely invisible magic" (exceptions). Given that people in Python or Java tend to use exceptions a lot more than they use datatypes, I'd argue that a typical Scala codebase puts more emphasis on actually handling errors than a typical codebase in other languages.
Where each case really is of equal weight, consider using a custom datatype (it's only a couple of lines: sealed trait A, case class B(...) extends A, case class C(...) extends A) rather than Either.
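Something like this sketch (the domain names are invented):

// A custom datatype where neither case is the "garbage" side; both must be handled.
sealed trait LookupResult
case class Found(userId: Long)     extends LookupResult
case class Missing(reason: String) extends LookupResult

def render(result: LookupResult): String = result match {
  case Found(id)       => s"user $id"
  case Missing(reason) => s"no user: $reason"   // the compiler forces both branches
}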
I've been working through 'Haskell From First Principles', and it turned on a lightbulb: the 'right' half of Either is the 'important' one because its type variable is free to change.
instance Functor (Either a) where   -- a is fixed here!
  -- fmap specialises to: (b -> c) -> Either a b -> Either a c
  fmap _ (Left l)  = Left l
  fmap f (Right r) = Right (f r)
As a general rule, the last type parameter of a type carries special significance: the same applies to Tuples.
You _can_ trivially construct a type where the two labels are swapped; they're just labels. Left and Right aren't intrinsically important, except insofar as they reflect the positions of the type arguments in written text.
This is also the reason we have the convention in Scala - type inference partially applies the type constructor in the same way, fixing the earlier parameters and leaving the last one free. But I agree with the parent post that a more descriptive name would be better.
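Concretely, the standard library's Either has been right-biased since Scala 2.12, so it mirrors the Haskell instance above: map and flatMap keep the Left type fixed and only transform the Right.

// Scala 2.12+: Either is right-biased, so the Left type parameter stays fixed
// and map/flatMap only ever transform the Right side.
val ok:  Either[String, Int] = Right(2)
val bad: Either[String, Int] = Left("boom")

ok.map(_ + 1)   // Right(3)
bad.map(_ + 1)  // Left("boom") -- passed through untouched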
Right; ostensibly you could create a language that lets you easily poke holes in any slot of a type, but I'm not sure you necessarily /gain/ a lot in doing so except for confusion. It would take a lot more convolution to specify types and instances for every function application.
> I often handle Either (and Option) using pattern matching when I feel it's important to give both code paths equal importance and equal visibility in the code, but people change it because flatMap is supposedly more idiomatic, and they believe that eliminating pattern matching from their code is a sign of sophistication.
Isn't this the same as letting exceptions bubble up in a non-FP language?
You don't necessarily lose the stack trace. Typically the left side of an Either is an Exception (or an error ADT that wraps one). When you want to handle the left case, you can log out the full trace as you would without Either.
The Monad instance for Either means that chaining them together with flatMap has a short-circuiting effect and the first failure will stop the rest of the chain from being evaluated. I find this actually makes it easier to know where your errors are happening, and also allows you to centralise your error handling logic.
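A small sketch of what that looks like (the step functions are made up; any real exception would do):

import scala.util.Try

// Each step keeps the original Throwable (and its stack trace) on the Left.
def parse(s: String): Either[Throwable, Int]   = Try(s.toInt).toEither
def divideInto(n: Int): Either[Throwable, Int] = Try(100 / n).toEither

// flatMap short-circuits: if parse fails, divideInto never runs.
def compute(s: String): Either[Throwable, Int] =
  parse(s).flatMap(divideInto)

// Error handling lives in one place, with the full trace still available.
compute("oops").fold(err => err.printStackTrace(), n => println(n))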
Sure - you can use an implicit source location (srcloc) or capture the Exception, both of which preserve it; but that's not the default behaviour, and it's not what we recommend to beginners.
If you go to the Scaladoc for Either today, you see a stringly-typed Either where the Exception is discarded.
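Roughly the difference between these two shapes (a sketch, not the actual doc example):

import scala.util.Try

// Stringly-typed: the exception is flattened to its message and the trace is gone.
def parseStringly(s: String): Either[String, Int] =
  Try(s.toInt).toEither.left.map(_.getMessage)

// Keeping the Throwable: whoever handles the Left still has the full exception.
def parseKeeping(s: String): Either[Throwable, Int] =
  Try(s.toInt).toEither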
Hmm, when I first learnt Scala I didn't have very advanced FP knowledge, so I have yet to get first-hand experience with this sort of exception handling, and I have yet to decide how good it is.
Compared to Haskell it is probably better in some ways, because you have the proper stack trace; but it “feels” a bit impure...
In a way, Java’s exceptions are already an Either type of the result type and the thrown Exception (with “auto-decomposition”, unless they're checked exceptions). Are advantages like manual control over when mapping/flatMapping happens worth it, in your opinion?
Nonetheless thanks for the heads up, I might try out Scala again with the exception handling model you mentioned!
It's supposed to work like that, but it's a lot easier to screw up. Smart people screw it up all the time, and it's hard to spot in code review, whereas average programmers have no problem avoiding swallowing exceptions once they realize it's important, and if they do mess up it stands out like a sore thumb in the code.