Interesting article. I agree that this double language phenomenon of "biformity" can be a source of complexity, but I actually think the majority of complexity comes from one level higher: the paradigm, as the paradigmatic level is ultimately where assumptions about the modeling of problem spaces lie.
The go example in the article is actually an instance of this: kubernetes went ahead and implemented an oop system in golang—why? because they felt go's assumption about the problem space (that it can be modeled and solved as imperative programs) was not a good fit for their actual problem space.
Haskell's assumption of purity leads to a problem/solution space that's a good fit for problems that are primarily themselves pure and mathematical, but leads to complexity when it comes to having to solve problems that are not in this space (having to use monads or effects for io)
Java's problem/solution space assumes you can model everything as objects and classes and runs into complexity when we attempt to use it for problems that are actually better modeled by other means.
Many languages that are "multiparadigm" or "general purpose" really have an underlying problem/solution model driving the organization of programs. When our particular problem is not a good fit for this model, we have to contort and wind up spending more time dealing with language constructs themselves than actually expressing our problem and solution. Couple this with the fact that languages have different performance properties, which may also be a constraint you need to satisfy and things get...complicated. A lazy pure language might be the best modeling system for your problem (e.g. dealing with infinite sequences) but a non-starter due to memory constraints (not enough resources).
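To make the infinite-sequence point concrete, here's the sort of thing I mean (just a toy Haskell sketch, not from any real codebase):

    -- Laziness lets you define the entire infinite sequence up front
    -- and only pay for the elements you actually consume.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    firstTen :: [Integer]
    firstTen = take 10 fibs  -- [0,1,1,2,3,5,8,13,21,34]

Beautiful as a model, but the same laziness is exactly what can sink you when memory is the binding constraint.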
> kubernetes went ahead and implemented an oop system in golang
I don’t think this was ever an objective. Can you clarify what you mean by “oop system” and where we implemented it?
We aggressively used composition of interfaces, up to a point, in the “go way”, but certainly in terms of serialization modeling (where I am heavily accountable for the early decisions) we also leveraged structs as behavior-less data. Everything else was more “whatever works”.
> why? because they felt go's assumption about the problem space (that it can be modeled and solved as imperative programs)
You’d have to articulate which subsystems you feel support this statement - certainly a diverse set of individuals were able to rapidly adapt their existing mental models to Go and ship a mostly successful 1.0 MVP in about 12 months, which to me is much more the essential principle of Go:
> Unknown to most, Kubernetes was originally written in Java. If you have ever looked at the source code, or vendored a library you probably have already noticed a fair amount of factory patterns and singletons littered throughout the code base. Furthermore Kubernetes is built around various primitives referred to in the code base as “Objects”. In the Go programming language we explicitly did not ever build in a concept of an “Object”. We look at examples of this code, and explore what the code is truly attempting to do. We then take it a step further by offering idiomatic Go examples that we could refactor the pseudo object oriented Go code to.
I have a lot of respect for Kris but in this context, as the person who approved the PR adding the “Object” interface to the code base (and participated in most of the subsequent discussions about how to expand it), it was not done because we felt Go lacked a fundamental construct or forces an imperative style. We simply chose a generic name because at the time “not using interface{}” seemed to be idiomatic go.
The only real abstraction we need from Go is a zero-cost serialization framework, which I think is an example of where having that would compromise Go’s core principles.
I have nothing against go or kubernetes, I was simply citing the linked article in the thread, which at one point states:
> [Golang] Kubernetes, one of the largest codebases in Golang, has its own object-oriented type system implemented in the runtime package.
I would agree that Golang is overall a great language well-suited for solving a large class of problems. The same is true for the other languages I cited. This isn't about immutable properties of languages so much as it is about languages and problems fitting or not fitting well together.
That’s fair - as the perpetrator of much of that abstraction I believe that highlighting the type system aspect of this code obscures the real problem we were solving for, which Go is spectacularly good at: letting you get within inches of C performance with reasonable effort for serialization and mapping versions of structs.
I find it amusing that the runtime registry code is cited as an example of anything other than that it’s possible to overcomplicate a problem in any language. We thought we’d need more flexibility than we actually needed in practice, because the humans involved (myself included) couldn’t anticipate the future.
Our bottleneck was serialization of objects to bytes to send to etcd. Etcd CPU should be about 0.001 to 0.01 of control plane CPU, and control plane apiserver CPU has been dominated by serialization since day 1 (except for brief periods around the time we introduced inefficient custom resource code, because of the generic nature of that code).
Huh, I'd have guessed that distributed synchronization of etcd (which goes over the network and has multiple events, and uses raft, which has unbounded worst case latency) would be the limiting step.
Speaking as a Haskell programmer, this is not a problem. You put “IO” in the function signature and “do” in the body, and that’s workable. Or some other Monad. You get so many choices, which is its own problem, but “having to use monads” is not a problem in practice.
There are other problems that make Haskell hard to work with. This just happens to not be it.
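To be concrete, "put IO in the signature and do in the body" looks roughly like this (a throwaway sketch, names made up):

    -- The IO in the type is the only ceremony; the body reads like
    -- ordinary imperative code.
    greet :: IO ()
    greet = do
      putStrLn "What's your name?"
      name <- getLine
      putStrLn ("Hello, " ++ name)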
Sure, it was just an off-the-cuff example. And while I agree that monads are not horrible by any means, I think you are oversimplifying a bit. Monad transformers, the mtl library, etc. all exist after all... Even if using monads is relatively painless, I wouldn't say that it's easier than using a language that allows you to freely execute side effects wherever, at least for programs that are highly interactive.
`IO` is all you need if you want to freely execute side effects wherever. `mtl` and friends are for when you want to restrict which effects are available where.
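Hand-wavy sketch of the difference (function names invented for the example): an `IO` signature lets a function do anything at all, while an `mtl`-style constraint pins down exactly which effects it may use:

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.State (MonadState, modify)

    -- Free to perform any side effect at all: files, network, whatever.
    doAnything :: IO ()
    doAnything = putStrLn "unrestricted"

    -- May only read/modify an Int of state; the type rules out IO entirely.
    bumpCounter :: MonadState Int m => m ()
    bumpCounter = modify (+ 1)

Same library either way; the signature is just documenting (and enforcing) the function's effect budget.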
Haskell does let you freely execute side effects. It just asks you to label any function that does so. People get tripped up by it because it's different, but in practice, it's absolutely trivial.
The problem with monads is that they’re a term from abstract algebra. Haskell’s documentation talks about them from that perspective (including monad laws), which makes them quite natural for anyone who’s studied some abstract algebra in university.
Unfortunately, most programmers trying to learn Haskell have not taken any abstract algebra courses, so these definitions and laws are deemed insufficient. Regular people want to know the exact nature of a thing — not just a list of mathematical properties that hold — and so they insist that there must be something more. But there’s nothing there!
The reason that there are so many tutorials and analogies is because monads are relatively easy to understand, but unfamiliar, and they allow you to do a bunch of math (which you don’t have to do). Not hard enough to cause Haskell programmers any problems.
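For what it's worth, the "exact nature of the thing" really does fit on a napkin; here is the interface plus the three laws, written from memory rather than copied from any official doc:

    -- The whole interface:
    --   class Applicative m => Monad m where
    --     return :: a -> m a
    --     (>>=)  :: m a -> (a -> m b) -> m b
    --
    -- And the laws every instance is expected to satisfy:
    --   return a >>= f   =  f a                       -- left identity
    --   m >>= return     =  m                         -- right identity
    --   (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)   -- associativity

That's it. Everything else is instances and library conventions.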
I'm not who you asked but I think the clearest example is lazy evaluation. Anecdotally speaking as someone who's helped teach a few classes in Haskell, students often have problems with laziness until they really adapt to the functional paradigm, and even then continue to have some problems.
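The canonical stumbling block, for anyone who hasn't hit it yet (small sketch):

    import Data.List (foldl')

    -- Looks innocent, but lazily builds a chain of (+) thunks as long
    -- as the list before forcing any of them, so it can blow up on
    -- large inputs:
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- The strict left fold forces the accumulator at each step and
    -- runs in constant space:
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

Students who write the first version and watch it eat memory tend to remember the lesson.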
The other common problem I see is not actually a Haskell problem but merely a problem Haskell exacerbates: many students fail to think through their types prior to starting an implementation. I frequently had students in office hours trying to force their way through a problem and when questioned about the types were unable to reason at the type level intuitively. I suspect this reflects larger problems in my university's undergraduate program, but I figured I'd comment on it anyway.
> good fit for problems that are primarily themselves pure and mathematical, but leads to complexity when it comes to having to solve problems that are not in this space (having to use monads or effects for io)
I've only heard that from people who haven't written Haskell or have written 1 app without soliciting code review.
lol. people get so upset when even very gentle criticism is leveled at their pet language (which in large part is positive! I literally said Haskell is a great fit for a whole area of application)
I have written a lot of haskell and it is my favorite language. It is a great language for domains that can be mathematically modeled, and such applications can be very practical (e.g. programming language compilers). It is objectively better than pretty much every other language at this. As I also mentioned, Haskell is likewise great for anything that naturally benefits from lazy execution. That said, is it as convenient to write certain classes of applications in haskell? No. I think you'd have to be a delusional fanboy to argue otherwise. You can be a big fan of a language and its ideas while still recognizing that it is not an all-encompassing solution or the best for every possible problem. This language fanaticism mindset is neither rational nor what you want from engineers, who should be deciding what language to use based on how well its properties fit constraints, not because it's their personal favorite.
> That said, is it as convenient to write certain classes of applications in haskell?
Certain classes, of course. GPUs, high-performance computing, compile-to-JavaScript, GUIs: Haskell isn't terrible at these, but there are better alternatives.
But usually what people mean when they go on about how Haskell is "mathematical" and "not pragmatic" is that it's worse than C/Java/Python for ordinary general purpose programming: CRUD, web backends, command line tools, etc. And this simply isn't true.
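Not that one snippet settles anything, but a bread-and-butter command line tool in Haskell is about as boring as you'd hope (toy example, invented behaviour):

    import System.Environment (getArgs)

    -- Print a line count for every file named on the command line.
    countLines :: FilePath -> IO ()
    countLines f = do
      contents <- readFile f
      putStrLn (f ++ ": " ++ show (length (lines contents)) ++ " lines")

    main :: IO ()
    main = do
      files <- getArgs
      mapM_ countLines files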
> people get so upset when even very gentle criticism is leveled at their pet language (which in large part is positive! I literally said Haskell is a great fit for a whole area of application)
To a real-world programmer, your comment is just another "Haskell isn't good for real-world IO-heavy problems" argument that ironically doesn't hold up in the real world.
Your concession of "good for pure and mathematical" just evokes "ah yeah, the ivory tower language unsuitable for real world applications". It ends up making your view seem balanced and authoritative while bolstering your overall point that you shouldn't use Haskell for serious non-academic things.
Haskell isn't my pet language, it's the language that pays my rent :)
> Haskell's ... complexity when it comes to having to solve problems that are not in this space (having to use monads or effects for io)
Absolutely the opposite in my experience! Haskell is the best language for effect-heavy code precisely because of the fine-grained control over effects that it provides.
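A sketch of what I mean (class and function names invented for the example): write the effect-heavy logic against a small interface, and the signature tells you exactly what the function is allowed to do; real IO in production, a pure stand-in in tests:

    class Monad m => MonadLog m where
      logLine :: String -> m ()

    -- Production instance: log straight to stdout.
    instance MonadLog IO where
      logLine = putStrLn

    -- The business logic can log and nothing else; that is its whole
    -- effect budget, visible in the type.
    process :: MonadLog m => Int -> m Int
    process n = do
      logLine ("processing " ++ show n)
      pure (n * 2)

A pure test instance slots in the same way, so the effect-heavy path and the test path share the exact same code.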
This is why I think devs should know multiple languages, and at least one language from each major paradigm. You don't want a toolbox with only one kind of tool in it.
> Java's problem/solution space assumes you can model everything as objects and classes and runs into complexity when we attempt to use it for problems that are actually better modeled by other means.
That's an old understanding by now. Just use interfaces and forget about inheritance.