Point 4 makes sense though; it would really help with polymorphism in Go without generics. This is the kind of subtle behavior that is upsetting at times, and I really think this is where the language could improve, definitely.
For every "quirk" mentioned in this article there are very good reasons for it, which the author lacks the imagination or patience to try and reason through. Point 4 is perhaps the best example.
So given an interface Foo and a struct FooImpl which implements Foo, why can't we pass a []FooImpl where a []Foo is expected? It's because the creators of Go are lazy and arrogant and incompetent and don't care, right?
No. Consider the situation where a []FooImpl is passed to a function F(f []Foo). This function decides to change one of the elements of the slice to a different implementation, say FooImpl2, e.g. f[0] = FooImpl2{}. Now the original slice of []FooImpl's has a FooImpl2 in it. Oops, we broke type safety. Now consider what it would take to make this work - you'd have to somehow guarantee that functions which take []Foo don't mutate the slice. Is that possible? Is it possible to do quickly, and still have a language that compiles very quickly? Or we could introduce "const" and all those things from C++, but that's a whole new level of complexity to the language. And what about the performance penalties? A struct has a different size and memory layout than an interface value, so what code should be emitted during compilation: something that can handle iterating over both []your_interface and []your_struct?
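To make that concrete, here's a minimal sketch using the hypothetical Foo / FooImpl / FooImpl2 / F names from above. It shows the explicit copy Go forces you to write, and why that copy is exactly what keeps F's mutation from corrupting the original []FooImpl:

package main

import "fmt"

type Foo interface{ Bar() string }

type FooImpl struct{}

func (FooImpl) Bar() string { return "FooImpl" }

type FooImpl2 struct{}

func (FooImpl2) Bar() string { return "FooImpl2" }

// F takes a []Foo and is free to store any implementation in it.
func F(f []Foo) {
    f[0] = FooImpl2{}
}

func main() {
    impls := []FooImpl{{}, {}}

    // The explicit conversion Go makes you write: copy into a fresh []Foo.
    // Because this is a copy, F's mutation cannot reach into impls.
    foos := make([]Foo, len(impls))
    for i, v := range impls {
        foos[i] = v
    }

    F(foos)
    fmt.Println(foos[0].Bar())  // FooImpl2
    fmt.Println(impls[0].Bar()) // still FooImpl
}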
So next time you think "Gosh, this bit of Go really sucks," take a moment and think about why it was designed that way. Because the fact is, it was almost certainly designed that way, as opposed to just overlooked or neglected. You may not agree with the choice that was made, but knowing the people who work on Go, I can guarantee that there was a conscious choice that was agonized over and discussed endlessly, just not with you in the room.
I wouldn't put it quite so strongly, but yes; Go makes design decisions that address non-obvious problems the designers encountered over several years of software development in other languages.
Consider the ban on unused imports. Now consider an application made of hundreds of .go files, that relies on hundreds of libraries, each of which could itself be made of hundreds of .go files, all of which change frequently enough that a total recompile may often be necessary (which might sound like a code base you can imagine Mr. Pike would be familiar with). It's actually a non-trivial amount of work to churn through and ignore all the unneeded imports.
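For anyone who hasn't run into it, this is the behavior in question (a minimal sketch); go build refuses to compile the file at all, reporting imported and not used: "fmt":

package main

import "fmt" // unused: the compiler rejects the whole file, it is not a warning

func main() {
    println("hello") // uses the built-in println, so "fmt" is never referenced
}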
"But that's sacrificing developer prototyping velocity to solve a problem that most developers never see; at most, it should be a feature toggled by a compiler flag", one might argue. Well, sure... Now consider how many C / C++ libraries cannot be compiled with -Wall -Werror because the original developer had the option of building the library without those flags enabled and not enough people want to learn all the subtle details of the languages they use, they just want "the code to compile and be done with it." Now consider what that does to the problem of trying to build larger projects from these tiny pieces that were never actually sanitized for unneeded imports... Which will inevitably happen to code that works well enough, etc.
It's a line drawn in the sand, but it's a line drawn in the sand from hard-won experience with how little projects grow into big projects.
Besides, the language is so small that it's not very hard to write wrappers for IDEs that can quickly identify and kill unused imports automatically (probably also add them when needed, like the most common "oh GOD I need to put 'fmt' back in just because I'm pen-testing the outputs in this code I'm debugging").
> I wouldn't put it quite so strongly, but yes; Go makes design decisions that address non-obvious problems the designers encountered over several years of software development in other languages.
Sorry to bring up yet another generics-ranty thing, but point 4 is actually a typical issue that can be nicely solved with generics, by defining a function that works on every type that implements Foo. In this way you can pass the slice of concrete values directly to the function and achieve type safety at the same time. As the author states, there's a demand for manually constructing an interface slice in Go, and this is a good example of why generics can be useful.
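As a rough sketch, using the type-parameter syntax Go eventually shipped in 1.18 (the names Foo, FooImpl, and PrintAll are just illustrative): the function is instantiated per concrete element type, so a []FooImpl is accepted directly and nothing can smuggle a FooImpl2 into it.

package main

import "fmt"

type Foo interface{ Bar() string }

type FooImpl struct{}

func (FooImpl) Bar() string { return "impl" }

// PrintAll accepts a slice of any concrete type whose elements implement Foo,
// so a []FooImpl can be passed directly, with no copy into a []Foo.
func PrintAll[T Foo](items []T) {
    for _, it := range items {
        fmt.Println(it.Bar())
    }
}

func main() {
    impls := []FooImpl{{}, {}}
    PrintAll(impls) // type-safe: T is inferred as FooImpl
}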
That's a different criticism. That's the "Go Needs Generics" issue.
I don't disagree with it, but I also agree that there haven't been any good solutions proposed for generics that meet the requirements at the core of Go: namely, performance of the compiled program, performance of the compiler, and performance of the developer.
Just monomorphize the code. It really isn't a problem for compilation performance. (I predict 10%-20% overhead based on measurements in other languages, though note that I think the entire concept of measuring the compilation speed overhead of generics is pretty much hopelessly flawed to begin with.)
The reason why I think generics are relevant to point 4 is that generics can be thought of as "first-class interfaces", which seem to be wanted by the author. Yes, the author suggests a method that is not acceptable to my taste (implicitly converting []struct to []interface), but if Go had generics he might not have complained about the lack of "first-class interfaces", because generics are in some ways "interfaces on steroids".
As for the core features of Go you mentioned, sorry again for my cynical wording, but I think they are a bit dishonest.
They stress the performance of the program, yet Go is slower than C/C++ due to the use of GC and dynamic dispatch all the time. Performance even regressed in Go 1.5[1], because the new GC prefers latency to throughput. So they traded performance for something considered more important.
How about the performance of the compiler? Well, the Go 1.5 compiler is ~10% slower than the previous one, because they rewrote the compiler in Go (it was previously written in C, for those who didn't know). Why did they do so? Because it has advantages: it reduces the barrier to contributing to the compiler, improves the code quality, etc. Yet they traded performance for something they consider to be important.
And the performance of the developer... While Go is not terrible in terms of developer experience, it isn't very good either. Embracing simplicity led to a lack of expressiveness, a typical example being the absence of generics. While this isn't entirely a bad thing, it is, again, a trade-off.
So they make trade-offs all the time. Then why are generics not considered good enough to justify a trade-off? Because they think so. It is their opinion, sure, but claiming generics are impossible because of performance isn't very convincing to me.
Sure, there are lots of trade-offs - you can't build anything significant without trade-offs. Also, you can still say performance is a value and trade it away for other things, e.g. GC. It's a bit silly to say that if something isn't as fast as C/C++ right away then it doesn't take performance seriously. I don't think anyone's being dishonest in making such trade-offs - if you have multiple competing values, then trade-offs are unavoidable.
The big difference with things like GC and the compile-time slowdown from rewriting the compiler in Go is that they are not inherent to the language. That stuff can and will improve. With generics, if done incorrectly, it's more or less permanent. So I appreciate the conservatism there, and I don't think it's fair (or accurate) to call it dishonest.
> The big difference with things like GC and the compile-time slowdown from rewriting the compiler in Go is that they are not inherent to the language.
Actually, I think GC is inherent to the language, and that's why I listed it as one of the trade-offs that Go made (not the Go implementation made). Almost every language construct depends on GC, e.g. append(), make(), and more. Even the following innocent-looking code:
func f() *int {
    a := 1
    return &a
}
is GC-dependent, because it would result in a dangling pointer in non-GC languages. But the code is perfectly fine in Go because GC is inherent to the language. It is not possible to make GC optional in Go, so the performance penalty will stay there forever, only alleviated as the implementation matures (or sometimes regressed, as Go 1.5 shows), unless you use only unsafe.Pointer all the time.
You can already define functions that work on every type that implements Foo, no? I've never coded in go, but I thought that was the whole point of interfaces.
And I don't see how adding generics would solve the problem of being able to add a FooImpl2 to a slice of FooImpls. Java has generics and they don't allow covariant generic collections for this same reason.