* It's slow
* There's little sense of its overall abstraction (or how the abstractions you find fit together)
* Its configuration language is undecipherable (if you want to use operators liberally, consider the hoops people have to jump through to look up `<++=`)
* It's hard to work through the documentation. I can't put my finger on it, but I think because the abstraction(s) are hazy it's hard to work out how the documentation fits together
* Writing plugins is non-trivial
* Speeding up compilation is a non-trivial exercise in futility
* It's too clever by half...
Did I mention it invokes the compiler on its own project definition? :)
All fairly reasonable critiques, but a few are actively being addressed or have already been addressed.
> if you want to use operators liberally, consider the hoops people have to jump through to look up `<++=`
To be fair, the SBT maintainers have been aware of this for a while: they deprecated them all in 2013, and removed them in the 1.0 release from 2017.
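For anyone maintaining old build files, the migration was mostly mechanical. A rough sketch of the deprecated operator style next to the `.value` style that replaced it (the dependency is just an illustration):

```scala
// sbt 0.12-era, deprecated operator style:
//   libraryDependencies <+= scalaVersion(sv => "org.scala-lang" % "scala-reflect" % sv)

// sbt 0.13+ / 1.x equivalent, using .value inside the setting body:
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
```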
> Speeding up compilation is a non-trivial exercise in futility
Do you mean for the developer? It's more of a critique of Scala than of SBT, but incremental compilation got a huge speedup with the new Zinc compiler in SBT 1.0, and Lightbend has started actively benchmarking and improving compiler speeds.
:= is the standard SBT assignment operator. You see it everywhere, so its use is clear.
/ is used for directory traversal. It's not mysterious: it's an enriched method sbt provides on top of java.io.File for building paths. This is not as common, but it is sometimes used to set up custom source directories.
% and %% are used all the time for declaring dependencies. % is the standard separator, and %% automatically selects the artifact built for the Scala version used in the project. There is also %%%, used in Scala.js. You can use only % if you want to; all the other two really do is append a suffix to the artifact name.
So yeah, useful. Still, they could probably have done without these custom symbols.
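Put together, a small build.sbt exercising all of the above might look like this (a sketch; the names and versions are illustrative):

```scala
name := "example"                                   // := assigns a setting
scalaVersion := "2.12.0"

// / appends path segments to a base directory (an sbt enrichment of File)
scalaSource in Compile := baseDirectory.value / "src" / "scala"

libraryDependencies ++= Seq(
  "com.typesafe"  %  "config"    % "1.3.1",         // % separates group/artifact/version
  "org.scalatest" %% "scalatest" % "3.0.4" % Test   // %% appends the Scala binary version,
)                                                   // resolving scalatest_2.12 here
```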
Usually it is because as soon as the build gets a little complex, you need to define your own logic. In a minute, you are scripting your build tool with a Turing complete language. And of course users of language X want to script their build using X.
That just makes sense: define a build DSL inside your own language. For some reason, though, the de facto build DSL used by Scala suffers from all the issues mentioned by Li Haoyi in his blog post.
Of course it will be hard, but most build tools end up doing the same thing, no matter the language or ecosystem.
I haven't seen any real effort to try to standardize on APIs, so you could write your "plugins" in your language of choice and the "glue" code would be this language-independent build tool.
I'd say I've mostly given up on this. I have a Makefile for every project, which in turn of course uses the language-specific build tool, but for example when I wanted to use webpack (migrating from a mess of gulp+bower+npm from angular.js to just webpack for angular) I wrote a simple PoC Makefile with hacky sed/awk one-liners and a lot of cp and cat in just a few hours, replacing a massive build script.
Then the result was working as intended and I could refactor all the stuff where there's a nice webpack plugin and get rid of my Makefile hacks. Now in the end the Makefile just provides a common starting point that has "executable docs" for every language specific tool. "make bootstrap/test/run/build", no matter what language the project is in - for the low cost of having to update the Makefile when you change something major or some arguments. Not advocating this for all orgs (please use what makes YOUR team happy) but I've heard nothing but positive things about this from my team.
It would be glorious if anyone could understand it. After reading their readme (https://github.com/dhall-lang/dhall-lang) for about 10 minutes, actually trying to understand it, I don't.
That's a problem. A big one.
Maybe you could explain for the feeble-minded such as myself:
- how would you represent a simple key=value in dhall?
- how would you represent a dictionary in dhall (multiple key=value entries)?
- how would you represent an array in dhall?
- maybe a more complex structure, say, translate (by hand) a package.json file to dhall?
(Nevermind, I found the tutorial. But the readme page is IMO a big fail if dhall wants wide adoption. If it doesn't then, ignore my criticism :) )
Can you explain what you do that requires a Turing complete build language and couldn’t be done, for example, in the ‘make equivalent’ of the dtrace scripting language?
* Checking some condition before building something in a particular way.
* Retrying something if it fails.
* A hundred other different scenarios which crop up occasionally, require about 4 lines of turing complete code, and are an absolute bitch to handle in a language deliberately designed not to make it possible.
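The first bullet, for instance, really is about four lines once you have real code available. A sketch in sbt (the optimizer flag is just an example):

```scala
// enable the 2.12 optimizer only on Scala versions that support it
scalacOptions ++= {
  if (scalaVersion.value.startsWith("2.12")) Seq("-opt:l:method")
  else Seq.empty
}
```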
Ability to do if…else doesn’t imply being Turing complete. Neither does the ability to retry operations.
An easy way to see that is that “Turing complete” implies “can be used to simulate any Turing machine”, which in turn implies “suffers from the halting problem”. Consequently, any language that doesn’t suffer from the halting problem cannot be Turing complete.
So, for example, any language that doesn’t allow backwards jumps, or only uses loops with numbers of iterations that are provably finite at compile time isn’t Turing complete.
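To make that concrete, here's a hypothetical retry helper sketched in Scala. Because the iteration count has a fixed upper bound, every call terminates; conditionals and retries alone never force Turing completeness:

```scala
// Hypothetical sketch, not from any real build tool.
def withRetry[A](maxAttempts: Int)(step: () => Either[String, A]): Either[String, A] = {
  require(maxAttempts > 0)
  var result: Either[String, A] = Left("not attempted")
  var attempt = 0
  while (attempt < maxAttempts && result.isLeft) { // bound is fixed before the loop starts
    result = step()
    attempt += 1
  }
  result
}
```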
In practice it does tend to come with Turing completeness, even if it doesn't guarantee it. I know of no build language that builds in the ability to do conditionals and loops and isn't Turing complete. At that point... why bother? Why not just use a good Turing complete language and give it some libraries that make building software easier?
There are a lot of times and places where removing Turing completeness makes complete sense: configuration, user stories, translation files and simple DSLs that operate in a very restricted problem space, etc. Building software isn't a restricted problem space.
Because those languages are Turing complete. That means, for example, that the build system cannot guarantee that a build will finish, even ignoring that individual steps may run forever.
Some argue DTrace's scripting language and PDF (and, IIRC, various packet filter languages) are popular because they aren't Turing complete.
However, make adds little value in comparison to a "language-specific" build tool like SBT.
In SBT, I can add `crossScalaVersions := Seq("2.12.0", "2.11.12")` and everything will be (more or less) automatically compiled and published for both Scala 2.11 and 2.12. That's not something a more generic build tool can ever really hope to achieve.
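For reference, a minimal sketch of what that looks like (versions illustrative):

```scala
// build.sbt
name := "example"
scalaVersion := "2.12.0"
crossScalaVersions := Seq("2.12.0", "2.11.12")
```

Prefixing a task with `+` in the sbt shell (e.g. `+test` or `+publishLocal`) then runs it once per listed version, producing `example_2.11` and `example_2.12` artifacts.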
> everything will be (more or less) automatically compiled and published for both Scala 2.11 and 2.12
But having to worry about minor version differences like this is a wholly unnecessary problem, and one that didn't really exist in the world that make comes from.
These days we build tools on top of tools to solve problems of our own making that just didn't exist 10 or 20 years ago. All the build tools, packaging tools, deployment tools, containers, blah blah blah. None of that crap is needed to deliver working software in the form of 1 statically linked binary + 1 text configuration file, which will suffice for any software in the world.
But having to worry about compatibility between minor versions is still bad.
People have come to expect that minor versions are interchangeable (modulo fixes for clear regressions and changes made in reaction to important outside events like major releases of Java).
> But having to worry about compatibility between minor versions is still bad.
It's also unavoidable.
In theory, either the people doing the minor version releases need to worry about it, make conservative changes, thoroughly test those changes against a large amount of code, etc., or the people consuming the minor version releases need to worry about it and do much of the same thing. Either way, build tools that can handle compiler versioning are a win.
In practice, both the people doing the minor version releases and the larger codebases consuming those minor version releases need to worry about it. "Fixes for clear regressions and changes made in reaction to important outside events" is already a giant caveat that will lead to breakage at times, and even the best teams will occasionally fail to meet expectations.
It's not unusual either; AFAIK, rapidly increasing the x in x.y.z was very unfashionable before Chrome's initial release. Many Java projects (e.g. older Apache Foundation projects such as Apache POI) still follow that versioning scheme. It kinda looks like semver, but there is no commitment to preserving compatibility on y changes.
You have this pretty much in Gradle for the JVM. It's become the de facto standard for building mixed-JVM-language projects in my company because it has pretty good plugin support for just about everything JVM.
Once you need to do anything moderately complex, people drop into OS-specific scripts for language-specific tasks. I think people are just gonna realize the best build conf/file is a real-yet-simple language (Go?) whereby features are built as libs to depend on. I have built a couple of more complex cross-platform build scripts this way. We're developers, why can't we treat our build scripts like the rest of our code? (Granted, Cargo is a great middle ground, but language-specific.)
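As a toy illustration of that idea (entirely hypothetical; a real version would grow libraries for caching, dependency resolution, and so on):

```scala
// Build.scala — the build is just a program; "features" are plain functions and libraries
import scala.sys.process._

object Build {
  def compile(): Int = {
    new java.io.File("out").mkdirs()               // ensure the output dir exists
    Seq("scalac", "-d", "out", "src/Main.scala").! // exit code of the compiler
  }

  def test(): Int =
    if (compile() == 0) Seq("scala", "-cp", "out", "MainTest").! else 1

  def main(args: Array[String]): Unit = args.headOption match {
    case Some("compile") => sys.exit(compile())
    case Some("test")    => sys.exit(test())
    case _               => println("usage: Build [compile|test]")
  }
}
```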
Agreed, except for one thing: it works best when a build-tool language is mostly used declaratively, except for those little cases when you need imperative code as well. I had great fun building Lake (a build tool in Lua).
It also sucks for C and C++. You need to use a compiler flag (e.g. gcc's -MMD) to generate the header dependencies, or risk subtle bugs happening during incremental compilation if you forget to update them manually.
For those fed up with SBT, it's worth taking a look at CBT [0], which is an attempt to do this job more simply and intuitively using plain Scala code (no affiliation).
It's interesting that Martin Odersky (creator of Scala, and possibly also involved in SBT) responded in agreement, linking his own critique from several years ago.
I found Odersky's comments and concern less useful than Li Haoyi's. Li's focused on the unclear - and broken - abstractions, while Odersky's seemed to focus more on the misapplication of language features. Am I reading this incorrectly?
I think that's probably a fair comment (from my admittedly somewhat limited understanding). That said, both do seem to share a drive for simplicity and clarity at their core, and Li Haoyi's post does have the advantage of another 6 years of insight / subsequent changes to SBT.
I had an interesting conversation once, at a Scala meetup. I was talking with someone about SBT being a gigantic pain in the ass, and their fair point was "I like SBT, it's better than Maven!"
While SBT might be better than Maven for the long Scala compilation times, it's nowhere near the user-friendliness of more recent build tools in languages like Go, Rust, JavaScript, or [put your favorite lang here].
It strikes me how close the Scala community is to Java's.
Go doesn't have a build tool like SBT or Gradle, it doesn't even have an official package manager.
Go has a compiler and that's all. You could argue Go is easier to build than Java, but that's because it makes a lot of assumptions about how you organize your code on your disk, which can be a hindrance if it doesn't fit the way your company operates. As always, Go is simple for simple use cases and gets complicated as soon as you try straying from what the Go developers decided for you; there is no flexibility in that language.
Javascript has tons of build tools that are all clunky and horrible to use.
which build tool does Go have?
or JavaScript?
Rust, yes. But the others are more glue than a real build tool.
(And npm + gulp, grunt, webpack, browserify or whatever is way more painful than sbt; besides, there are better alternatives to all of them.)
It is funny he mentions Bazel in the article but doesn't consider Pants [1] at all.
For me, not just in a work context, Pants is a clear winner. Most SBT projects already have a bit of a mono-repo flavour: main and sub-projects. Pants brings this to the next level and adds speed and other goodies on top of it.
Also, a lot of CI thinking has been put into the tool already, so it's a breeze to distribute the build across n shards.
Why does the Scala community need its own build tool? Why not just use Gradle? Or, if something is missing, add it to Gradle and then use Gradle.
You can build Scala projects with Ant, Maven and Gradle, but for some reason, a language community often gravitates toward the language-specific tools.
Considering Gradle is heavily used in Java, Kotlin (and Groovy), while its DSL is in Groovy (and the back end is largely Java these days), that's not always true.
I don't think it was a conscious decision to not support Gradle, etc, it's just that sbt had a few highly motivated maintainers. This meant that sbt became the easiest tool to use when starting a Scala project.
sbt's incremental compilation was also very important for a language with long compile times and still-developing IDE support.
It would be great if Scala had great Gradle support. Nowadays some large Scala users maintain Gradle support for Scala to a good level.
It's possible that back in 2008 Gradle did not seem like an obvious choice (assuming it is now; I have no experience with it.) And after a community has effectively standardised on a tool, switching to something else isn't done lightly.
The Scala community provides Gradle plugins for scala itself (through zinc), scoverage, scalastyle, scalafmt, wartremover, scala protobuf integration, and scala-aware intellij project generation from your build.gradle.
When sbt came out, Gradle wasn't as popular; the choices at the time were either Maven or Ant. If you are now starting with Scala, you have more choices.
I've been a happy Scala and Gradle user for years now, and it keeps getting better.
Also, tooling works like a charm, with first-class IntelliJ support for Gradle.
IMHO Gradle is in a sweet spot of making simple things really simple and not making complex things (integrating native code, custom plugins, multi-project builds, Java interop) unnecessarily complex. Gradle has also been getting faster and more user-friendly with every release.
I have been trying to gather the motivation to dip my toe into SBT, but without success.
One solution I've used for an uncooperative proxy is to put my own proxy in front of it. Then I connect the tool to my proxy, which sets things up nicely for the real external-facing proxy. I've used privoxy for this, it's pretty easy to set up. Here's a post from some dude explaining what you might need to do: https://siderite.blogspot.com/2014/02/using-prixovy-to-forwa...
I guess it makes sense to have your own build tool written in the language when you do not want to depend on the JVM (scala-native, for example). Then you don't have to hope maven/ant have their own native implementation.
As a plus you get to write your own DSL, which might be more acceptable to the language's users than one in another language or in XML. Of course, you might also invent a bad DSL; it's no easy task.
Your build tool being built on the JVM doesn't preclude it from building non-JVM targets. See for example bazel, buck, and pants building a bunch of different languages.
I wouldn't go so far as rbt to say that it's unmaintained, but the development attitude is that of maintenance (features like Java 9 modules compatibility) rather than active development. Gradle has improved by leaps and bounds very quickly; the Gradle build daemon makes build tasks feel much quicker and powerful in day-to-day practice, and is clearly now the preferred tool for new projects.
Look, if you have a lot of pre-existing Maven builds then of course the story gets more complicated. Large enterprises usually have higher priorities than spending large $X on any kind of migration that's not absolutely necessary, including the costs of retraining dozens or hundreds or thousands of developers to use a new build tool and of overhauling tooling that was built on top of the old solution (e.g. deployment scripts invoking mvn deploy, and Maven reports). That can often result in choosing options that are technically worse but organizationally better.
What I mean is when you have the freedom to choose a new technology stack because it's a new project - because a lot of companies still won't give you that freedom even with new projects.
Unless you have some objection against Gradle itself?
Maven doesn't require 2 GB, an SSD, a background daemon and different dependency commands to be fast.
Also, Maven has never been the subject of build-performance talks at multiple conferences four years in a row.
For those who don't suffer from XML allergy, Maven is quite alright: IDEs can provide completions that actually make sense, plus nice graphical tooling for dependency management.
It is. It doesn't even need to be updated that often, as most of the functionality is in plugins, a large set of which are maintained by the core development team.