Usually it's because, as soon as the build gets a little complex, you need to define your own logic. Before you know it, you're scripting your build tool in a Turing-complete language. And of course users of language X want to script their builds in X.
That just makes sense - define a build DSL inside your own language. For some reason, though, the de facto build DSL used by Scala suffers from all the issues mentioned by Li Haoyi in his blog post.
Of course it will be hard, but most build tools do largely the same things - no matter the language or ecosystem.
I haven't seen any real effort to standardize on APIs so that you could write your "plugins" in your language of choice and the "glue" code would be a language-independent build tool.
I'd say I've mostly given up on this. I have a Makefile for every project, which in turn of course calls the language-specific build tool. For example, when I wanted to use webpack (migrating from a mess of gulp+bower+npm for angular.js to just webpack for Angular), I wrote a simple PoC Makefile with hacky sed/awk lines and a lot of cp and cat in just a few hours (replacing a massive build script).
The result worked as intended, and I could then refactor everything for which there's a nice webpack plugin and get rid of my Makefile hacks. In the end, the Makefile just provides a common starting point that serves as "executable docs" for every language-specific tool: "make bootstrap/test/run/build", no matter what language the project is in - for the low cost of having to update the Makefile when you change something major or some arguments. I'm not advocating this for all orgs (please use what makes YOUR team happy), but I've heard nothing but positive things about it from my team.
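For concreteness, a minimal sketch of that entry-point pattern, assuming a hypothetical npm/webpack project - the target names and the commands behind them are placeholders, and recipe lines must be tab-indented:

```make
# Common entry points; each target just delegates to the
# language-specific tool, so the Makefile doubles as executable docs.
.PHONY: bootstrap test run build

# One-time setup for a fresh checkout.
bootstrap:
	npm install

test:
	npm test

run:
	npm start

build:
	npx webpack --mode production
```

The win is uniformity: a newcomer can run `make bootstrap && make run` without knowing whether the project is npm, sbt, or cargo underneath.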
It would be glorious if anyone could understand it. After reading their readme (https://github.com/dhall-lang/dhall-lang) for about 10 minutes, actually trying to understand it, I don't.
That's a problem. A big one.
Maybe you could explain for the feeble-minded such as myself:
- how would you represent a simple key=value in dhall?
- how would you represent a dictionary in dhall (multiple key=value entries)?
- how would you represent an array in dhall?
- maybe a more complex structure, say, translate (by hand) a package.json file to dhall?
(Nevermind, I found the tutorial. But the readme page is IMO a big fail if dhall wants wide adoption. If it doesn't, then ignore my criticism :) )
Can you explain what you do that requires a Turing-complete build language and couldn’t be done, for example, in the ‘make equivalent’ of the DTrace scripting language?
* Checking some condition before building something in a particular way.
* Retrying something if it fails.
* A hundred other scenarios which crop up occasionally, require about four lines of Turing-complete code, and are an absolute bitch to handle in a language deliberately designed to make that impossible (see the sketch below).
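To illustrate, here's roughly what the first two bullets look like in plain GNU Make, where both the conditional and the retry end up escaping to `ifeq` and the shell (`USE_CLANG`, `DEP_URL`, and the target are hypothetical):

```make
# Conditional: pick a compiler based on a flag passed to make.
ifeq ($(USE_CLANG),1)
CC := clang
else
CC := gcc
endif

# Retry: make has no native loop, so drop into the shell.
# The recipe fails only if all three attempts fail.
fetch-deps:
	for i in 1 2 3; do curl -fsSL $(DEP_URL) -o deps.tar.gz && break; done
```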
The ability to do if…else doesn’t imply Turing completeness. Neither does the ability to retry operations.
An easy way to see that is that “Turing complete” implies “can be used to simulate any Turing machine”, which in turn implies “suffers from the halting problem”. Consequently, any language that doesn’t suffer from the halting problem cannot be Turing complete.
So, for example, any language that doesn’t allow backwards jumps, or that only allows loops whose number of iterations is provably finite at compile time, isn’t Turing complete.
In practice it does tend to come with Turing completeness, even if it doesn't guarantee it. I know of no build language with conditionals and loops built in that isn't Turing complete. At that point... why bother? Why not just use a good Turing-complete language and give it some libraries that make building software easier?
There are plenty of times and places where removing Turing completeness makes complete sense - configuration, user stories, translation files, simple DSLs that operate in a very restricted problem space, etc. Building software isn't a restricted problem space.
Because those languages are Turing complete. That means, for example, that the build system cannot guarantee that a build will finish, even ignoring that individual steps may run forever.
Some argue DTrace’s scripting language and PDF (and, IIRC, various packet filter languages) are popular because they aren’t Turing complete.
However, make adds little value in comparison to a "language-specific" build tool like SBT.
In SBT, I can add `crossScalaVersions := Seq("2.12.0", "2.11")` and everything will be (more or less) automatically compiled and published for both Scala 2.11 and 2.12. That's not something a more generic build tool can ever really hope to achieve.
> everything will be (more or less) automatically compiled and published for both Scala 2.11 and 2.12
But having to worry about minor version differences like this is a wholly unnecessary problem, and one that didn't really exist in the world that make comes from.
These days we build tools on top of tools to solve problems of our own making that just didn't exist 10 or 20 years ago. All the build tools, packaging tools, deployment tools, containers, blah blah blah. None of that crap is needed to deliver working software in the form of 1 statically linked binary + 1 text configuration file, which will suffice for any software in the world.
But having to worry about compatibility between minor versions is still bad.
People have come to expect that minor versions are interchangeable (modulo fixes for clear regressions and changes made in reaction to important outside events like major releases of Java).
> But having to worry about compatibility between minor versions is still bad.
It's also unavoidable.
In theory, either the people doing the minor version releases need to worry about it, make conservative changes, thoroughly test those changes against a large amount of code, etc. - or the people consuming the minor version releases need to worry about it and do much of the same thing. Either way, build tools that can handle compiler versioning are a win.
In practice, both the people doing the minor version releases and the larger codebases consuming those minor version releases need to worry about it. "Fixes for clear regressions and changes made in reaction to important outside events" is already a giant caveat that will lead to breakage at times, and even the best teams will occasionally fail to meet expectations.
It's not unusual either; AFAIK, rapidly increasing the x in x.y.z was very unfashionable before the initial Chrome release. Many Java projects (e.g. older Apache Foundation projects: Apache POI, etc.) still follow that versioning scheme. It kinda looks like semver, but there is no commitment to preserving compatibility on y changes.
You have this pretty much in Gradle for the JVM. It's become the de facto standard for building mixed-language JVM projects at my company because it has good plugin support for just about everything on the JVM.
Once you need to do anything moderately complex, people drop into OS-specific scripts for language-specific tasks. I think people are just gonna realize the best build config is a real-yet-simple language (Go?) where features are built as libraries you depend on. I have built a couple of more complex cross-platform build scripts this way. We're developers - why can't we treat our build scripts like the rest of our code? (Granted, cargo is a great middle ground, but it's language-specific.)
Agreed, except for one thing - it works best when a build-tool language is mostly used declaratively, except for those little cases where you need imperative code as well. I had great fun building Lake (a build tool in Lua).
It also sucks for C and C++. You need to use a compiler flag to generate the header dependencies, or you risk subtle bugs during incremental compilation if you forget to update them manually.
At least the makers of make understood this.
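For reference, the usual GCC/Clang workaround looks something like this in GNU Make (a sketch; the variable names and the `app` target are placeholders):

```make
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

# -MMD emits a .d fragment per object listing its header dependencies;
# -MP adds phony targets so a deleted header doesn't break the build.
CFLAGS += -MMD -MP

app: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

# Pull in the generated fragments; '-' ignores them on the first build.
-include $(DEPS)
```

With this, touching a header rebuilds exactly the objects that include it, with no hand-maintained dependency lists.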