SCons: A Software Construction Tool (scons.org)
94 points by rbanffy on June 9, 2019 | 79 comments


The maintainers of SCons have long argued that the perceived performance and scalability issues do not exist (https://github.com/scons/scons/wiki/WhySconsIsNotSlow).

I've long been really excited about SCons but eventually decided to move away because it became unbearably sluggish for a large-ish codebase with >180K lines of C++ code split into many files. Another issue is cross-platform builds. SCons breaks every time there is a new version of Visual Studio, and it takes many months until an updated version restores compatibility.


That blog post is an impressive logical leap on the author's part.

I realise the post is not recent, but the benchmarks are on seriously outdated software and hardware; those specs are over a decade old at this stage. His conclusions are also misleading: all of his graphs show that make is 2-10x faster than SCons, and that make scales better than SCons. He seems hell-bent on proving there's no quadratic complexity, despite there being an order-of-magnitude difference!

On the other hand, I don't think it's reasonable to assume every build system will immediately support every compiler/IDE. At the end of the day, scons is an open source project, and if that's the only issue stopping you from using it, I'm sure they'd be happy to accept patches providing support rather than wait months.


Scons is almost 20 years old at this point. There are projects like waf that tried to fix it but ended up incompatible (and faster and more usable).

Scons has great ideas, but something either about the implementation or about reality fails to deliver.


As is CMake. Make is about to hit a mid-life crisis; other projects like Ninja are faster and more lightweight, projects like WAF/Premake use a scripting language instead of a DSL, and others like FASTBuild claim to support distributed builds. At a certain point, software has to be accepted for what it is, not what it claims to be.


I agree it's a peculiar system to use for benchmarking in 2018, but for comparison purposes it shouldn't matter much as long as the same software/hardware are used for both Make and SCons.

There's definitely some mental gymnastics and careful massaging going on to hide the performance problem. Would be a great example for a "How to lie with charts and graphs" article.


I think it is still invalid to consider for a benchmark. We don't rate a race track based on how quickly 10-year-old cars can navigate it.

In the last decade we have gone from "everyone needs an antivirus" to "Windows 10 is good enough for most people". We've had Spectre and Meltdown, SSDs have become viable and then been superseded by NVMe SSDs, and we've had a decade of OS development, filesystem development and hardware improvements. We've also had huge improvements in build systems; Ninja was only released in 2012!

And yet, even after all of that, the TL;DR of the article is "SCons is an order of magnitude slower than make, but my hardware was the bottleneck before I could tell whether it scaled quadratically, therefore SCons is not slow".


> On the other hand, I don't think it's reasonable to assume every build system will immediately support every compiler/IDE.

If your API is stable, why wouldn't it be reasonable to expect that it should work out of the box?


I don't think Visual Studio has ever claimed to have a stable API. 2019/2017/2015 have been good, but there were definitely some teething issues migrating to their current situation. That doesn't necessitate months of waiting - there is nothing stopping me or you from adding support for an unstable API.


Try WAF. It was created about twelve years ago as a fork of SCons because the author was dissatisfied with its performance problems. WAF is in my opinion a very capable build and configuration system but the main drawback is its small community. The person developing WAF isn't keen on "marketing" it.


The only place I have encountered WAF was at Bloomberg, where it was used to build the C++ foundation library (BDE) that Bloomberg uses in older projects in place of Boost and Std, both of which it predates.

WAF worked, and didn't seem especially slow, unlike Scons, but I frequently needed to code Python to make it do what seemed like pretty ordinary things.


The alternative I've enjoyed is Bazel.


Meson is basically SCons done right.


Personally I see Meson as an attempt to do CMake right. A big difference between Meson and SCons is that SCons handles the execution of the build graph, while Meson delegates execution to Ninja. The nice thing about the former is that while the graph is being walked, new nodes can be added. The nice thing about the latter is that it's incredibly efficient.


Why not both? :-) While the architecture is more similar to CMake, Meson's language reminds me of SCons---but it is a custom DSL and not Python, which makes things less surprising. For example, in SCons a single-element list can be replaced with the element itself, but that is a feature of SCons's builtins: if you write custom Python code, you can have confusing results when you place a string where you must use a list of strings.
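To illustrate the string-vs-list pitfall, here is a minimal, hypothetical Python sketch (not real SCons API): the builtins normalize a lone string into a one-element list, but hand-written helper code usually does not.

    def builtin_like(sources):
        # SCons-style normalization: accept either 'main.c' or ['main.c'].
        if isinstance(sources, str):
            sources = [sources]
        return [s.upper() for s in sources]

    def custom_helper(sources):
        # Naive custom code: iterating a bare string yields its characters.
        return [s.upper() for s in sources]

    print(builtin_like('main.c'))   # ['MAIN.C']
    print(custom_helper('main.c'))  # ['M', 'A', 'I', 'N', '.', 'C'] -- surprise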

Also, SCons is Turing complete, which makes it a bit of a stretch to call the language declarative; Meson keeps simple loops but is not Turing complete, and the maintainers are okay with plans/pull requests that make Meson even more declarative. In Meson there is generally "only one way to do it"; in SCons much less so, which is ironic given that it uses Python as the language.


All these systems annoy me with their dependencies... the only one that I kind of don't mind is premake, because it consists of a single self-contained executable. However, even with that you need to have premake available. I'd prefer it if these programs took a page out of Autotools' book and created configuration scripts that rely only on whatever the target OS has available (plus the compiler), without the need to install a separate build system, because after a while you end up with a bunch of different build systems since everyone prefers theirs.


Meson's only dependency is Python, with no external packages needed---only the standard library.

The only annoying part of Meson is that it only supports a few compilers (GCC, clang, ICC, MSVC), and requires porting to new compilers unlike Autoconf.


Meson is very different to SCons. Meson is more like a better CMake.


It's not slow?

It takes 30 seconds, on a decent-sized codebase, just to tell you that there is "Nothing to do." There is an order of magnitude difference between it and make. It's a complete travesty!


I’ve never heard of half these build systems - are they mostly used for C++-heavy repos?


Probably the biggest SCons project is Blender (unless they've switched to something else since last I worked on it).


Yes.


One of my favorite SCons features is that it determines the necessary build operations from file hashes rather than timestamps. Working on a project with a long link step, it was wonderful to be able to rephrase comments, rename local variables, or tidy code formatting and skip the link step, since scons saw the object files had the same hash.

A few times, the skipped link step revealed to me that the code change I had just made had no effect on the compiler output, usually because the compiler's optimizer was already doing the same thing.

Likewise, hash-based compilation made git operations much smoother, since things like rebasing or jumping back and forth between branches can update timestamps without affecting content.

I know it's possible to hack similar functionality into some other build systems by adding a layer of intermediate hash file targets everywhere. But I'm a bit surprised the idea hasn't seen broader adoption... Having build artifacts indexed by hash seems like it'd be a requirement for any system that eventually wants to support large, distributed builds.
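For anyone curious what hash-based up-to-date checks look like, here is a rough Python sketch (the file names and on-disk format are made up, not SCons internals): the link step is skipped when the object files hash the same as last time, regardless of timestamps.

    import hashlib, json, os

    DB = '.build_hashes.json'

    def sha256(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    def needs_relink(objects, db_path=DB):
        old = json.load(open(db_path)) if os.path.exists(db_path) else {}
        new = {obj: sha256(obj) for obj in objects}
        with open(db_path, 'w') as f:
            json.dump(new, f)
        return new != old            # relink only if any .o content changed

    # if needs_relink(['main.o', 'util.o']):
    #     run_linker()               # hypothetical link step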


Buck and Bazel both work using hashes, if you are looking for a more modern build system with this feature.


Waf uses hashes, too :)


Relevant section of Debian upstream guide: https://wiki.debian.org/UpstreamGuide#SCons

As a Linux distribution maintainer, I second that. There are not that many packages that still use SCons, but ones that do often require boilerplate and/or patches to honor standard environment variables like CC, CFLAGS, LDFLAGS. Cross-compiling a SCons package is a nightmare.

If you like that SCons uses Python, I'd suggest trying Meson. It does everything right and exposes a subset of a Python-like API: https://mesonbuild.com


How easy is it to extend Meson for languages that it doesn't ship support for by default? That always seemed to be one of the strong points of SCons/something many modern build systems don't support so well.


Meson developers are highly opinionated regarding what "should" and "should not" be supported, and aggressively block anything that might be considered, or even lead to "badness". Sometimes this is justified, but it makes one's job very difficult if one disagrees with them. For example, they insist on completely integrating Rust into the build framework, but also (apparently) don't have the manpower to actually implement that, so, coupled with the fact that extending the language is impossible from within the language, practical Rust (i.e. any dependencies, any Cargo use at all) is difficult to impossible.


It's not possible in the way it's possible in SCons. You can basically use custom_target, but its usefulness is limited since there are no user-defined functions in Meson's subset of Python. CMake is a little better in this regard, since it at least offers macros and functions.


I'm a build engineer and SCons is one of the gems in my tool belt. Where it truly shines is as a framework for building build systems. At its core, it's just a library for building and executing a DAG. What most SCons users work with is the standard library of rules built on top of the core API, which makes it immediately usable as a "high-level" build system like Meson or CMake. In my experience, it's unparalleled when you have to model an entirely custom build flow in a clean way. I've used it to model build flows for custom tool-chains that would have been a nightmare to reason about if they were written in GNU Make and outright impossible with a meta-build system.
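As a flavour of that, here is a minimal SConstruct sketch using the documented Builder API to fold a custom code generator into the DAG (the 'mytool' command and the file names are hypothetical):

    # SConstruct -- wrap a hypothetical generator tool as a first-class Builder.
    env = Environment()

    gen = Builder(action='mytool --in $SOURCE --out $TARGET',
                  suffix='.c', src_suffix='.def')
    env.Append(BUILDERS={'Generate': gen})

    # Generated sources become ordinary nodes in the dependency graph.
    generated = env.Generate('protocol.c', 'protocol.def')
    env.Program('app', ['main.c', generated])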

The only other tools I've found to rival this flexibility are Gradle (see the Software Domain Modeling chapter of its documentation) and Shake (though having to write rules in Haskell makes it a hard pill to swallow).


MongoDB uses Scons to build their product, or did when I was there. MongoDB is coded in respectable modern C++, with an admixture of Java bad habits and a history of Google Style pessimization.

Among the most used articles on MDB's internal wiki is "Scons: It's Not That Slow!"

In fact it is really quite, quite slow, but a universally popular add-on (not officially "supported" when I was there) is a Ninja build scheduler that is blindingly quick.

So, the slowness only manifests when you need to recalculate the dependencies, which doesn't appear in the typical edit-build-test cycle. With Ninja, combined with Ccache and a distributed build tool (whose name escapes me ATM), coding at Mongo was not bad.


This doesn’t exactly reflect well on Mongo’s development practices. Why go through all of this trouble when you could use something that works well out of the box, like CMake? Maybe they like wasting money.


The answer is the same as everywhere: history. Not just the product build, but continuous integration, test suites (test themselves are largely JS, of course) and release management, are all tied together with Scons and lots of custom Python.

It would just be a huge job migrating to something else, for little better possible result than "it still works". And it does already work.


I really loved SCons and used it for most of my personal projects, but beyond a certain project size it really starts to show performance issues. At work we used to use SCons for a major code base and even running it without any source changes took a good 30s.

We eventually switched to Blueprint.


For better or worse, a name like "Blueprint" is virtually useless without a link. Even googling is worthless, since it collides with...everything.


Duckduckgo for [blueprint build system] returns https://github.com/google/blueprint as the first hit, so I think it's that.

Edit: Google, for me, doesn't return that on the first page, but the first result is https://opensource.google.com/projects/blueprint which links to it.


Interesting. DDG for "blueprint build tool" doesn't get anything relevant, but "blueprint build system" works fine. Sigh.



Godot is a relatively big project, and it seems to work well for them.


Many Godot users are unhappy with SCons https://github.com/godotengine/godot/issues/16014


I guess I haven't reached that size (not surprising given what I use it for; mostly LaTeX compilation and PDF postprocessing). How many lines/files did you find to be a problem?


I hadn’t heard of blueprint before. What has been your experience?


I really like the idea of SCons. Writing Python is strictly better than using the CMake language or m4. But, quoting one of my colleagues: "SCons spends 5min to figure out nothing needs to be done."


I have exactly the opposite experience: I do not want to write a program to compile my program.

Having used both scons and waf, and also having used make, cmake and cargo, I would take any of the last three over scons or waf in a heartbeat. I think cargo is far and away the nicest to use because there is very little to actually do to build your program. Using waf or scons was always an exercise of 'I added a header file, so now I need to figure out the scons/waf API in order to tell it what to do with it'.


I was an early adopter of CMake (although nowadays I mostly either use plain Makefiles or languages that have a reasonable build system integrated), and while at the time it did feel much more approachable than the autotools, I remember being baffled by how utterly terrible the syntax was. It's ugly, noisy, redundant, quirky and didn't share any obvious similarity with an existing mainstream language. I wonder how much that cost them in terms of adoption.

At least autotools and m4 have the excuse of all that historical baggage and very broad portability.


Waf (https://waf.io) provides a Pythonic way of much more quickly doing nothing.


I would recommend not programming your builds with the SCons framework (not even in normal Python). See my larger comment. The problem is that you are one step away from creating a giant, unnecessary, incomprehensible mess from which you will never escape. Use less power for the parts of the system which are not your core value.


I worked with SCons for years and it is the worst and most terrible tool I've ever seen, and I would only recommend it to my worst enemies.

At the beginning it might look good, with some "apparently" nice principles, but beyond a certain point it will start growing in volume and complexity, like a cancer grows, until its technical debt devours whole teams of people, impacts everyone's performance, and people start quitting. Adding more people to the build responsibilities doesn't help, because it's impossible to maintain after more than 2 people have been involved in a large SCons project.

Together with the few fancy ideas it has, there are other design principles which do not scale from the human standpoint. Its nature of Python code for everything, and the way builders and scanners are created, make it impossible to follow the code flow and next to impossible to find certain bugs, unless the people who built the mess were gods of communication and ascetics of coding discipline. Every single aspect of the build, as soon as more than one target is compiled, will start growing endlessly, moving the focus from the project's business code to the build environment's code. Its design principles and monolithic structures, together with the terrible underlying implementation of that horrible API, make it impossible to work on SCons as a team, and the larger it gets the more it leads to giant unmaintainable build environments which end up crashing randomly, where no one can explain why or manage to build anything by themselves. And at some point the amount of features Python users end up embedding there creates so much technical debt that migrating to a different build system feels impossible to justify; whereas in reality other build environments might solve almost all the non-problems more easily, by skipping features no one wanted.

You might think I am exaggerating because someone showed you an SCons hello world, but I will reply that I've witnessed 3 different teams at a large corporation escalate their SCons-based build environments to upper management because they became an interminable source of conflicts and a black hole for the company's resources. The best case was a team of 10 people with one person fully dedicated to all the build-related work, and when I say all the related work, I truly mean writing even the most basic aspects of every single build, leaving the rest of the team in ignorance and incapable of extending the build by themselves, or at best copy-pasting a snippet and praying that it works.

Seriously, this tool is wrong for any team bigger than 2 people. It favors an unmaintainable mess only comparable to the worst spaghetti-wormhole codebases.


IMHO SCons had some nice ideas but it falls down in practice in non-trivial projects. Wrestling with eels is easier than doing cross-platform builds in SCons. And while I do love Python, I think using a fully-fledged language is a mistake.

I think a declarative DSL with scripting hooks behind it is a better model. Having used many, many build systems over the years, I've never found one I thought truly got things right (though several came close).

I actually think the ideas behind bjam ought to be revisited. It had a declarative style, with scripting to support it. The implementation was awful sadly; the tiniest mistake or typo would send the parser into a tailspin and the documentation was truly confusing. Any errors were presented to the user at the scripting level, which was even more confusing. But the idea of having toolchains defined separately, and having traits and features was brilliant. There's a lot to learn from there.


I love new technology, but when it comes to build systems I am a stone cold conservative.

Maybe it's just me, but I prefer even the ugliest makefiles to scons and Gradle and whatnot, because I have been burned too many times by "improvements" in the build system making my code stop compiling.

Am I the only one?


I have used it in a large codebase, and it was terrible.

Everybody must write python code to build their C++ files in a project. What could go wrong?


In my experience SCons has the right design, but the implementation is incredibly slow. Buck and Bazel are superior options.


This is why CMake is spectacular (for all of its many, many, many warts): it spits out good, fast configurations for a large number of build runners; and they all tend to work. CMake + Ninja is spectacular if you want to spend less time compiling, but maybe not ideal if you are not familiar with CMake idiosyncrasies and want to configure it for something. I usually find that I can add a pkgconfig-based dependency to a CMake project without much fuss, but anything much more complicated than that I typically do not enjoy.


I'm the current SCons project Co-Manager. Nice to see spirited conversation.

A couple of data points on performance. A current snapshot of MongoDB's repo will do a null incremental build in 53 seconds on my i7-2600 with a non-SSD drive. (Null incremental build = fully built tree, no need to rebuild anything.)

In the last few releases we knocked that down from about 63 seconds.

Currently we're working on a number of performance related items and hope to make some strides here.

Note the article cited is fairly old: https://github.com/scons/scons/wiki/WhySconsIsNotSlow (from before 2.4).

That said there are (currently) faster build systems. SCons tends to be most valuable when the build is complicated.

So as you can see.. the current maintainers don't argue that there are no performance issues with SCons. But we're working on them.


Nooooooooooo, please!

Try Meson instead:

https://mesonbuild.com/


Bleeeech. CASE design? No thanks.

IMHO CASE (really automated policy configuration) is what people are now calling "AI". It's really just a computer augmenting and assisting a human's decision making.


The main complaint here seems to be sluggish performance.

Is it some sort of O(n^2) rule handler deep inside the builder? Or is it just "python-slow"? :/

Can someone give an example of a big open source project which is too slow to build?


I was on the original SCons team. It’s a lot of “python slow”, but if someone built a daemonized version of the build graph that could be updated incrementally from filesystem change events, it could be great. (A la Buck and Bazel.)
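A toy sketch of that idea, using the third-party watchdog package (everything here is illustrative, not an actual SCons feature): a long-lived process keeps the graph in memory and marks nodes dirty from filesystem events instead of re-scanning everything on each invocation.

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    dirty = set()  # stand-in for "nodes needing rebuild" in the live graph

    class MarkDirty(FileSystemEventHandler):
        def on_modified(self, event):
            if not event.is_directory:
                dirty.add(event.src_path)

    observer = Observer()
    observer.schedule(MarkDirty(), path='.', recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
            if dirty:
                print('would rebuild targets depending on:', sorted(dirty))
                dirty.clear()
    finally:
        observer.stop()
        observer.join()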


I think there’s a lot of exaggeration in the comments about SCons being slow. We use it for a 300K LOC codebase, and while it is not as fast as CMake, it’s still perfectly serviceable. To be honest, for me the flexibility to do non-trivial customization of the build directly using Python code is well worth the somewhat slower performance. I always wonder what kind of build process people use if the overhead of the build system outweighs the compile time itself.


No, Scons really is dog-slow. How slow can be discovered by using the Ninja back-end plugin, with Ccache. Then it can be quite pleasant.

Of course if you only ever build from scratch, on one core, compile time will dominate, and you will wonder what the fuss is about.


>I always wonder what kind of build process people use if the overhead of the build system outweighs the compile time itself.

Using Ninja as a benchmark, most of these higher-level build systems take a lot of time to determine they have little to do, while Ninja determines this instantly. The build performance rarely matters; the incremental build performance is where you waste a lot of time, especially if you do TDD cycles.



I'm starting to think build systems should be built like LLVM with separate front-ends and back-ends.

Imagine if CMake were separated into a front-end and back-end. You could use a super nice Scons-esque pythonic front-end instead of CMake's crappy proprietary language and still have all the benefits of CMake!


That's basically the way things already are. Current backends are things like GNU Make, Ninja, Xcode and Visual Studio. CMake is the frontend.


Well what I guess I mean is, I wish CMake were an API. That way I could use the language of my choice to drive CMake instead of using its quirky/esoteric language.

Ninja is indeed a build system backend but it's too low level. I want an intermediate- to high-level backend that abstracts the common things I want to do.


But why do you think the API would look any nicer?


Well the API would have bindings for common languages


Quoting some old movie, "But you'd still have to kiss him! Eww."


I think this kind of feature set is what Bazel is designed to do. All languages are "rule sets" and you can plug in your own rules and every language already has rules.


The best thing about SCons is the build cache. I wish more build tools had that.


Please, no. My experience with the scons build cache is that it exists solely to introduce mysterious compile failures when the cache is out of sync. Eventually you get so distrustful of it that you end up just blowing away the cache as a matter of course.


That usually means your build graph and target signatures are incomplete and you need to figure out where you have a dependency that you haven’t told SCons about.


Exactly. Another way to say this is that if you have an incomplete build graph or target signatures, you will find out from your mysteriously corrupt build output.


It's a pretty old idea, going back to "derived objects" in ClearCase aka wink-ins.

If you think about it though, it's not hard to cons up a tool that files objects on disk by normalizing all the inputs to a compile step and retrieving that object if it exists. Here's one, but there are others.

https://www.commandlinux.com/man-page/man1/ccache.1.html
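A rough Python sketch of that normalize-and-file-away idea (paths and flags are illustrative; this is not how ccache is actually implemented): key the object file by a hash of the preprocessed source, the compiler identity and the flags, and reuse it on a hit.

    import hashlib, os, shutil, subprocess

    CACHE = os.path.expanduser('~/.toy_object_cache')

    def compile_cached(source, obj, cc='cc', flags=('-O2', '-c')):
        # Normalize the inputs: preprocessed source folds in all headers,
        # and the compiler version plus flags cover everything else.
        pre = subprocess.run([cc, '-E', source], capture_output=True,
                             check=True).stdout
        ver = subprocess.run([cc, '--version'], capture_output=True,
                             check=True).stdout
        key = hashlib.sha256(pre + ver + ' '.join(flags).encode()).hexdigest()
        cached = os.path.join(CACHE, key)
        os.makedirs(CACHE, exist_ok=True)
        if os.path.exists(cached):
            shutil.copy(cached, obj)      # cache hit: skip the compiler
            return
        subprocess.run([cc, *flags, source, '-o', obj], check=True)
        shutil.copy(obj, cached)          # cache miss: store for next time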


It might be ready for a revival. Bazel is halfway there.

http://beza1e1.tuxen.de/version_control_and_build_systems.ht...


> going back to "derived objects" in ClearCase aka wink-ins.

Now that's a VCS I have not heard mentioned in a long time. I shudder to think some entrenched legacy project is still stuck on ClearCase.


My experience is that it ends up corrupted, bringing lots of obscure errors no one can explain, until the problem finally disappears once you remove all precompiled files, caches and databases.


Can somebody summarize how it compares to e.g. conan.io? I get the Python approach, but yet another build tool?


> yet another build tool?

Scons is 19 years old...


[flagged]


Why? Can you please explain with some reasons?



