Snarky response: that's more steps toward the long-held dream of the Google operations department: to be able to just issue all devs cheap commodity Chromebooks, because all the compute happens on a (scale-to-zero) Cloud Shell or Cloud Workstation resource.
Actual response:
• For dev-time iteration, you want local builds; for large software (e.g. Chrome), you make this work by making builds incremental. So the first local build takes a few hours, but subsequent rebuilds after a change are down to ~30s.
• But for building releases, you can't rely on incremental builds; incremental builds (i.e. building on top of a cache from previous arbitrary builds) would be non-deterministic and give you non-reproducible builds, exactly what a release manager doesn't want. So releases, at least, are stuck needing to "build the world." You want to accelerate those, and remote build infra is the way to go; remote, distributed build infra, ideally (think: ccache on Flume). A rough sketch of what that looks like follows this list.
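To make that concrete, here's a minimal .bazelrc sketch of pointing Bazel at remote infra. The flags are real Bazel flags; the endpoints and job count are placeholder assumptions, not a recommendation:

    # .bazelrc -- hypothetical remote cache + remote execution setup.
    # Flags are standard Bazel; hostnames are made up for illustration.

    # Share a content-addressed cache of action outputs across machines.
    build --remote_cache=grpcs://cache.example.com

    # Fan compile/link actions out to a remote executor pool; the local
    # bazel client still plans the action graph and directs the build.
    build --remote_executor=grpcs://rbe.example.com
    build --jobs=200

    # Don't let a flaky remote kill iteration: fall back to local execution.
    build --remote_local_fallback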
These remote/distributed builds still fit the philosophy in the abstract, though: a remote build is not the same as a CI build, after all; the dev's own workstation still acts as the planner and director of the build process.
It tries, but it's really more of an operational benefit than a build-integrity one. Operational, in that it works to your advantage for build traceability and for avoiding compile-time Heisenbugs, since you the developer can hold your workstation's build-env constant. It's not a build-integrity guarantee, i.e. something a mutually-distrusting party could use to audit the integrity of your build pipeline by taking random sample releases and rebuilding them themselves to the same resulting SHA, à la Debian's reproducible builds.
Bazel doesn't go full Nix — it doesn't capture the entire OS + build toolchain inside its content-fingerprinting to track it for changes between builds. It's more like Homebrew's build env — a temporary sandbox prefix containing a checkout of your project, plus symlinks to resolved versions of any referenced libs.
Because of this, you might build, then upgrade an OS package that your build references (or that contains parts of your toolchain), and then build again; Bazel (used on its own) won't know that anything's different. But now you have a build that doesn't look quite like it would if you had built everything against the newest version of the package.
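To illustrate where the gap sits, here's a hypothetical BUILD target (names made up); the fingerprinting behavior described in the comments is Bazel's:

    # BUILD -- hypothetical target that leaks a system dependency.
    cc_binary(
        name = "server",
        srcs = ["server.cc"],
        # Resolved by the host linker against whatever libssl/libcrypto
        # the OS has installed right now. Bazel fingerprints server.cc and
        # this BUILD file, but not /usr/lib/libssl.so, so upgrading the OS
        # openssl package does not invalidate the cached result.
        linkopts = ["-lssl", "-lcrypto"],
    )

A hermetic setup would instead declare openssl as a tracked external dependency, so its contents land inside Bazel's fingerprint.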
I'm not saying you can't get deterministic builds from Bazel; you just have to do things outside of Bazel to guarantee that. Bazel gets you maybe 80% of the way there. Running the builds inside a known fixed builder image (that you then publish) would be one way to get the other 20%.
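One hedged sketch of that approach, using Bazel's experimental Docker sandbox so every action runs inside a pinned builder image (the image name and digest are placeholders, and the flags are experimental, so check them against your Bazel version):

    # .bazelrc -- run each build action inside one pinned builder image,
    # holding constant the toolchain that sits outside Bazel's fingerprint.
    build --experimental_enable_docker_sandbox
    build --spawn_strategy=docker
    build --experimental_docker_image=gcr.io/my-org/builder@sha256:<digest>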
I have a feeling that Blaze is probably better for this, though, given all the inherent corollary technologies (e.g. objFS) it has within Google that don't exist out here.