That first recommendation of pinning exact versions of each and every dependency is borderline insane. That's exactly what lockfiles are for, and they're used by default.
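For anyone who hasn't looked inside one: the lockfile already records an exact version, tarball URL, and integrity hash for every dependency, transitive ones included. A minimal way to see that, assuming a project with a lockfileVersion 2+ package-lock.json and lodash somewhere in the tree (the output shape is illustrative):

```sh
# Print the exact pinned metadata the lockfile records for one dependency
# (lodash is just an example; substitute any package in your tree).
node -p "require('./package-lock.json').packages['node_modules/lodash']"
# -> { version: '4.17.21',
#      resolved: 'https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz',
#      integrity: 'sha512-...' }
```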
To be honest, NPM is a complete shitshow when it comes to this, and I wish every single person who had a hand in developing it would have their keyboard taken away and never be allowed to touch any developer tooling ever again.
The top answer has 3 updates to it and links to 2 GitHub issues with conflicting information.
One says:
>If you run npm i against that package.json and package-lock.json, the latter will never be updated, even if the package.json would be happy with newer versions.
The other says:
>The module tree described by the package lock is reproduced. This means reproducing the structure described in the file, using the specific files referenced in "resolved" if available, falling back to normal package resolution using "version" if one isn't.
>This holds no longer true since npm 5.1.0, because now the generated module tree is a combined result of both package.json and package-lock.json. (Example: package.json specifies some package with version ^1.1.0; package-lock.json had locked it with version 1.1.4; but actually, the package is already available with version 1.1.9. In this case npm i resolves the package to 1.1.9 and overwrites the lockfile accordingly, hence ignoring the information in the lock file.)
So good luck figuring out what is true, but it seems to also depend on your version of NPM. Also, don't get me started on the existence of an entirely separate command, `npm ci`, which is supposed to be the reproducible one.
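For what it's worth, the split on recent npm versions is roughly this (a sketch):

```sh
# Reproducible install: wipes node_modules, installs exactly what
# package-lock.json says, and errors out if package.json and the lockfile
# disagree, instead of silently rewriting the lock.
npm ci

# Regular install: resolves against package.json and will update
# package-lock.json whenever the two are out of sync.
npm install
```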
Figuring out what is true for npm v5 is quite the waste of time given that we are currently at v11, and v5 is what this ancient Stack Overflow thread is about. npm certainly has a troubled past, otherwise we wouldn't have yarn and pnpm and whatnot. But _today_, npm install works very reasonably with lockfiles.
You are right, but starting with an insane default (there is a lockfile, but installing overrides and updates it), then introducing another command that does the expected behavior (`npm ci`), and then finally making that the default is confusing at the very least.
The lockfile is updated _after_ any new malicious version is downloaded and installed. If we pin the exact version, `npm install` will _not_ download and execute any newly published versions.
That's why we use `npm ci` (or `--frozen-lockfile` in yarn/pnpm) to install exactly the versions in the lockfile. But, by default, the `^` operator combined with a plain `install` will check the registry for new releases and download them.
The primary arguments against pinning versions are missing security updates and increased maintenance overhead. But given the patterns we've seen, the attackers really _hope_ we automatically install new releases.
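To make the mechanics concrete, a hedged sketch: `^1.1.0` means ">=1.1.0 <2.0.0", which is exactly the window a freshly published malicious 1.1.9 slips into, and npm can be told to write exact pins instead (the package name below is made up):

```sh
# Check which versions satisfy a caret range, using the semver package's CLI:
npx semver --range '^1.1.0' 1.1.4 1.1.9 2.0.0   # prints 1.1.4 and 1.1.9

# Write an exact version ("1.2.3") instead of a caret range ("^1.2.3"):
npm install --save-exact some-package

# Or make exact pins the default for every future install:
npm config set save-exact true
```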
npm install does install the exact versions from the lockfile, even though the opposite misconception gets repeated in every single thread about npm here on HN. npm install will not randomly update your direct dependencies, let alone transitive ones.
How does this get repeated over and over when it's simply not true? At least not anymore. npm install will only update the lockfile if you make changes to your package.json. Otherwise, it installs the versions from the lockfile.
> How does this get repeated over and over, when it's simply not true?
Well, for one, the behavior is somewhat insane.
`npm install` with no additional arguments does update the lockfile if your package.json and your lockfile are out of sync with one another for any reason. So to get a guarantee that it doesn't change your lockfile, you must do additional configuration, or guarantee by some external mechanism that your package.json and lock never go out of date with each other. For this reason alone, the advice of "just don't use npm install, use npm ci instead" is still extremely valid: you'd really like this to fail fast if you get out of sync.
`npm install additional-package` also updates your lockfile. Other package managers distinguish these two operations, calling the one that adds a new dependency "add" instead of "install".
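Roughly, the mapping across package managers looks like this (a sketch; `left-pad` stands in for any dependency):

```sh
npm install left-pad   # adds a new dependency AND rewrites the lockfile
yarn add left-pad      # yarn spells the adding operation "add"
pnpm add left-pad      # so does pnpm

# ...while a bare "install" is reserved for materializing what's already
# declared in the manifest and lockfile:
yarn install
pnpm install
```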
The docs add to the confusion: https://docs.npmjs.com/cli/v11/commands/npm-install#save suggests that writing to package-lock.json is the default and that you need to change configuration to disable it. The fact that it won't change your lockfile when package.json and package-lock.json are already in sync is not actually spelled out clearly anywhere on the page.
> You've partially answered your own question here.
Is that the case? If it was ever true (outside of outright bugs in npm), it must have been many, many years and major npm releases ago. So that doesn't justify brigading outdated information.
I mean, it's my #1 experience using npm. I have never once used `npm install` and had a result other than it changing the lockfile. Maybe you want to blame this on the tools I used, but I followed the exact installation instructions of the project I was working on. If it's that common to get it "wrong", it's the tool that is wrong.
My bad, it really annoyed me when npm stopped respecting lockfiles years ago, so I stopped using it. It's great news that they eventually changed their mind.
However, in the rare cases where I am forced to use it to contribute to some npm-using project, I have noticed that the lockfile often gets updated and I get a huge diff even though I didn't edit the dependencies. So I've always assumed that was the same issue of npm ignoring the lockfile, but maybe it's some other issue? idk
Well, there are other kinds of lockfile updates as well, which aren't dependency version changes either. E.g. if the lockfile was created with an older npm version, running npm install with a newer npm version might upgrade it to a newer lockfile format and thus produce huge diffs. But that wouldn't change anything about the versions used for your dependencies.
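If you want to check whether a big diff is just such a format upgrade, the format version sits at the top of the file (the npm-version mapping below is my best understanding, treat it as approximate):

```sh
grep '"lockfileVersion"' package-lock.json
# 1 -> written by npm 5/6
# 2 -> npm 7/8 (backwards compatible with 1)
# 3 -> default since npm 9
```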
Yes. As someone who's using npm install daily, and given the update cadence of npm packages, I would end up with dirty lock files very frequently if the parent statement were true. It just doesn't happen.
Well, but the docs you cited don't match what you stated. You can delete node_modules and reinstall; it will never update package-lock.json, and you will always end up with the exact same versions as before. The package-lock updating happens when you change version numbers in the package.json file, but that is very much expected! So no, running npm install will not pull in new versions randomly.
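This is easy to verify locally, assuming a git-tracked project whose package.json and package-lock.json are in sync:

```sh
rm -rf node_modules
npm install
git diff --exit-code package-lock.json   # exits 0: the lockfile is untouched
```

If that diff ever comes back non-empty on a clean checkout, it's usually the lockfile-format upgrade mentioned elsewhere in the thread, not a version change.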
The internet disagrees. NPM will gladly ignore and update lock files. There may exist a way to actually respect lock files, but the default mode of operation does not work as you would naively expect.
1. This guy clearly doesn't know how NPM works. Don't use `--no-save` regularly or you'll be intentionally desyncing your lockfile from reality.
2&3. NPM 5 had a bug almost a decade ago. They literally link to it on both of those pages. Here[^1] is a developer repeating how I've said it's supposed to work.
It would have taken you less work to just try this in a terminal than search for those "citations".
Those Stack Overflow posts are ancient and many major npm releases old; in other words, irrelevant. That blog post is somewhat up to date but also very vague about the circumstances that would update the lockfile. Whatever those are, they certainly don't include npm install updating dependencies to newer versions within the semver range, because it absolutely does not.
I have some sympathy for people who are blindsided by surprising difference between a new tool and their old one.
This post is not eliciting sympathy. They're data consultants who don't understand a very basic and fundamental aspect of the tool they're using and recommending. If you're a consultant, you have a responsibility to RTFM, and the docs are clear that LIMIT doesn't prune queries in BigQuery. Also, the interface tells you explicitly how much data you're gonna query before you run it.
This post also blames Google rather than accepting its own part in this folly, and even admits the docs are clear on the matter. Cost control in BigQuery is not difficult. One of the tradeoffs of BQ's design is that it must be configured explicitly; there's no "smart" or "automatic" query pruning, but that also makes it easier to guarantee and predict cost over time as new data arrives.
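For the record, the explicit configuration can be a one-flag affair. A sketch using the bq CLI (project/dataset/table names are made up):

```sh
# Cap what a single query may bill; the job fails upfront if it would scan
# more than ~1 GB, instead of billing for the full-table scan that LIMIT
# does nothing to prevent.
bq query --use_legacy_sql=false \
  --maximum_bytes_billed=1000000000 \
  'SELECT user_id FROM `myproject.mydataset.events` LIMIT 10'
```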
Yes, the whole consultancy situation really is the icing on the cake: as the customer you pay for (alleged) experts in the field and this is the result...
Wouldn't people who knew their tools perfectly well not even use a cloud service like BigQuery? At the level you expect them to master the tool, they could have created a big query engine themselves. Isn't the whole point of these tools to make things easier?
Sorry, but that's nonsense. Partitioning is THE central cost-controlling mechanism in BigQuery and the docs clearly state this. And it's an easy-to-use feature, so I'm not sure what makes you think using it would be as challenging as building your own query engine.
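A minimal sketch of what that looks like (names are made up): once the table is partitioned, queries that filter on the partition column scan only the matching partitions, and `require_partition_filter` rejects queries that would scan everything.

```sh
bq query --use_legacy_sql=false '
  CREATE TABLE `myproject.mydataset.events_part`
  PARTITION BY DATE(created_at)
  OPTIONS (require_partition_filter = TRUE)
  AS SELECT * FROM `myproject.mydataset.events`
'
```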
> If that is all you're going to put there, then just leave it blank.
Well, I would love to. Unfortunately, neither the Play Store nor the App Store allows you to do that... so "bug fixes and performance improvements" it is, 99% of the time.