For those unfamiliar with the context: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
Same vibe: https://www.youtube.com/watch?v=lOTyUfOHgas
The Onion article is still up; could link that.
https://theonion.com/no-way-to-prevent-this-says-only-nation...
https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
Which one?
I know people have opinions about cooldowns, but they would have saved you from axios, tanstack, and many other recent npm supply chain attacks. If you have Artifactory / Nexus, you probably already have cooldowns, but they're easy to set up if you don't.
Why cooldowns? Most npm (or PyPI) compromises were taken down within hours. A cooldown simply means: ignore any package version released less than N days ago (1 day can work, 3 days is fine, 7 days is a bit of overkill but works too).
How to set them up?
- use the latest pnpm; it added a 1-day cooldown by default https://pnpm.io/supply-chain-security
- or if you want a one-click fix, use https://depsguard.com (a CLI that adds cooldowns + other recommended settings to npm, pnpm, yarn, bun, uv, and Dependabot; I'm the maintainer)
- or use https://cooldowns.dev which is more focused on, well, cooldowns, and also has a script to help set them up locally
All are open source / free.
If you know how to edit your ~/.npmrc etc, you don't really need any of them, but if you have a loved one who just needs a one-click fix, these can likely save them from the next attack.
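For the DIY route, a minimal sketch using plain npm (its built-in `before` config takes a fixed cutoff date rather than a rolling window, so a rolling cooldown means recomputing the date, e.g. in an alias or wrapper):

```sh
# A 3-day cooldown computed at install time: npm resolves only versions
# published on or before the cutoff (GNU date; macOS: date -u -v-3d +%Y-%m-%d).
npm install --before="$(date -u -d '3 days ago' +%Y-%m-%d)"

# Or pin a fixed cutoff in ~/.npmrc and refresh it periodically:
#   before=2026-02-01
```

pnpm's equivalent is the `minimumReleaseAge` setting (specified in minutes), per the pnpm link above.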
Caveat: if you need to patch a new critical CVE, you need to bypass the cooldown, but each of them has a way to do so. In the past few weeks, while I don't have hard numbers, it seems more risk has come from software supply chain attacks (malicious versions pushed) than from new zero-day CVEs (even in the age of Mythos-driven vulnerability discovery).
Yes, props to pnpm for adding a 1-day cooldown by default in v11.
Seems like you dropped something:
> Disclaimer: I maintain depsguard
Yikes. You are correct. Honest truth: I got a few downvotes (after a few more upvotes) and thought this was the cause, but you're right. I didn't think it mattered much; I'll add it back. Had no idea anyone noticed. Fair enough, thanks for keeping me honest.
Edit: added it back, inline.
This is like buying something from the grocery store and then waiting a week to eat it in case the FDA put out a warning about it.
More akin to letting astronauts stay in quarantine for a day in case they caught space bugs.
If every other week I noticed the FDA recalling a popular brand that would have taken over my brain and transmitted my bank password and SSN to a stranger, I might prefer drinking week-old milk.
Edit: not dismissing your analogy, it’s pretty much it.
No it's not. That's a terrible analogy.
If there was a good reason to believe the pop tarts you buy might unexpectedly be contaminated with dioxins, waiting a week would be prudent.
Release escrow.
Teams should be able to say "at least N developers have to agree to a release before it happens." This should be a policy they can control and lock down with a non developer account.
Interesting idea, but there are so many cases of solo maintainers.
I think that npm can have its own cooldown and automated security scan. Socket.dev and StepSecurity both close a gap here by spending tokens to scan new popular packages. Whether they do it for marketing or out of the goodness of their hearts is irrelevant. They don't charge for this service, and it's something I'd expect Microsoft (which owns GitHub, which owns npm) to do.
The idea that 7 days is overkill is crazy to me. Unless you need a specific new feature, you should usually be fine with a dependency version that was released months ago when starting a new project. Ditto for doing regular dep upgrades.
The only issue I see is responding to vulnerabilities, where you want to upgrade immediately. But I think in that case it's fine to require the developer to be explicit in the new version they want.
I agree, but in most recent cases a 1 day cooldown would have been enough.
I added a "how to bypass if you have to patch a zero-day CVE" section to depsguard for all supported package managers.
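For plain npm, if the cooldown is implemented via the `before` config, a bypass is a one-off override (a sketch; the package and version are just examples, and command-line flags override ~/.npmrc):

```sh
# Set the cutoff to "now" for this single install so a freshly published
# security fix is visible despite the cooldown.
npm install some-package@1.2.3 --before="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```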
What are the actual guarantees that go/Rust make that Python/npm don’t? It seems like it might just be that Python/npm are juicier targets? I’m starting to try and avoid all third party packages
I suppose that go's go:generate workflow can also be abused to land a worm like the ones spreading via npm, as you can build programs that just scrape the whole hard drive for git projects and patch the go.mod dependencies there, and you could also just write this in go as a toolchain script, for example.
NPM's Achilles heel is the pre/postinstall step, which can run arbitrary commands and shell scripts without the user having any way to intervene.
Dependencies must be run in isolated chroot sandboxes or, better, inside containers. That would be the only way to mitigate this problem, as the filesystem of the operating system must be separated from the filesystem of the development workflow.
On top of that, most host-based firewalls are per-binary instead of per-cmdline. That leads to warnings and rules relying on e.g. "python" or "nodejs" getting network access allowlisted, instead of, say, "nodejs myworm.js". So firewalls in general are pretty useless against this type of malware.
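To make the pre/postinstall point concrete, a minimal sketch (the package name and script are hypothetical): any transitive dependency can ship a manifest like this, and a default `npm install` executes it with the installing user's privileges.

```json
{
  "name": "innocent-looking-lib",
  "version": "1.0.3",
  "scripts": {
    "postinstall": "node collect-and-exfiltrate.js"
  }
}
```

npm does have an off switch (`npm config set ignore-scripts true`, or `ignore-scripts=true` in ~/.npmrc), at the cost of breaking the minority of packages that legitimately need lifecycle scripts.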
`go:generate` is for the package provider; the command never runs when someone runs `go install` or `go get` on the package.
Note that the NPM worms are spreading because the package providers are developing their libraries without noticing a malicious dependency. It is not users/consumers spreading the worm; it is developers spreading it.
Your mismatch is that you think in policies, not assessments here. Nothing in my normal go workflow will ask me if I want to run "curl download whatever from the internet" when I run go build.
Though I agree with the difference in workflow, there is not a single mechanism in go catching this. go.mod files can be just patched by the worm, and/or hidden behind a /v123 folder or whatever to play shenanigans on API differences.
go:generate is done at dev time, not at build time.
Actually, bindings are usually generated like that, at build time (though with a build cache that corrupts in ways nobody quite understands).
Examples that come to mind: webview/webview, webkit, cilium/ebpf and most other CGo projects that I have seen.
> It seems like it might just be that Python/npm are juicier targets?
Attackers go where the victims are. Frontend is a monoculture with the vast majority using NPM; backend, less so. This isn't an excuse for NPM, but another strike against it.
You could also argue that the attacks make a deeper point about frontend vs backend devs, but I won't go there.
Why would you even imply something like that?
I mean... most frontend devs I've worked with are crayon eaters.
Is this a dogfooding joke?
They feel the need to compete given that jokes about "backend" devs write themselves
Last I checked npm had 2FA for publishing, but cargo didn't. I don't think cargo is any better than npm, just not as attractive a target.
To be honest Rust has the exact same supply chain attack pattern - it's just newer and more maintained at the moment. Give it a decade.
Rust doesn’t have post install scripts
It has build.rs, which has essentially the same problems.
They have build.rs (https://doc.rust-lang.org/cargo/reference/build-scripts.html)
It has build.rs that will run as soon as you compile the dependency. That's not the same thing but pretty close to a post install script: it's very likely to run.
There is build.rs, proc macros are unsandboxed, and lastly you install the binary so that you can run it. Even if the build and install were fully sandboxed, the binary could still do malicious stuff if ran.
Even without post-install script, a malicious payload could be hiding in some function and just wait until the developer invokes `cargo run`. Not that many people audit the crates they pull into their projects.
Yeah, no shit: if you download malicious code from the internet and run it on your computer you will get pwned, no matter if it's from a package manager, a zip file, or a submodule.
However the current npm vulns used a post install script.
I maintain that NPM malware use postinstall scripts just because they exist and are convenient. Had NPM not had postinstall scripts, the malware would have used a different mechanism and been almost exactly as effective.
Supply chain attacks are available to every language and framework that uses dependencies or modules you don’t control.
Programs in Rust (or almost every other language) normally have fewer dependencies by 2 or 3 orders of magnitude.
And that number tends to reduce even more when the ecosystem matures.
Generally, other package managers aren't great either. Notably, crates.io / cargo has some of the same issues as NPM and the verbiage of their excuses for not fixing these problems is oddly similar.
Something fascinating about the design and architecture of programming languages and their surrounding ecosystems is the enormous leverage that they provide to the "core team":
For every 1 core language developer[1]...
... there may be 1,000 popular package developers...
... for which there may be 1,000,000 developers writing software...
... for over 1,000,000,000 users.
This means that for every corner that is cut at the top of that pyramid, the harms are massively magnified at the lower tiers. A security vulnerability in a "top one thousand" package like log4j can cause billions of dollars in economic damage, man-centuries of remediation effort, etc.
However, bizarrely, the funding at the top two levels is essentially a pittance! Most such projects are charities, begging for spare change with hat in hand on a street corner. Some of the most used libraries are often volunteer efforts, despite powering global e-commerce! cough-OpenSSL-cough.
The result is that the people most empowered to fix the issues are the least funded to do so.
This is why NPM, Crates.io, etc... flatly refuse to do even the most basic security checks like adding namespaces and verifying the identity of major publishers like Google, Microsoft, and the like.
That's a non-zero amount of effort, and no matter how trivial to implement technically or how cheap to police, it would likely blow their tiny budget of unreliable donations.
The exceptions to this rule are package managers with robust financial backing, such as NuGet, which gets reliable funding from Microsoft and supports their internal (for-profit!) workflows almost as much as it does external "free" users.
"Free and open" is wonderful and all, but you get what you pay for.
[1] Most of us can name them off the top of our heads: Guido van Rossum, Larry Wall, Kernighan & Ritchie, etc.
You appear to have missed that NPM is owned by Microsoft.
In addition, crates.io has not flatly refused to support namespaces, there's an entire accepted RFC for it: https://github.com/rust-lang/rfcs/pull/3243
At the same time, note that namespacing does nothing to prevent any sort of problem here. Namespacing is great for package organization and making provenance more deliberately obvious, but beyond that it's not a security measure.
> NPM is owned by Microsoft.
I did not miss that.
The "culture" of NPM was firmly established long before the acquisition by Microsoft.
Similarly, there clearly isn't the same feeling of "ownership" over NPM and its giant pile of anonymously published packages as there is over NuGet where a substantial fraction of the traffic is Microsoft customers downloading Microsoft packages for Microsoft DotNet development on Microsoft Visual Studio for Microsoft Windows Server.
It is 100% up to the package manager's steward to control how ownership of packages and namespaces are granted.
Maven Central has existed for decades and the number of incidents of people stealing namespaces is minimal.
One can't simply publish a package under the groupId "com.ycombinator" without having some way to verify that they own the domain ycombinator.com. Then, once a package is published, it is 100% immutable, even if it has malicious code in it; certainly, that library is then flagged everywhere as vulnerable.
It baffles me that NPM for so long couldn't replicate the same guardrails as Maven Central.
How does that protect against credential theft? MFA required to sign published releases?
That is another important layer. Maven Central is not immune to credential theft. If a publisher token is stolen, an attacker may still be able to publish a malicious new version until the token is revoked or the account is suspended after reporting the problem to Sonatype.
But in the Maven/Gradle ecosystem, most projects pin exact dependency versions. Support for version ranges and dynamic versions exists, but they are generally avoided because they hurt reproducible builds. That means a malicious new release does not automatically flow into most consumers' builds just because it was published.
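npm has a knob for the same discipline, for what it's worth: the `save-exact` config makes installs record exact versions instead of semver ranges (a sketch; the package name is hypothetical, and lockfiles still apply on top):

```sh
# Write "foo": "1.2.3" into package.json instead of "foo": "^1.2.3".
npm config set save-exact true
npm install foo
```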
I'd go as far to say that NPM should:
1. Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).
2. Disable version range support out of the box; users must pass an explicit --enable-style flag on the command line every time (see the illustration after this list).
3. Remove support for install scripts completely. If someone wants to publish a ready-to-run software, there are plenty of other mechanisms.
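To illustrate item 2 with today's semantics (dependency names are only examples): an exact pin can never silently pick up a new release on a fresh install, while a caret range can.

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "follow-redirects": "^1.15.0"
  }
}
```

Here `left-pad` resolves to 1.3.0 forever, whereas `follow-redirects` may resolve to any future 1.x the moment it is published (modulo lockfiles), which is exactly the channel the recent worms rode in on.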
> Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).
Who the heck says everyone who publishes a library has a domain? That seems absurd.
And domains can change hands legitimately.
Sonatype allows "io.github.<username>" as a valid groupId and has a process to verify ownership. I am sure other providers like GitLab can work on this.
You're missing the biggest root cause though, and that significantly hinders how well this translates between languages: the Java community has settled on fewer but large monolithic dependencies, whereas the JavaScript community has settled on many small composable dependencies (for good historical reasons, but that's a topic in and of itself).
This directly influences how well e.g. version pinning works. In the Java world, package versions are _relatively_ independent from each other and have few transitive dependencies, and as such version conflicts are relatively rare. This means you can get away with full pinning of all dependencies, with the occasional manual override of a conflicting transitive dependency.
This doesn't work in JavaScript. The dependency ecosystem is massively intertwined; if every library specified exact versions you'd end up with literally hundreds of conflicts to resolve. That's not feasible. As a result, they've chosen the middle ground of using lock files in addition to version ranges.
This also hurts the effectiveness of verified namespaces: when packages come from hundreds of different sources, you're not going to notice 1 or 2 sketchy ones in there.
Other consequences of the big monolithic packages in Java are that updates tend to be less frequent, and more often come from large reputable vendors. Both of these help to reduce the problem too.
While the JavaScript toolchain can definitely learn a lot from the Java toolchains, the problems it needs to solve are not the same, and thus solutions don't translate 1-1.
At least I hope that they'll get rid of install scripts; that's such low-hanging fruit that it really should've been done a decade ago.
> At least I hope that they'll get rid of install scripts; that's such low-hanging fruit that it really should've been done a decade ago.
How will that help? It's just going to break things that legitimately require them.
Instead of being infected upon running "npm install", you'll just get infected upon running "npm run" instead. The former is slightly more reliable but fixing that is just kicking the can down the road. Maybe we'll have a few days before the payloads get rewritten.
Also....
Maven doesn't have "preinstall, install, postinstall" hooks, or Rust's "build.rs", executing arbitrary code during installation.
The code that executes with Maven is in your pom.xml, not some hidden code from a transitive dependency.
That alone is a major design flaw in both npm and cargo.
Java is boring, because it works. People don't like boring stuff. It's more exciting to play the Russian roulette on each install!
None. They just have smaller target populations.
Part of the point the article makes is that most other popular languages have a comprehensive standard library. JS has an astonishingly small one. Rather than having one vetted set of libraries that ships with the language, applications either need to roll it themselves or pull from a 3rd party package repository. We've drilled "don't reinvent the wheel" into people, so they tend to reach for packages. That's not necessarily a bad thing, but it often means they're pulling in more code than they need. The JS ecosystem has also favored smaller modules, so you need many of them. And everyone builds on top of that, leading to massive growth in dependency graphs. It's a huge surface area for things to go wrong, intentionally or not.
With many other languages, you have a lot of functionality out of the box. Certainly, there have been bugs and security issues, but they're a drop in the bucket compared to what you see in the JS ecosystem. With other languages, you have a much smaller external dependency graph and the core functionality comes from a single trusted source.
Why Python, tho, in that case? Its stdlib is quite robust. Surprisingly so in some areas.
I'm not convinced that Python should be the standard for package management either. Earlier this week I was trying to publish a Python package for the first time, wrapping a Rust library I wrote (for use only on Linux and Python 3.12+), and it literally took me hours to get from "I have a wheel that I can import and it works on my system" to "I have published that wheel and can install the package from PyPI on the set of systems that I'm trying to support and it actually works". Everything I've heard about this indicates that the situation for Python packaging is literally better than it ever has been before with the current tooling, so I can't even imagine how bad it was for the decades before. In comparison, having literally never touched npm before, I was able to publish a wrapper around the same library and validate that it was working in maybe 10 minutes (most of which was spent not realizing that a certain tool was failing with a vague "file not found" error because I hadn't installed npm yet).
I'm not saying that npm is doing everything right, but I suspect that beyond the obvious low-hanging fruit that we hear about pretty consistently with npm there's probably a long tail of less obvious stuff that can be exploited that will not be specific to npm. The fundamental problems with supply-chain vulnerabilities aren't going to go away if npm magically became pip or go modules overnight.
What important functionality do you feel is missing from the commonly used JS environments (node and browser) that is causing people to install it as a third party dependency?
The issue isn’t that the functionality doesn’t exist, it’s always backwards compatibility with versions where it did not yet exist.
> Part of the point the article makes is that most other popular languages have a comprehensive standard library.
Both the Browser and Node.js standard libraries are fairly extensive. I don't think there's much you can do with other languages that you can't do with Node.js. And as a lot of newer languages have demonstrated (like Zig and Hare), you don't need an extensive one.
It used to be true. The early days of node were pretty paltry. I think a lot of developers and projects have picked up these dependencies by habit and accretion and have never factored them out.
My pet peeve is when a developer picks up a library for just a few lines of code, and it turns out that this library picks up another one that's not even relevant to its core domain. Whenever you get to the leaves of the dependency tree, it usually turns into a joke. Byte-sized libraries everywhere.
Like axios, which decides in turn to depend on the "follow-redirects" library. IMO, the best move would be for axios to vendor that code. Same with "proxy-from-env". Just tiny libraries scattered all over the web. Something like axios should depend purely on the runtime library.
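The runtime really does cover this case now: redirect handling is built into the fetch API that ships with Node 18+ (a sketch of the idea, not axios's actual internals):

```js
// Built-in fetch follows redirects by default; no follow-redirects dependency.
const res = await fetch('https://example.com/old-path', { redirect: 'follow' });
console.log(res.url, res.status); // res.url is the final URL after redirects
```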
"What are the actual guarantees that <guy leaving his keys on his dashboard> make that <guy leaving his keys on an illuminated blinking sign outside his house> don't make?"
There has been a lot of pain at my various jobs installing a safe global npm config on every developer machine, asking people not to disable it, checking it with MDM tools. A safer out-of-the-box configuration is long overdue.
Just don't use npm. Use a package manager which doesn't execute postinstall scripts by default. The switch is incredibly simple.
Which package manager is that, and what caveats does it offer?
Pnpm - installs are faster to boot. We haven’t missed anything
pnpm
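For reference, a sketch of how recent pnpm (v10+) handles this: dependency lifecycle scripts are blocked by default, and the allowlist lives in pnpm-workspace.yaml (the package names below are just examples):

```yaml
# pnpm-workspace.yaml
# Only these dependencies may run their build/postinstall scripts;
# every other package's lifecycle scripts are skipped.
onlyBuiltDependencies:
  - esbuild
  - sharp
```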
What do you mean by safe config? If you're trying to mandate a cooldown period or a whitelist/blacklist of packages, the correct approach is to configure a company-controlled registry that pulls from the upstream npm registry while enforcing your desired policies.
There is no legitimate reason why postinstall scripts need to exist. The npm team needs to grow up and declare "starting with npm version whatever, npm will only run postinstall scripts for versions of packages published before ${today}".
This doesn't really fix the issue though because package code is also executed at build time and during testing. Just maybe restricts the scope a little bit.
If you look at the last N npm worms, they all used postinstall scripts.
Is that even true?
Shai-Hulud and its variants
https://www.stepsecurity.io/blog/mini-shai-hulud-is-back-a-s...
So N=1? 2? 3?
At least 3 that I can remember off the top of my head in the last couple of months. If you look further back you will find more.
There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
It's childish to believe that because you can't fix everything you shouldn't fix anything. Defense in depth.
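The CI half of that defense is cheap, for what it's worth (`--ignore-scripts` is a standard npm flag; everything else about the pipeline is up to you):

```sh
# Install dependencies in CI without running any lifecycle scripts, so a
# compromised postinstall payload never executes. Packages that genuinely
# need a build step must then be rebuilt explicitly.
npm ci --ignore-scripts
```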
> There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package)
You don't need to test a compromised package to have it execute code. Importing it anywhere in your tests is enough, even transitively.
It's for sure less likely to run but I doubt it's significantly different in practice.
Install scripts are a distraction, just like package signatures are a distraction. Adding/removing either feature has no significant impact on the wormability of this package ecosystem. Installed npm code is run, with nearly zero exceptions.
Surely every layer of defense in depth is a distraction except the one that prevents the problem.
A lot of it ends up bundled to run in a browser though, and doesn't end up running in Node.js
The installed code may be run in different settings, under a different user, with different privileges. Say, it may not run in CI/CD at all, or run only with the test user's privileges.
Postinstall scripts run at install time, with installer's privileges.
> There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
...and only if you invoke it with --dangerously-run-postinstall-scripts; otherwise it will report an error if a postinstall script is found.
This is definitely going to affect any packages that need to link to native code and/or compile shims, but these are very few.
Security issues aside, they are a nightmare in enterprise environments where internet and OS access is heavily restricted.
There is also not much legitimacy to the fact that Rust packages can run unsandboxed code when they build themselves.
I feel like it's harder to hide malicious stuff in Rust build scripts.
With respect, post-install scripts are a total red herring. You're alarmed by them because they are code controlled by someone else that runs on your box, and they could do something bad -- yes, they are, and yes they could.
But so is the regular code in those packages! It won't run at install time, but something in there will run -- otherwise it wouldn't have been included in the dependencies.
Thinking that eliminating post-install scripts will have more than a momentary impact on exploitation rates is a sign of not thinking the issue through. Unfortunately the issue is much more nuanced than TFA implies -- it's not at all a case of "Let's just stop putting the wings-fall-off button next to the light switch", it's that the thing we want to prevent (other people's bad code running on our box) cannot be distinguished from the thing we want (other people's good code running on our box) without a whole lot of painstaking manual effort, and avoiding painstaking manual effort is the only reason we even consider running other people's code in the first place.
The time difference does matter though. There were some recent worm attacks in NPM that spread very quickly because they used post-install. I don’t remember how long it took NPM to block the packages but it was probably around 30 minutes or so? If it wasn’t for post-install then that same attack would have a much slower spread and thus a smaller blast radius.
> There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
I audited several postinstall scripts recently in popular packages. They seem to be mostly about using native binaries: downloading them, detecting if the platform is compatible, linking to the binary directly instead of having it bootstrapped by node, working around issues in older versions of npm, etc., since dev toolchains (e.g. esbuild) are now built in compiled languages and distributed as binaries via the npm registry. If you are on a recent version of node/npm and a common/recent OS/platform, you should be able to disable all the postinstall scripts without legitimate issue.
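A sketch of the shape those legitimate scripts tend to take (all names are hypothetical, not any real package's code):

```js
// postinstall.js: pick the prebuilt binary for this platform, else bail out.
const { platform, arch } = process;
const supported = new Set(['linux-x64', 'darwin-arm64', 'win32-x64']);
const key = `${platform}-${arch}`;

if (!supported.has(key)) {
  console.warn(`no prebuilt binary for ${key}; falling back to source build`);
  process.exit(0); // don't fail the whole install on unsupported platforms
}
// ...then link node_modules/<pkg>-<key>/bin/tool into place.
```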
Thoughts and Prayers to those affected
We wish them well.
I’m using nix for managing npm dependencies in a project and it seems like I accidentally got some protection from these attacks because of the nix sandbox. Looks like I got more than I begged for.
I use C++ and Conan with my own recipes and pre-built artifacts.
This mitigates things to a great extent.
I do not know who thought that having your dependencies depend on the internet, with a zillion users doing stuff to each package, was a good idea for enterprise environments...
It is crazy how much can get endangered this way.
It's a cultural issue, always feeling the urge to update to the newest possible package for things that are already working fine, without even reading the changelog to see if it's applicable. Cooldowns are only a way to force a bit of patience onto the maintainers... and they work.
That, and package owners updating things that need no updating just to avoid looking stale/unmaintained. I can use Lisp packages without changes for 15 years just fine, but a JS one is unmaintained! Oh no! Even though it was finished 15 years ago. So they add nothing, sometimes a breaking change, just to bump a version on npm and GitHub and look maintained. And then everything updates.
Kudos to the author: this article reads like something out of The Onion.
Ah yes, only `npm` has ever suffered an attack. Ever.
RubyGems: https://www.sonatype.com/blog/anatomy-of-the-rubygems-rest-c...
PyPI: literally the latest attack included publishing malicious packages on PyPI.
XZ Utils, part of nearly every Linux distribution, nearly had code merged in to backdoor SSH: https://www.akamai.com/blog/security-research/critical-linux...
It is just easy pickings to blame npm specifically. Yes, while they do share some part of the blame, no package manager is immune from attack and certainly not ones where the attackers exploited being able to extract out secrets from a developer's environment variables or files. Seems more like developers should be managing their secrets better?
I also find that using the meme that this title snowclones is in bad taste too.
Security doesn't exist in the absolute. It's about relative effort. Exploiting Debian's package management requires quite a bit of effort; NPM, while being funded by Microsoft, only needs a token to be stolen. And postinstall scripts were decried as a security risk for a long time.
With the recent high-profile attacks on PyPI packages, it’s no longer true that npm is the “only package manager where this regularly happens”.
In fact, pip is much more dangerous than npm because it lacks a lockfile. uv fixes that, but adoption is proceeding at a snail’s pace.
UV adoption is happening, though. NPM is still the only name in town.
Huh? uv is a package manager, not a registry.
In the JS world there is plenty of competition among package managers: pnpm, yarn, and bun are all viable alternatives to npm the package manager.
Public registries for languages tend to coalesce around one service. Nobody wants to publish their library to 4 different registries.
I don't know about snails, but everything I'm in contact with has moved over to uv, and I can't imagine I'm the only one.
Apparently it does now: https://packaging.python.org/en/latest/specifications/pylock...
https://pip.pypa.io/en/stable/cli/pip_lock/
But who cares about pip, uv is here.
I think people are overlooking the fact that the javascript ecosystem is run by perpetual beginners who are probably using 5 different SaaS credential managers and still manage to check their creds into a public git repo. No wonder there are so many breaches. Rust developers, OTOH, are typically experts and don't get pwned so easily.
No surprise here. That's what you get when you have a language/ecosystem where core devs refuse to fix fundamental flaws, cuz for them breaking backwards compatibility is the worst crime that can ever be committed. And so all that happens in JS-land will eternally be layering lipstick on the pig in the cesspool. Too afraid of going through something similar to the Python 2 -> 3 fiasco, I guess because too many web devs and site admins would be incensed at being forced to fix their broken universe; as if it isn't already broken in its current condition.
I really don't understand why the npm project cannot embrace PGP as an interim "good enough" solution.
The NIH mentality in the ecosystem would result in a JavaScript pgp library which itself would be an npm package and subject to supply chain attacks. lol.
A good part of it is already implemented in web crypto, which is supported by browsers and node. There is a chance that npm could implement something there without extra dependencies. Maybe I'm too optimistic?
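A sketch of the verification primitive using only built-in WebCrypto (assumes Node 19+ or a modern browser, where Ed25519 is supported; distributing keys and signatures, the genuinely hard part, is left out):

```js
// Verify a package tarball signature with zero npm dependencies.
const { subtle } = globalThis.crypto;

async function verifyTarball(publicKeyRaw, signature, tarballBytes) {
  const key = await subtle.importKey(
    'raw', publicKeyRaw, { name: 'Ed25519' }, false, ['verify']
  );
  return subtle.verify('Ed25519', key, signature, tarballBytes);
}
```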
Would that help? Most of these recent attacks, the attackers have gained access to the system that builds the packages. So it would have just signed the malicious build the same.
Nope, doesn't help. Signatures and removal of script points have zero net effect on the value of the target that the ecosystem has, or how easy/hard it is to write a worm. The package code gets run (this is statistically true), and the exploited developers/environments will sign packages (this is also statistically true).
Probably the same reason that pretty much no other package manager (or even major email provider, when email is ostensibly the most famous use-case for it) has adopted it: the UX is atrocious.
The answer is LLM inspection. Which, sadly, raises the cost of software, especially once evil LLMs start hiding the backdoors better. Long term the answer should be CHERI, in my opinion.
These satire articles on cybersecurity are really entertaining.
The other one a few days ago was also good: https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes...
Do not fucking use npm. Stay the fuck away from it. Want to write JS? AI can now write vanilla JS for you with no libraries. Own your code.
...so far...
Vendoring using git submodules should be a robust mitigation for this problem.
Subtree is better for this case: you want to encourage actual reading before running. Reading won't catch everything, but it catches a lot, and the burden isn't as high as people always complain about before they try it.
This feels like the modern analog of the king, the mice, and the cheese. What cats do I need to bring in to eat my git submodules?