Though this outage may be more related to the copy.fail upgrade cycle, it reminds me of a thought I've had recently in respect of agents.
In the UK they have this issue called "TV pickup" (https://en.wikipedia.org/wiki/TV_pickup): everyone in the UK watching a popular TV show gets up to boil a high-powered tea kettle at the same time during an ad break. This causes a temporary surge in electricity demand and leads to real outages. It was a mystery at first, but it's now accounted for.
I suspect the global internet is facing an "agent pickup" problem, where significant changes (e.g., releases of new frontier models or new package versions) put unpredictable pressure on arbitrary infrastructure as millions of distributed agents act to address the change simultaneously.
We're at the stage where we blame AI for anything as a first reaction?
(Love the TV pickup story. I've also thought of it in other situations.)
Indeed. It is far more likely to be the copy.fail issue.
I wasn't blaming this outage on that in particular, just making a more general observation in line with the post. I'll make that clearer.
Well, that and the rush to upgrade for copy.fail.
Has Ubuntu published patches yet?
Yes, but I can currently only load the page about them via the Wayback Machine: https://web.archive.org/web/20260430191621/https://ubuntu.co...
Patch published to disable the affected module. No patch for the module itself yet.
That got me thinking: I don't think I've ever compiled af_alg into any of my Linux kernels.
Now, I worry about the Linux user/mount namespace code... because I run the Steam client, which Valve effectively forces people to have enabled in their kernel, because they don't want to (or don't know how to) craft "correct" ELF64 binaries: namely, compiling/linking with "-static-libgcc -static-libstdc++", maximizing static linking, and refactoring the source code a bit with the pre-processor to avoid symbol collisions.
In the US we have the Super Bowl Flush: https://medium.com/nycwater/the-big-flush-on-super-bowl-sund...
It's literally the plot of https://en.wikipedia.org/wiki/Flushed_Away
> leads to real outages.
Um, no.
I daresay you could find the odd example, as for any grid in a stressed situation, but it's not like we turn to each other every week in the dark and say "Oh, it must be half time at the Manchester United match".
I had the same impulse (or at least that copy.fail induced many to upgrade at the same time). However, it might be a "pro-Iran hacktivist group" according to
https://www.theregister.com/2026/05/01/canonical_confirms_ub...
"Canonical says its web infrastructure is under attack after a pro-Iran hacktivist group instructed its members to target the open source giant."
Perhaps more to do with extortion rather than activism. (I have no idea how accurate theregister is on this story.)
It appears to be a pro-Islamic Republic of Iran DDoS crew
https://news.ycombinator.com/item?id=47975729
Maybe they could use this DDoS attack as their 17th round technical interview. Any candidate who successfully mitigates the attack would then make it to the 18th round. Win win!
Do they finally meet a human being with an explanation on the position on the 18th round?
Depends on their high school GPA.
I did really well in Kindergarten, so I made it to the 22nd round.
They told me my grandpa was too dumb at round 47. I felt like I was close.
I got all the way to round 53, but it turned out that one of my semiaquatic tetrapod ancestors from the Carboniferous Period didn't perform on land as well as they would have liked, so that was it for me.
While the timing with the copy.fail patches mentioned by a few comments here does seem suspicious, I have seen this recurring over the last few weeks: packages.ubuntu.com was barely reachable on some days, causing apt-get to take forever to update the system. They have been struggling hard recently, it seems. Best of luck to the people having to deal with this mess on a holiday!
The point about the coincidental timing with the copy.fail patches is that by DDoSing the upgrade mechanism of one of the most popular distributions, you extend the window during which certain systems remain vulnerable, so that they can be exploited.
The point is that these apparent DDoSes have been going on for weeks, they are not necessarily related to copy.fail.
Tinfoil hat mode: a competitor wants to exploit copy.fail on some Ubuntu servers, and is DDoSing Canonical so that they can't update and thus patch the vuln.
s/competitor/intelligence services/
+1, it hasn't even been 24 hours and I already see these stupid CyberSec companies trying to squeeze themselves into this.
Seems reasonable to assume it has something to do with the recently publicized exploits. More likely, though, this is an extortion attempt by criminals rather than a competitor.
Double tinfoil hat mode: an attacker learned of my plan to finally update my personal computer out of 20.04 today and is DDoSing canonical so I can't do that and I remain vulnerable to the backdoors they've found.
The plot thickens...
you are the center of all this, I knew it.
Why a competitor? Criminals, secret services, nation-state adversaries...
If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly; it is not a special bug. If anyone gets compromised by that vuln, their node architecture was broken anyway, and patching copy.fail doesn't help.
I thought copy.fail was a privilege escalation exploit: become root from a regular user? Am I missing something?
How would "node architecture" make people vulnerable to this?
You have to have shell access to a victim first right? Or am I missing something?
Yeah you need native code execution, and if you have AF_ALG access there is clearly no sandboxing in place. At that point it's game over on Linux, there are too many bugs. Even if you fix all the known ones in the current kernel, by the time the version with those fixes is qualified and released (not to mention, the machine must reboot), new LPEs have been discovered.
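For anyone wondering what "AF_ALG access" means concretely, here's a minimal sketch in C (my own illustration, not from any advisory; it assumes a kernel with the crypto userspace API compiled in) of the one socket call that reaches this interface. A sandbox that denies this call hides the whole attack surface:

    /* Probe whether AF_ALG, the kernel crypto API socket family, is
     * reachable from this process. Unsandboxed code can always try this. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>   /* struct sockaddr_alg */

    int main(void) {
        struct sockaddr_alg sa = {
            .salg_family = AF_ALG,
            .salg_type   = "hash",    /* request a hash transform... */
            .salg_name   = "sha256",  /* ...driving kernel crypto code directly */
        };
        int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
        if (tfm < 0 || bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("AF_ALG");         /* blocked, or not compiled in */
            return 1;
        }
        puts("AF_ALG reachable: this kernel attack surface is exposed");
        close(tfm);
        return 0;
    }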
To convince me Linux is full of kernel LPE bugs, can you share some of the bugs?
https://gtfobins.org
Look at the CVE database. Most of those UAFs are LPE. Many of the OOBs and many of the race conditions too.
Then look at the KASAN reports on the syzkaller dashboard. Many of them are LPE.
Then try pointing your LLM at the codebase and saying "find an LPE". It will find as many as you want (you will exhaust your tokens long before it stops finding bugs). 99.99% of them will be bogus, so you need a way to evaluate them at scale; that's currently the weakest part of the approach, but we'll get better at it.
In what way is it "not a special bug"? It's a publicly known root-from-RCE exploit. Those cannot be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected and slow to patch. I could find places running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
It's not RCE, it's an LPE in an obscure corner of the kernel attack surface that no sensible application depends on. They absolutely are a dime a dozen.
Even just in AF_ALG there have been several such vulns fixed in 2026 already. Kernel wide probably hundreds. It's true that most of them will be harder to exploit than this one but that just means you need to prompt your AI a bit harder to get an exploit. (To be fair, in a lot of cases it's gonna be hard to escalate privs without crashing the machine).
Ubuntu has userns restrictions now, which take away the main sources of LPEs (random qdiscs, nftables, all that garbage), but there are still huge numbers of these vulns. This is why platforms that run native untrusted code have extreme sandboxing. Note that Android and ChromeOS aren't affected because they already knew this code was broken and hide it from unprivileged workloads.
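To make the userns point concrete, a rough sketch (again my own illustration) of the single unprivileged call those restrictions gate. Creating a user plus network namespace grants CAP_NET_ADMIN inside it, which is exactly what exposes the qdisc/nftables code paths to local attackers:

    /* Attempt to enter a fresh user+network namespace as an
     * unprivileged user. On a restricted Ubuntu kernel this fails
     * with EPERM, cutting off the qdisc/nftables attack surface. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNET) < 0) {
            perror("unshare");  /* restriction in effect */
            return 1;
        }
        puts("userns created: root-equivalent caps inside, qdisc/nftables reachable");
        return 0;
    }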
You can't run untrusted code on Linux without either a very very carefully designed sandboxing layer (like Android/ChromeOS) or virtualization. copy.fail is just one among tens of thousands of reasons for this, and it's a pretty uninteresting one at that.
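And on the sandboxing side, a simplified seccomp-BPF sketch (x86-64 assumed, raw BPF rather than libseccomp; real sandboxes like Android's are default-deny rather than default-allow, so treat this as illustrative only) showing how a single filter rule can make socket(AF_ALG, ...) fail outright:

    /* Install a seccomp filter that denies socket(AF_ALG, ...) with
     * EPERM while allowing every other syscall. */
    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/socket.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void) {
        struct sock_filter filter[] = {
            /* A = syscall number */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
            /* not socket(2)? jump to ALLOW */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_socket, 0, 3),
            /* A = first syscall argument (the address family) */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, args[0])),
            /* family != AF_ALG? jump to ALLOW */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AF_ALG, 0, 1),
            /* deny: socket(AF_ALG, ...) returns -1 with errno EPERM */
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
            /* allow everything else */
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = { .len = sizeof filter / sizeof *filter,
                                   .filter = filter };

        /* mandatory before an unprivileged process may install a filter */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
            perror("seccomp");
            return 1;
        }
        printf("socket(AF_ALG, ...) now returns %d\n",
               socket(AF_ALG, SOCK_SEQPACKET, 0)); /* prints -1 */
        return 0;
    }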
What is "special" depends on your usecase but for my job it's mostly about stuff that's exposed to KVM guests. Biggest source of concerning vulns for us is probably vhost. I expect there are also lots of undiscovered and scary vulns in places like virtiofs, vfio, DAX, and wherever we do device passthrough.
> I could find places running containerized services and exfiltrate secrets from parallel services, no?
Yes. Regardless of copy.fail. Cloud providers don't do that without a VM layer. (If yours does, you need to switch).
The cope of some people is insane. Why even have UID:GID? All you need is 0:0. I always tell people to run everything as root because there is literally no point.
They're not exactly a dime a dozen, but LPE bugs in Linux (and common Linux distros) are easily common enough that nobody sane relies on user isolation as a serious security boundary.
Clouds use VMs as the security barrier, which is also not always 100% perfect, but is much better.
It could be useful as part of an exploit chain but generally once you've got to local code execution it's not going to be difficult to get further.
A "special" bug would be something that defeats a security barrier that people actually use, e.g. something that works remotely, or as you say - a hypervisor hack.
My mind immediately went to chaining this with another recent vulnerability in the Ninja Forms - File Upload plugin [0]
> This makes it possible for unauthenticated attackers to upload arbitrary files on the affected site's server which may make remote code execution possible.
So, upload and execute a script that triggers copy.fail, and even if you're only executing as www-data or another restricted user that "can't" sudo -- suddenly, uid=0!
To repeat the refrain... I'm so tired.
[0] https://www.wordfence.com/blog/2026/04/attackers-actively-ex...
Yes but what I'm saying is that copy.fail is a minor detail in this scenario.
If you are running Ninja Forms you need to run it in its own VM, so that if it gets compromised _you don't care if it has uid=0_.
You need to do that regardless of copy.fail. Now that you've patched copy.fail, there are loads and loads of other vulns that can be used the same way.
It isn't a competitor, it is Iran.
Related ongoing thread:
Pro-Iran crew turns DDoS into shakedown as Ubuntu.com stays down - https://news.ycombinator.com/item?id=47975729 - May 2026 (59 comments)
This seems to be pretty targeted, and with services like Livepatch affected, this could indeed be an actor DDoSing to keep the copy.fail patches from rolling out.
Noticed it because snap didn't work, snap has its own status page just fyi: https://status.snapcraft.io/
Frustrating, because the Slack snap is broken, so every day you have to downgrade it, and I guess you can't do that without connectivity.
This might be the incentive I need to finally purge snap.
Just move to flatpak, much nicer to deal with
In my testing I find the exact reverse. I much prefer snap to flatpak.
Snap is mostly limited to Ubuntu and has to run as a daemon.
Flatpak gives me cross-platform/cross-distro software directly from (or certified by) the project or company, with additional security sandboxing, and it doesn't open up potential security issues.
I don't have to wait for a distro package, and yet there are no system integration concerns.
It also works great for atomic distros (SilverBlue, etc)
Snaps work on anything that has systemd, so I don't quite know where you got the idea that they are mostly limited to Ubuntu.
I got rid of both and my system is much better for it. The only thing I still use that is distributed in such a format is AppImage, and mainly because it has never given me trouble.
Both fail hard for so many things. If you need any sort of hardware acceleration, just use an rpm/deb.
Snap recently got much more polished.
I used to have to find a script to purge excess old snaps that would fill up my hard drive. Now Ubuntu only keeps two versions of each snap.
I was wondering why the script never had to clean up more than one version, even when I took longer between running updates.
We are so broken as a society that DDoSing Ubuntu is now a thing.
It's almost certainly related to preventing the roll out of copy.fail fixes. Someone held the capability in reserve until they had a good reason to use it.
Explains why I needed to torrent Ubuntu 26.04 today. Even navigating to the alternative/mirror page to grab the torrent file was painful.
Anyone from Canonical shared any pcaps of the attack yet? Or perhaps a summary of packet types, sizes, payloads, TCP/IP header characteristics? State table statistics?
I like to imagine it's returning a 500 error response asking you to email rhonda@ubuntu.com
Could this DDoS be affecting some components of https://ppa.launchpadcontent.net? I know they're supposed to be down and up again, but I still get errors when I update Ubuntu :(
No mitigation can stop Aisuru. Let's hope it's not that because the only end in sight is them getting bored and moving on to the next victim.