For people wondering whether to migrate now: the practical question isn't "is a CRQC imminent" (it isn't), it's whether your encrypted messages have a useful lifetime longer than the optimistic deployment timeline.
If you encrypt a one-off email with a 5-year confidentiality requirement, harvest-now-decrypt-later actually matters. If you're encrypting backups that get rotated every 90 days, it doesn't.
The hybrid construction (Kyber/ML-KEM + X25519) is nice precisely because it's a no-regret move — you don't lose anything by adopting early. If Kyber turns out to have a structural flaw, X25519 still protects you. If a CRQC arrives, ML-KEM still protects you. The only real cost is key/ciphertext size, which for OpenPGP isn't a hot path anyway.
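To make the hybrid construction concrete, here is a minimal sketch in Go, assuming the Go 1.24+ standard-library packages crypto/mlkem and crypto/ecdh; the SHA-256 combination at the end is purely illustrative and is not the composite KDF that OpenPGP/GnuPG actually specify:

```go
package main

import (
	"crypto/ecdh"
	"crypto/mlkem"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Recipient: generate one key pair per primitive and publish the public halves.
	ecdhPriv, _ := ecdh.X25519().GenerateKey(rand.Reader)
	kemPriv, _ := mlkem.GenerateKey768()

	// Sender: derive one shared secret per primitive.
	ephPriv, _ := ecdh.X25519().GenerateKey(rand.Reader)
	ecdhSecret, _ := ephPriv.ECDH(ecdhPriv.PublicKey())
	kemSecret, kemCiphertext := kemPriv.EncapsulationKey().Encapsulate()

	// Combine: the session key is only recoverable by someone who can
	// break BOTH X25519 and ML-KEM-768 (illustrative KDF, not OpenPGP's).
	sessionKey := sha256.Sum256(append(ecdhSecret, kemSecret...))
	fmt.Printf("sender session key:    %x\n", sessionKey)

	// Recipient: recompute both secrets from the ephemeral public key
	// and the KEM ciphertext, then apply the same combination.
	ecdhSecret2, _ := ecdhPriv.ECDH(ephPriv.PublicKey())
	kemSecret2, _ := kemPriv.Decapsulate(kemCiphertext)
	sessionKey2 := sha256.Sum256(append(ecdhSecret2, kemSecret2...))
	fmt.Printf("recipient session key: %x\n", sessionKey2)
}
```

The size cost is exactly the one mentioned above: the ML-KEM-768 public key and ciphertext are each around a kilobyte, versus 32 bytes for X25519, which is noise for an OpenPGP message.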
The interesting question is what happens to long-lived smartcard/HSM-backed keys. Those typically have a 5–10 year lifecycle and most hardware won't grow ML-KEM support without a hardware refresh. That's where I'd expect the first real compatibility headaches.
Some Hardware Security Module manufacturers were smart enough to include FPGAs in their products, which they can now use to accelerate PQC algorithms without a hardware refresh.
The trouble is that PQC already has inherent size/performance downsides, and it won't benefit from the decades of optimizations that classical algorithms had. Expect a hefty performance tax for some time.
> introduction of Kyber (aka ML-KEM or FIPS-203) as PQC encryption algorithm
Funny to read a one-liner changelog versus the plethora of articles just a few years ago along the lines of "Quantum computer, it might just change our entire lives and make privacy impossible!".
The simple addition (of a not-so-simple algorithm) to the software (and a few others, e.g. OpenSSL) and voila, we can move on with our daily lives. Cryptography and computational complexity are truly amazing.
It reminds me a lot of Y2K. The fix is simple, but finding the places where it's needed and doing it in a compatible way are absolutely non-trivial problems. The best we can hope for is the same as Y2K: the plethora of articles convince businesses to invest large amounts of money in migrating algorithms, so that when a quantum computer arrives it won't be a big deal.
> it won't be a big deal.
This isn't a space I know too much about, but even if we all start using quantum-safe encryption for everything today, won't the arrival of quantum computers that can break traditional encryption still be a big deal?
Given that intelligence agencies, tech companies and various bad actors have been storing encrypted data for a long time, hoping to decrypt when (if?) that day comes?
Definitely, but then the damage is limited to the encrypted data that those actors managed to intercept some years before. Compared to QC arriving to an unprepared world, that's a very limited impact.
Intelligence agencies and companies for which industrial espionage is an actual concern will re-encrypt their data storage, or have already done so. The only risk is on data that was already obtained with a vulnerable encryption. So there is some risk that a few secrets are lost, but it won’t be everything. And if you were to start now and quantum decryption isn’t viable for a decade then any secrets that do get exposed are surely less of a problem than if they were discovered today.
Sure it's still a big deal, but it's not as if suddenly everybody gets a quantum computer and can use it willy-nilly. It will be (or is) scarce enough that information has to be selected as critical in order to be deciphered a posteriori.
The time between the moment the information is recorded and when it's deciphered is what matters, rarely the information itself abstracted from all context.
So even if classical cryptography were suddenly and trivially broken, there would still need to be a way to search through all of that recorded data.
Typically, for a random person, that means their credit card PIN and their email password, for example. Well, you change those, and if, say, the NSA can decipher your old email password even 1 minute after you changed it, no big deal. If they can decipher your old emails it might be a big deal, but probably not. I would argue it depends on actionable information (e.g. a coup happening tomorrow) and legally consequential information (e.g. proof that a certain person was an informant and should be extradited).
So... I would argue historically, huge deal, daily life... probably not much for most.
Could we finally get SHA256 fingerprints? Or BLAKE2, or SHA3-256, or SHAKE256, or BLAKE3, or LITERALLY ANYTHING BUT SHA-1, pretty please?
Yes. Both standards proposals have SHA256 fingerprints.
Not that there is anything wrong with SHA1 fingerprints in practice. The sort of collisions that SHA1 is susceptible to are not an issue in this particular application. With SHA256 fingerprints people would still be using 64 bit key IDs, just like they are doing now.
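For readers unfamiliar with the fingerprint/key-ID relationship being referenced: the key ID is just a 64-bit slice of the fingerprint, whatever hash produced the fingerprint. A tiny illustrative sketch in Go (the byte positions follow RFC 4880's v4 rule; treat this as a sketch, not a full packet parser):

```go
package main

import "fmt"

// v4 (RFC 4880): the key ID is the low-order 64 bits, i.e. the last
// 8 bytes, of the (SHA-1) fingerprint. A SHA-256 fingerprint would
// still get truncated to a 64-bit key ID for display and lookup.
func keyIDv4(fingerprint []byte) []byte {
	return fingerprint[len(fingerprint)-8:]
}

func main() {
	// Hypothetical 20-byte (SHA-1-sized) fingerprint, for illustration only.
	fp := []byte{
		0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A,
		0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14,
	}
	fmt.Printf("key ID: %X\n", keyIDv4(fp)) // 0D0E0F1011121314
}
```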
RFC 9580 is not a proposal anymore, it's a published RFC.
(I suppose strictly speaking it's still a "proposed standard" vs "internet standard", but so is basically everything else)
GnuPG Version 2.5.19
The 2.5 series are improvements for 64 bit Windows and the introduction of Kyber (aka ML-KEM or FIPS-203) as PQC encryption algorithm.
The old 2.4 series reaches end-of-life in just two months.
Does it implement the hybrid version ML-KEM-768 + X25519, or ML-KEM-768 only?
The X25519 key could remain in hardware keys for a while till manufacturers catch up.
If I understood the code correctly, it always uses the hybrid version.
> Kyber is always used in a composite scheme along with a classic ECC algorithm.
I don't know enough about either the technical nuance or the political drama, but some observers have noted that GnuPG's implementation is (deliberately?) incompatible with the IETF's standards. It's not clear why.
https://floss.social/@hko/116459621169318785
As far as I understood it: GnuPG started to implement stuff from the standard before it was finished, the standard continued to evolve, and GnuPG refused to change code it had already written.
Combined with some personal drama.
it's not that simple. the new standard is a complete rewrite of the old one. they are not even compatible anymore. things the old standard used to support are not supported in the new standard. that makes any implementation of the new standard incompatible with implementations of the old one. GnuPG simply refused to stop supporting the old standard and decided to fork the standard itself. on the personal drama my interpretation is that it resulted from people backing the new standard being unhappy that GnuPG didn't go along.
my opinion is that rewriting standards like that is the result of design by committee. everyone wants to put their mark on it. designing a new standard is fine, but the new standard should have also received a new name, or it should at least have been acknowledged that the old standard still needs to be supported until enough time has passed that the old standard is no longer in use. (which could take decades if not more if we want to be realistic and consider that encrypted data at rest could linger around pretty much forever unless actively re-encoded.)
(source: i talked to a GnuPG developer)
LibrePGP is also a rewrite. To keep supporting legacy v4 you have to keep the v4 code around, whether the new thing you add is v5 (LibrePGP) or v6 (the RFC).
actually neither are complete rewrites. i played around with diff and found that the new version of OpenPGP seems to keep about 60% of the old one and LibrePGP seems to keep 90%.
so the rewrite claim was exaggerated. i didn't compare the stuff that was added or merged.
> the new standard is a complete rewrite of the old one. they are not even compatible anymore.
My honest first reaction to this statement would get me permabanned from this site, so here’s the polite version:
This is nonsense on stilts. It is so ill-informed and baseless I struggle to understand how anyone who has read the RFCs in question could possibly come to this conclusion. It is hooey.
> things the old standard used to support are not supported in the new standard.
Aside from deprecating some ancient cryptographic algorithms that nobody uses any more, everything from RFC4880 is in RFC9580. Can you point out a concrete example of something (non-obsolete!) that is missing?
> that makes any implementation of the new standard incompatible with implementations of the old one.
That is news to every openpgp implementation other than gnupg, all of which have happily implemented both. Even RNP has it in a feature branch somewhere.
> (source: i talked to a GnuPG developer)
Which one? When? It would genuinely help if they would go on the record. I strongly suspect their actual opinion would differ from what you’ve reported here. There’s enough hearsay nonsense about the schism floating around the internet as it is, without adding to it.
From the GnuPG perspective, RFC-9580 is a deliberate fork away from what could actually be agreed on. Basically the faction that is now called RFC-9580 (mostly Sequoia and Proton) wanted to make a lot of changes to the existing standard, but the faction that is now called LibrePGP (mostly GnuPG and RNP) was not convinced that those changes were necessary.
Traditionally the OpenPGP standards process has been very conservative and minimalistic. GnuPG comes from that tradition. So the RFC-9580 faction created their own maximalist version of the standard and are actively promoting it as the standard.
So from a user perspective, there are two incompatible proposals out there. It's a mess. So it is better to aggressively ignore them both and maintain interoperability by sticking with RFC-4880 (OpenPGP). That might be a problem if you for some reason are still concerned about a quantum attack against cryptography as the post quantum stuff has gotten caught in this schism. It is certainly something that the users need to keep in mind.
> […] and are actively promoting it as the standard.
Well:
> Category: Standards Track
* https://datatracker.ietf.org/doc/html/rfc9580
It is a standards proposal, which is why it's in the standards track. The point was that it is not the only standard, nor universally accepted as "the" standard.
A few points about the IETF process:
- As a practical matter, anything that is a Proposed Standard RFC is a standard. In principle, there is a two-level system with PS and Internet Standard (down from three levels) but most WGs don't bother to advance specifications past PS. For example, TLS and QUIC are both PS.
- RFC 9580 obsoletes RFC 4880, so from the perspective of the IETF, it supersedes it. Of course, this doesn't make people do anything.
It is very hard to prevent a proposal from becoming an RFC. You have to generate ongoing opposition for longer than the supporters. FWIW, here is the LibrePGP proposal:
* https://datatracker.ietf.org/doc/draft-koch-librepgp/
Observing the OpenPGP schism mess, I think I have gained some insight into why some RFCs become so bloated. For example, it has recently been pointed out that there are 60 RFCs for TLS (with 31 drafts in progress)[1]. The RFC process seems better suited to the design phase. Once we have an established standard, there should be some way to force those who propose changes/extensions to provide appropriately strong justifications for them. Right now it is a popularity contest, and there will always be more people out there in favour of changes/extensions than people willing to endlessly fight against them. Because cryptography is so specialized and obscure, the users tend to get left out of the discussion.
[1] https://www.cs.auckland.ac.nz/~pgut001/pubs/bollocks.pdf
> https://datatracker.ietf.org/doc/draft-koch-librepgp/
"Intended Status: Informational"
And anyone can put forward a draft. Here's one for "IPv8" with increased security where "manageable element in an IPv8 network is authorised via OAuth2 JWT tokens":
* https://www.ietf.org/archive/id/draft-thain-ipv8-00.html
> It is very hard to prevent a proposal from becoming an RFC. You have to generate ongoing opposition for longer than the supporters.
I don't think this is really true. A huge fraction of proposed documents just go nowhere, and it's really quite common to see a new proposal get presented and be shot down by one or two people (Source: I've been one of the people doing the shooting down on more than one occasion)
> It's not clear why.
The situation is farcical, and stems from the double bind that PGP has been in for at least 20 years: the standards are bad and need modernization, but it’s impossible to modernize them because the single thing that retains “serious” users of PGP is backwards compatibility.
The end result of this is a version of Weekend at Bernie’s where both GPG and OpenPGP are fighting over how to dress up the corpse, while the rest of the world has moved on.
> The end result of this is a version of Weekend at Bernie’s where both GPG and OpenPGP are fighting over how to dress up the corpse, while the rest of the world has moved on.
Unfortunately there's something akin to a conflict of interest with both RNP and OpenPGP. OpenPGP guys have gpgsm, and RNP people also maintain the S/MIME part in Thunderbird. Both have stagnated and are holding back what would have otherwise moved on.
PGP covers the case where data is encrypted and might stick around in that state for a long time. Decades. So backwards compatibility is essential.
Fortunately we can use the existing standard (RFC-4880) in a way that is completely secure. Remember, we are talking about the standard that was in effect when the Snowden leak revealed that PGP is on a very short list of things the NSA has no access to. There is no reason to think that has changed since then.
I’m sorry, but it’s beyond the domain of serious discourse to assert that RFC 4880 is “completely secure.” This isn’t a position that even die-hard PGP fans take.
(As just one small example: the only mandatory symmetric cipher in 4880 is 3DES, and nobody serious is recommending 3DES for long term stored encryption in 2026.)
I stated that it was possible to use RFC-4880 in a way that is completely secure, not that every possible use is completely secure.
Your example mentions 3DES. 3DES is secure. The reason it is not recommended is that ciphers with 128-bit blocks can accommodate longer files/messages on one key than 3DES, with its 64-bit blocks, can. At any rate, RFC-4880 permits the use of AES, and that is what is normally used.
This is incongruous with your original argument: AES is optional, so anybody doing cold storage with PGP on messages they don’t fully control (again, the backwards compatibility story) is going to end up using 3DES.
And no, you can’t brush aside 3DES being insecure for large messages and then call it secure. Modern cryptographic tools don’t allow that, because there is (again) universal consensus that it’s insecure.
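For anyone following the block-size argument in this exchange, the usual back-of-the-envelope arithmetic (the basis of the Sweet32 concern) is the birthday bound: with a b-bit block cipher, block collisions become likely after roughly 2^(b/2) blocks under a single key. A rough sketch, not the exact limit any standard imposes:

```latex
\underbrace{2^{64/2} \times 8\ \text{B/block}}_{\text{3DES, 64-bit blocks}} \approx 32\ \text{GiB per key}
\qquad\text{vs.}\qquad
\underbrace{2^{128/2} \times 16\ \text{B/block}}_{\text{AES, 128-bit blocks}} = 2^{68}\ \text{B} \approx 256\ \text{EiB per key}
```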
Not every project has decided to let Microsoft sign releases instead of checking developers' signatures.
Short version: Werner Koch personally hates some people involved with the RFC9580 standardization, and cannot emotionally bear working with anything even loosely associated with it. He also struggled to accept anyone's opinion but his own while he was editor of the draft back then.
Search for "asking the editor to step down" to find the moment when the working group decided he was more trouble than he was worth (and GnuPG's support was obviously worth a lot in the openpgp community).
been thinking about this a bit. someone just tell me what algo to use and ill start using it now. are the quantum-resistant cryptos significantly slower?
Basically the idea is to use hybrid: AES-256-GCM or ChaCha20-Poly1305 for symmetric encryption (which is already PQ-safe), and ML-KEM looks set to become the standard for key encapsulation.
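A minimal sketch of the symmetric half, assuming the golang.org/x/crypto/chacha20poly1305 package; in practice the key would come out of the key-encapsulation step rather than fresh randomness, and nonce handling deserves more care than shown here:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	// 256-bit key; in a real protocol this is the output of the KEM/KDF step.
	key := make([]byte, chacha20poly1305.KeySize)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	aead, err := chacha20poly1305.New(key)
	if err != nil {
		panic(err)
	}

	// The nonce must never repeat under the same key.
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	ciphertext := aead.Seal(nil, nonce, []byte("attack at dawn"), nil)
	plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext)) // attack at dawn
}
```

AES-256-GCM is the analogous route via the standard library's crypto/aes and crypto/cipher (cipher.NewGCM); either is considered PQ-safe at a 256-bit key size.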
ML-KEM-768 is fast as an algorithm, faster than X25519 in terms of pure computation, but uses large keys, so has higher overheads on small payloads. Most of the time, they’re about equal, or the absolute time is so slow it doesn’t matter.
Most folks now are doing hybrid ML-KEM and X25519 to guard against undiscovered flaws in ML-KEM.
For people reading this, you may want to know that the NSA is allegedly trying to weaken hybrid ML-KEM and X25519 down to just ML-KEM. This is a good thing to pay attention to!
Here is a 6-part article about the topic: https://blog.cr.yp.to/20251004-weakened.html
> Here is a 6-part article about the topic: https://blog.cr.yp.to/20251004-weakened.html
* https://news.ycombinator.com/item?id=45477206
* https://news.ycombinator.com/item?id=45477206#unv_45477799
See various "NSA and IETF":
* https://news.ycombinator.com/from?site=cr.yp.to
I haven't met a single cryptographer who takes this series of posts seriously and if you have I'd love to talk to them.
is this insinuating that we, collectively, are not 100% confident that ML-KEM on its own is going to be enough, & should we deduce that the NSA wants the omission of X25519 as sort of a backdoor possibility?
It’s worth noting that e.g. the Go stdlib has this hybrid construction built-in via crypto/hpke.
thank you!!! i shall be using this immediately
So low, not so slow
this is great, thanks. i'm a little lost on where I even need to apply this in my own work. for the most part I can think of like a small handful of places where i just symmetrically encrypt at rest, im guessing those should be updated. but for other things, i guess theres going to be a lot of waiting for a platform i dont control for instance to update its support for things like private/public key authentication and more. i understand openssl supports a lot of these pq methods now, trying to gauge how much of a head start i can reasonably get.
> ChaCha20-Poly1305
ha! i ran into this when looking at the source for yaak (guy who made the insomnia rest client who's now making yaak). i never got to the bottom of how it worked.
> for the most part I can think of like a small handful of places where i just symmetrically encrypt at rest
Current best practices for symmetric encryption are considered PQ-safe (provided your key length is long enough). The real question the above algorithms solve is how do you safely share the key for the symmetric encryption. That’s where X25519 and ML-KEM come in. X25519 is not PQ-safe, but it is very well studied and considered robust. ML-KEM is PQ-safe, but new, and not as well tested/audited.
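To illustrate the key-sharing step concretely, here is a bare ML-KEM-768 round trip, assuming Go 1.24+'s crypto/mlkem; no hybrid and no authentication, just the KEM step that stands in for the Diffie-Hellman-style exchange:

```go
package main

import (
	"bytes"
	"crypto/mlkem"
	"fmt"
)

func main() {
	// Recipient generates an ML-KEM-768 key pair and publishes the
	// encapsulation (public) key.
	dk, err := mlkem.GenerateKey768()
	if err != nil {
		panic(err)
	}
	ek := dk.EncapsulationKey()

	// Sender: produce a fresh 32-byte shared secret plus a ciphertext
	// that only the holder of the decapsulation key can open.
	sharedSender, ciphertext := ek.Encapsulate()

	// Recipient recovers the same shared secret from the ciphertext.
	sharedRecipient, err := dk.Decapsulate(ciphertext)
	if err != nil {
		panic(err)
	}

	fmt.Println(bytes.Equal(sharedSender, sharedRecipient)) // true
	// That shared secret then keys the symmetric (AEAD) layer.
}
```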
I believe ML-KEM is the standard algorithm for post-quantum asymmetric encryption. I think it's slower mainly because there's not good hardware support, but it shouldn't be a big deal because most encryption is hybrid where you only use the asymmetric crypto briefly to share a secret you can use for symmetric cryptography.
ML-KEM is based on a lattice problem called "Learning With Errors", and there are similar lattice-based algorithms which have no known quantum speedup. Most traditional asymmetric encryption algorithms are based on number-theoretic assumptions like the discrete logarithm problem or the RSA assumption, which are broken by Shor's algorithm.
Symmetric cryptography (AES and the SHA hash functions) is post-quantum resistant for now. Grover's algorithm technically cuts its asymptotic security in half, but that doesn't parallelize, so practically there is no known good quantum attack, and cryptographers and standards agencies tend not to worry about it. You can keep using those.
[edit: according to the sister comment posted simultaneously, ML-KEM is faster than X25519. good to know!]
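To put rough numbers on the Grover point above (illustrative arithmetic only; it ignores the serial nature of Grover iterations that the comment mentions, which is exactly why nobody panics about it):

```latex
% Grover's quadratic speedup on brute-force search of a k-bit key:
\sqrt{2^{k}} = 2^{k/2}
\quad\Longrightarrow\quad
\text{AES-128: } \sim 2^{64} \text{ quantum iterations}, \qquad
\text{AES-256: } \sim 2^{128}
```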
For something like PGP, any performance difference wouldn't matter. There is one message and the key agreement is done once. As long as things are fast enough to be imperceptible to the user we are fine.
cool, now my emails that nobody's reading anyway are safe from quantum computers that don't exist yet
it's mostly to make clowns repeating "It's not PQ secure therefore bad" happy I think