crote 1 day ago

> With this approach, they have shown a reduction in CRL data from a list of all enrolled and unexpired certificate serial numbers from 6.7G to a filter of just 1.3 MB.

It actually gets even better: Mozilla's CRLite deltas with generation on a 6-hour interval are only about 60kB! Even with 10 billion internet devices fetching every 6 hours you're looking at a mere 222Gbps of bandwidth: a lot, but plenty of tech companies do more.
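A quick back-of-the-envelope check of that bandwidth figure, using only the numbers from the comment above (60 kB delta, 10 billion devices, 6-hour fetch interval):

```python
# Sanity-check the 222 Gbps claim from the numbers in the comment.
delta_bytes = 60 * 1000          # ~60 kB per CRLite delta
clients = 10_000_000_000         # hypothetical 10 billion internet devices
interval_s = 6 * 60 * 60        # each device fetches every 6 hours

bits_per_second = delta_bytes * 8 * clients / interval_s
print(f"{bits_per_second / 1e9:.0f} Gbps")  # ≈ 222 Gbps
```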

> At this point, why not just use DANE (RFC 6698), store the public keys in the DNS, rely on DNSSEC to provide the necessary authenticity, and use DNS TTL settings to control the cached lifetime of the public key? With a combination of DNSSEC Chain Extensions and DNSSEC stapling, it is possible to perform a security association handshake by having the server provide the client with (...)

Yeah, until an attacker gets access to DNS, stores a record with a TTL of 1 year, and staples that - of course after poisoning the caches of major DNS resolvers with the address of the attacker's server.

  • pjf 1 day ago

    On the other front (Chrome), their crlset-tools [1] just fetched me ~64k revoked cert serials (~1.1 MiB) just fine, contrary to the article (quote: "After retrieving and running this tool, I was surprised to see a total of 1,081 revoked certificate serial numbers in this list. This seems oddly low.")

    [1] https://github.com/agl/crlset-tools

  • xorcist 1 day ago

    > until an attacker gets access to DNS, stores a record with a TTL of 1 year,

    DNSSEC may have problems, but that's not how the trust model works. Also, signing is separate from authoritative DNS, so you'd need to compromise the signing itself, not just the DNS servers. Should that happen, you are still limited by the upstream record signature lifetime.

thayne 1 day ago

> At this point, why not just use DANE

Interests of the existing PKI industry may be the source of some friction, but the bigger issue is that DANE depends on DNSSEC, which is not widely deployed, and is sometimes actively avoided due to its complexity and the ease of breaking your site.

Don't get me wrong, I'd love it if DANE, or something similar caught on, but I don't think it is practical until something changes to make DNSSEC (or equivalent) common.

  • PunchyHamster 1 day ago

    > Interests of the existing PKI industry may be the source of some friction, but the bigger issue is that DANE depends on DNSSEC, which is not widely deployed, and is sometimes actively avoided due to its complexity and the ease of breaking your site.

    I have a feeling it is "actively avoided" because vendors don't want to lose control of the cert ecosystem. Allowing users to just generate a cert for their own domain means it will never get logged in a central log, and so can't be automatically found by the big guys' crawlers.

    • xorcist 1 day ago

      This is public data, so the big guys could absolutely crawl it. But we should not underestimate the size of the PKI industry: several large actors make a good living from the existing web PKI, and they will not change unless their very existence is threatened.

  • jeroenhd 1 day ago

    If DANE were to roll out to browsers, I think plenty of people would rather use it than centralizing on Let's Encrypt.

    DNSSEC isn't easy, but neither is certbot. DNSSEC also isn't that hard if you're not self-hosting your DNS servers (and even then it's easy if you pick a modern DNS server).

    Most domains seem to use their registrar's free DNS servers. For those domains, DNSSEC is often just a checkbox; I just activated DNSSEC on three domains by hitting that checkbox. A certbot-style tool could set up DANE through the same APIs that many existing certbot DNS plugins already use.
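    To illustrate what such a tool would publish: a DANE-EE TLSA record (usage 3, selector 1, matching type 1) is just the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch, with placeholder SPKI bytes and a hypothetical record name standing in for real values:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Build the RDATA for a DANE-EE TLSA record (usage 3, selector 1,
    matching type 1): a SHA-256 digest of the DER SubjectPublicKeyInfo."""
    digest = hashlib.sha256(spki_der).hexdigest()
    return f"3 1 1 {digest}"

# Placeholder bytes stand in for a real certificate's SPKI.
record = tlsa_3_1_1(b"\x30\x82placeholder-spki-der")
print("_443._tcp.example.com. IN TLSA", record)
```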

    However, until browsers actually implement DANE, it's pretty useless. I know some people use it for mail servers (for some reason, don't see why they can't use Let's Encrypt for that) but even there it's optional.

    • thayne 19 hours ago

      > DNSSEC also isn't that hard if you're not self-hosting your DNS servers

      It isn't that hard if:

      - you use your domain registrar for serving your DNS as well. Even if you aren't "self-hosting" but use a different service for DNS hosting than you registered your domain with, it can be complicated to coordinate between them. And

      - your domain registrar makes it easy to set up. For some it is just a checkbox, for others, you have to contact customer support, and sometimes pay more.

      Also, the server is just half of the problem. It also requires dns resolvers and clients to validate DNSSEC, which often isn't done today, and even when it is, often fails open, because so many domains don't use DNSSEC, and intermediate resolvers don't always support it. Validating DNSSEC can also hurt performance, in part because of the larger response sizes.

      • jeroenhd 16 hours ago

        > Even if you aren't "self-hosting", but use a different service for DNS hosting than you registered your domain with, then it can be complicated to coordinate between them

        Indeed, you'd need to copy-paste four text fields

        > for others, you have to contact customer support, and sometimes pay more.

        That's ridiculous, I've never seen any registrar do that. Even if you do choose a terrible registrar, actual DANE rollout in browsers would put pressure on them to get their shit together.

        As for DNSSEC validation: validation currently seems to happen between 0 to 95% according to https://stats.labs.apnic.net/dnssec

        Obviously, for DANE to work, verification must happen. DANE-enabled browsers will enforce validation, or fall back to regular TLS (with the scary warnings if someone stripped DNSSEC for a DANE server, as the certificate doesn't work any more). On operating systems that don't bother with DNSSEC validation, browsers can still query the necessary keys.

        As for performance, DNSSEC does impose extra network traffic, but so does transmitting an intermediate certificate.

        • tptacek 12 hours ago

          Hard to square this with the operational history of DNSSEC at some of the best-resourced ops teams in the world.

          • jeroenhd 2 hours ago

            If you're stuck with something like AWS and their buggy implementation then you might indeed run into trouble. Luckily, normal DNS servers don't have this issue.

            Most people and websites don't have ops teams, though. It's mostly a challenge if you manage your own DNS, which most people don't do.

    • tptacek 12 hours ago

      The "just check a checkbox in your registrar" UX depends on the registrar having custody of your keys. That's not how certbot works.

      • jeroenhd 2 hours ago

        Certbot does domain validation or DNS validation. In either case, your registrar can generate valid certificates for your domain.

Parodper 1 day ago

It's funny to see that the issues with X.509 certificates are being solved by what X.509 was intended to be used for: a directory system. It's DNS instead of X.500, but it's a start.

bblb 1 day ago

DNS and PKI. Two of the most centralized services in the Internet. Take over both of them, and you have the whole net under your command.

  • pjf 1 day ago

    Good that at least BGP is secure.

    • nanis 1 day ago

      Might want to add /sarc just in case someone believes it :-)

  • jumpconc 23 hours ago

    Just DNS. If you take over DNS, you can get Let's Encrypt to issue any certificate you want.
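    Concretely: control of DNS is enough because Let's Encrypt's DNS-01 challenge only checks a TXT record. Per RFC 8555 §8.4, the record value is the base64url-encoded SHA-256 digest of the challenge token joined with the account key's JWK thumbprint. A minimal sketch with hypothetical token/thumbprint values:

```python
import base64, hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # RFC 8555 §8.4: the TXT record holds the base64url SHA-256 digest
    # of "<token>.<JWK thumbprint>", with base64 padding stripped.
    key_auth = f"{token}.{account_key_thumbprint}".encode()
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Hypothetical token and thumbprint, for illustration only.
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
print("_acme-challenge.example.com. 300 IN TXT", txt)
```

    Whoever can publish that TXT record for a domain can obtain a certificate for it.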

    • pjf 22 hours ago

      There are situations [1] where you could reliably BGP-hijack the IP prefix of the target domain's authoritative nameserver, and obtain your own domain-validated cert for the target (by effectively controlling the zone file contents). And yeah, CAs do have their BGP protections, but there's still at least a partial assumption that BGP is secure enough to run DNS-based validation for new SSL certs, in our world where DNSSEC is still rare.

        [1] https://www.ietf.org/proceedings/104/slides/slides-104-maprg-dns-observatory-monitoring-global-dns-for-performance-and-security-pawel-foremski-and-oliver-gasser-00.pdf (see slide 15; yeah, it's already a bit old, yet still the case from my practice)

PeterWhittaker 22 hours ago

One quibble with the article: the notion that CRLs have to be large. When I was with Entrust, our first releases targeted early Windows versions with limited memory, back when most Internet connections and even local networks were slow.

To ensure that RLs would always be manageable in size, we used distribution points (cRL and issuing) and decided at certificate issuance which RL would contain that certificate's serial number if it were ever revoked.

This approach scaled really well and kept RLs manageable.
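A minimal sketch of the idea, assuming a simple serial-number partitioning scheme (not necessarily the one Entrust actually used, and with a hypothetical URL layout): each cert is assigned its shard at issuance, and the cRLDistributionPoints extension records which shard to check, so a revoked serial only ever lands in one small list.

```python
# Shard assignment decided once, at certificate issuance time.
NUM_SHARDS = 16  # assumed shard count, for illustration

def crl_shard_url(serial: int) -> str:
    """Map a certificate serial number to its CRL distribution point."""
    shard = serial % NUM_SHARDS
    return f"http://crl.example.com/shard{shard}.crl"

# Embed this URL in the cert's cRLDistributionPoints extension; on
# revocation, the serial goes only into that shard's CRL.
print(crl_shard_url(0x1A2B3C))
```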

There were applications that didn’t understand distribution points and needed the One RL to Revoke Them All, so we supported that as well (as an option, IIRC).