You're comparing Sigstore (with its log, and assuming ultimate trust in its verification process and CA) to GPG without the web of trust. This is obviously apples-to-oranges.
You also linked to this random python.org HTTPS page which contains the list of people you are supposed to expect to have signed the Python releases. If this is the root of trust... it might as well have had PGP fingerprints.
The truth is that you log in with OIDC and Sigstore signs your artifacts, giving you an attestation that the owner of that email/GitHub/... identity made that artifact, and publishes that to a persistent log. This makes the whole thing great for automation (both of publishing and verifying), but claiming that this adds to security is false.
Their Security Model page clearly outlines the limits of their system and is consistent with my characterization https://docs.sigstore.dev/security/:
> If an OIDC identity or OIDC provider is compromised, Fulcio might issue unauthorized certificates
> If Fulcio is compromised, it might issue unauthorized certificates
You have to trust OIDC providers, you have to trust the CA, and the presence of logs only allows those people to notice unauthorized issuance, not end users.
I think we're going in circles, because I've identified specific ways in which this adds to the security model that self-custodial PGP keys cannot:
* Sigstore uses short-lived keys and short-lived certificates, eliminating an entire common risk class where maintainers accidentally disclose their signing keys. This property alone eliminates the single largest source of illegitimate signing events in ecosystems like Windows software.
* The logs in question are public CT logs. In other words: anybody can audit them for unauthorized issuance, including the legitimate publishing identity. It's not particularly useful for the end (installing) user to audit the log, but it was never claimed that they would find it useful.
For the specific case of CPython, you're missing the point: CPython is an easy case, since the email identities of the release managers are well-known facts that can be cross-checked across python.org, GitHub, etc. python.org is not currently a root of trust for Sigstore, but it is for PGP (again, because anybody can claim an identity in PGP).
There are, of course, limitations. But these limitations are not strictly worse than trusting the CA and IdP ecosystems you're already trusting, which makes them strictly better than mystery meat PGP keys.
You're just replacing a "mystery meat" PGP key with a "mystery meat" email address or OIDC handle. As you point out, committing to one of those can easily be done by posting it on python.org, GitHub, etc., with the major difference that PGP fingerprints are cryptographically tied to an identity rather than requiring a third party like Sigstore to attest that the person had control of it at some point in the past.
It is also much more likely that someone managed to click one link in a developer's inbox once to complete the automated Sigstore verification than that they managed to steal their PGP keyring and passphrase.
I am not a fan of having to trust developers' key-management abilities, but this just shifts the problem very slightly, at significant cost.
The single advantage is obvious: this allows easy automated signing and verification, allowing enterprises to easily check boxes in their supply-chain-security checklist. This is valuable in itself, and I am all for automation, but I don't know why we have to claim that it is "more secure".
> PGP fingerprints are cryptographically tied to an identity rather than requiring a third party like Sigstore to attest that the person had control of it at some point in the past.
A PGP fingerprint is tied to a PGP key, which is tied to a claimed identity. Anybody can claim to be you, me, or the President of the United States in the PGP ecosystem. Some keyservers will "verify" email-looking identities by doing a clickback challenge, but that's neither standard nor common.
In theory, you trust PGP identities because of the Web of Trust: you trust Bob and Bob trusts Sue, so you trust Sue. But it turns out nobody actually uses that, because it's (1) unergonomic and doesn't handle any of the normal failure cases that arise when codesigning (like key rotation), and (2) effectively dead due to network abuse for years anyway[1].
> It is also much more likely that someone managed to click one link in a developer's inbox once to complete the automated Sigstore verification than that they managed to steal their PGP keyring and passphrase.
That's not how Sigstore does email identity verification; it uses a standard interactive OAuth flow. Those aren't flawless, but they're significantly better than a secret URL and fundamentally avoid the problem of secure key storage. Which, again, is actually where most codesigning failures occur.
The OAuth flow is even worse: if you find someone's browser open and click through the flow, it will complete as long as they are currently logged in to GitHub/Gmail/whatever provider. I am not claiming that key management is easy or foolproof, but when this is what we're comparing to...
And again, you don't have to use web-of-trust. It is there, which is an advantage. If you don't/can't use that, you can find a PGP fingerprint on a random HTTPS page, which will be just as easy to copy-paste as the list of email addresses you showed me a couple posts up... with the advantage that I can use them for verification directly, rather than involving third-party authorities.
> The OAuth flow is even worse: if you find someone's browser open and click through the flow, it will complete as long as they are currently logged in to GitHub/Gmail/whatever provider. I am not claiming that key management is easy or foolproof, but when this is what we're comparing to...
And the same can be said for PGP keyholders. There are very, very few threat models in which an open, logged-in computer is not a "game over" scenario (which is also why most password managers and authentication agents don't consider it a case worth guarding against). In other words: Sigstore is no worse than PGP key management in this manner, but is better in the other ways that matter.
Looking up PGP fingerprints on random HTTPS pages is not a scalable or ergonomic solution, and not one that has ever succeeded. Remember: that is the status quo with both CPython and Python package distribution, and there is no evidence that either has gained any meaningful amount of adoption (either by packages or end users). The goal here is to enable users to sign packages without doing the things they've demonstrated they won't do.
(Also, we've focused on email identities. A separate goal is to allow GitHub Actions identities, which will require no interaction from a user's browser and has a threat model coextensive with the CI environment that many Python packages are already using to build and publish their distributions.)
> with the advantage that I can use them for verification directly, rather than involving third-party authorities.
I'm not sure what you mean by "third-party authorities" here. As a verifier, your operations can be entirely offline: you're verifying that the file, its signature, and certificates are consistent, that their claims are what you expect, and (optionally) that the entry has been included in the CT log. That latter part is the only online part, and it's optional (since you can opt for weaker verification of the SET, the log's signed inclusion promise).