
Literally the only thing I ever actually used gpg for was file encryption. I tried doing key management and signatures for a very brief period 20 years ago and gave up because no one else was doing it, and it was annoying trying to do the right opsec things with no payoff.

Ever since then, as far as I can tell, there has been a very small, very niche group who use gpg for anything other than file encryption. So age is the obvious choice for the vast majority of us, and its adoption seems to be reflecting that.



By very small niche group, you mean every maintainer of every widely used production Linux distribution and most of the core packages that form the supply chain trust layer for the entire internet? Or every reasonably competent security vulnerability disclosure team? (Even Google and Apple!)

PGP is the only standardized cryptographic online identity layer we have, and it is still very heavily used by anyone working on security-critical software for signed commits, signed reviews, system administration, etc.

Honestly, I find it hard to take seriously anyone working in an engineering role where security matters who is -not- using PGP smartcards to sign and push their commits, sign code reviews, sign build reproductions of container images, encrypt their passwords, etc.


Yes. That they have a large impact does not change the fact that neither I nor anyone in my close circle uses it for that. It has been relegated to a specialized domain and use case. I have no use for maintaining a signing key. All the communications I need to secure and verify identities for use a different technology. The fact that some tools I run might use it in the background is entirely abstracted away from me, and they could be swapped for something else without my ever noticing.

My point still stands.


If you write open source code others rely on, or have any online identity that could be used for harm if stolen, it is irresponsible not to have a well-published public identity signing key that cryptographically ties together your online presences to make you hard to impersonate: a key used to sign commits, binaries, code reviews, emails, or anything else of public value you produce.

If you do not do anything of consequence outside of corporate walls and are just a passive consumer of technology, then you probably do not need one.

The fact you have keybase in your profile indicates to me that you at least at one point mildly cared about having a cryptographic identity. Keybase just happened to have been a wildly broken implementation. Keyoxide is the path today.


Except that I do all of those things and I don't care about it at all nor does the absence affect me in any way. Keybase was a nifty chat.


Fair enough. If you care about your identity so little, I expect when your personal domain ever expires you will not mind if I buy it and impersonate you? It would be valuable for my supply chain attack education work.


Have at it. It's not really a part of my identity. Nor has it ever been protected by GPG.


Thank you for your comment. For a minute I thought I was going insane, because there is no way that GPG/PGP is used by only a minority. Literally everyone uses it, even non-techies.

phew

> any engineering role where security matters that is -not- using PGP smartcards to sign and push their commits, sign code reviews, sign build reproductions of container images, encrypt their passwords, etc.

I agree. Even without smartcards, at the very least sign your commits, among other things. Absolute minimum. Very low bar.


Every time I get a dev or executive sending me a Slack message saying "can you reset my password" or "can you provision me in...", my very next reply is "please send me your public key".

They do not get their credentials until they do so. And once they do, our security posture gets better and better.


I think I'm missing something, how does asking for their public key improve security or verify their identity?


As for how it improves security, I'm going to hazard a guess that many of the people sending zikduruqe those messages hadn't previously set up a PGP key. So by asking for the public key and refusing to send them the credentials until he receives it, he's forcing them to set one up, which then makes it possible for them to do things like sign messages. Just making someone set up a keypair doesn't mean they'll use it correctly, but it's hard to argue against the idea that a company's security posture is improved when more people have PGP keys.


It’s so easy to use insecurely that I will argue that employees setting up PGP keys and then potentially trying to use them does weaken the company’s security posture.


I agree it is easy for people to shoot themselves in the foot with many historical PGP tools, which is exactly why we made keyfork.

It generates modern ECC PGP keychains with best practices in one shot, with multiple reasonably secure, user-friendly paper or smartcard backup solutions.

You would really have to know what you are doing to force keyfork to generate an unsafe keychain, especially if you use it on AirgapOS, which ships with it.


Care to elaborate on this? How is using PGP insecurely somehow more insecure than not using it at all? And what exactly do you mean by using it insecurely? Care to give an example of this insecure use of PGP?


Asking for their public key lets you encrypt messages that only their private key can decrypt, and verify signatures they create. It doesn't by itself prove their real identity; you still need to verify the key's authenticity (e.g. via fingerprint comparison or a trusted keyserver) to avoid impersonation or man-in-the-middle attacks.
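The verification step described above boils down to comparing the fingerprint of the key you received against one obtained out of band. A minimal sketch (real OpenPGP v4 fingerprints are computed over the public-key packet per RFC 4880; here SHA-256 over the raw key bytes stands in as a simplification, and all names and key material are illustrative):

```python
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    # Hash the key material and format it in 4-character groups,
    # the way PGP tools display fingerprints for readability.
    digest = hashlib.sha256(key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

def verify_key(received_key: bytes, expected_fpr: str) -> bool:
    # expected_fpr comes from a trusted out-of-band channel
    # (phone call, in-person exchange, a signed web page, etc.).
    return fingerprint(received_key) == expected_fpr

key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- ..."
trusted = fingerprint(key)                        # read back over the phone
assert verify_key(key, trusted)                   # genuine key: safe to encrypt to
assert not verify_key(b"attacker key", trusted)   # substituted key: caught
```

The point is that the fingerprint check, not the key exchange itself, is what defeats a man-in-the-middle swapping in their own key.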


He's asking how this procedure learns which public keys are trustworthy, not how asymmetric cryptography works.


I think what I’ve gathered is that the person I replied to is going with a TOFU model of key security (trust on first use), or is just seeking to avoid plaintext passwords in slack messages and is treating the key as disposable for the one-time encryption of the password.

Presumably they must trust that the user messaging them on slack is indeed who they say they are and is in control of the account.

If I’ve understood correctly, this seems like one of those cases where PGP is adding quite little security to the system, and may be preventing the implementation of more secure systems if it is providing a false sense of security.

But it’s probably just someone doing their best in a system beyond their control.
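The TOFU model mentioned above can be sketched in a few lines: pin the fingerprint the first time a key is seen for a given user, then reject any later key that does not match the pin. This is a simplified illustration (names are made up; real tools persist the pins and hash the OpenPGP key packet):

```python
import hashlib

pinned: dict[str, str] = {}  # user -> pinned key fingerprint

def fpr(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

def check_key(user: str, key: bytes) -> bool:
    seen = pinned.get(user)
    if seen is None:
        pinned[user] = fpr(key)   # first use: trust blindly and pin
        return True
    return seen == fpr(key)       # later uses: must match the pin

assert check_key("dev@example.com", b"key-A")      # first contact: pinned
assert check_key("dev@example.com", b"key-A")      # same key again: accepted
assert not check_key("dev@example.com", b"key-B")  # changed key: rejected
```

Note the weakness the comment points at: the first contact is trusted unconditionally, so TOFU only protects exchanges after the initial one.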


Like I have said in another comment, the question of identity verification makes no sense in this context. The identity verification problem is orthogonal to the encryption scheme.

See: https://news.ycombinator.com/item?id=45919561


That is an extremely weird argument. They aren't separable concerns. If you have a trusted identity in place, you could use a password-protected AES ZIP file, for all the choice of encryption scheme matters.


There are too many threads, see: https://news.ycombinator.com/item?id=45919651. I don't see why we got here from PGP though.

> I think I'm missing something, how does asking for their public key improve security or verify their identity?

OK, so this was the question. My response should have been "it does not necessarily verify their identity". I mentioned some of the mechanisms for identity verification in the other thread.


It allows the security guy (in this case, zikduruqe) to send an email that can only be read by the person who possesses the corresponding private key. Which means that either the email is going to the executive who really does own the account, or else that the attacker has already breached that executive's laptop to the point of having acquired his private key (and passphrase, if there was one), in which case phishing attempts to get a password would be utterly pointless (like trying to pick the lock of the front door when you're already inside the house).


So I email you asking for something. You say “send me your public key first”.

I generate a key pair and send the public key to you. You encrypt response giving me what I wanted.

How do you have any idea that I’m the person I said I am?


Well, I've been assuming that zikduruqe is competent and knows how to pick up a phone and call the person (looking up a phone number in the company database) to verify that the public key came from him via fingerprint-checking over the phone. Sometimes people leave steps out so as not to write essays in a comment box.


My confusion here is that if you're doing that, why bother with the cryptography? You can just look the person up in the company database, call them, and say "Hey! Did you just request a password reset?".

If one of your pre-requisites is "There is a trusted out-of-band way for me to validate comms with this person", the crypto is just extra bits.


The question makes no sense in this context. The identity verification problem is orthogonal to the encryption scheme.

This problem exists regardless of PGP. If someone's Slack is compromised:

With PGP: attacker gets credentials encrypted to their key

Without PGP: attacker gets plaintext credentials

But both fail at the same point: verifying who you're talking to. That's not a PGP problem, it's a "doing password resets over unauthenticated Slack" problem.

PGP does provide multiple identity verification mechanisms, e.g. web of trust, key signing, fingerprint verification, in-person key exchange, and Keybase-style social proofs linking keys to verified accounts.

The workflow described just doesn't use them. Identity verification is required for ANY secure credential exchange system; you either verify keys properly (signed by trusted parties, verified fingerprints, pre-enrolled, social proofs) or you have the same problem with passwords, TOFU SSH keys, or anything else.

Are you criticizing PGP for not solving a problem that the workflow simply didn't implement a solution for?


I’m saying asking somebody for a public key in the example scenario is pure security theater, no matter if it’s PGP or some other scheme.


That's only true if the key's authenticity isn't verified. If you just accept any key a person gives you, then yes, it's meaningless. But if you independently confirm the key's fingerprint through a trusted channel, it becomes a real security measure that prevents impersonation and ensures confidentiality.

The workflow as described (no verification step) is theater. But that's true for any credential exchange without identity verification, PGP or otherwise. The issue isn't PGP, it's skipping the verification step. PGP provides the tools (fingerprint verification, web of trust, key signing), but you have to actually use them.


I didn’t say the issue with this example scenario is PGP.

The scenario is theater (if you have an out of band lookup for verification, just use that. Don’t bother asking for pub keys).

Also in parallel PGP is trash.


The out-of-band verification is for initial key enrollment, not every credential exchange. You verify the fingerprint once through a trusted channel, then use that verified key indefinitely without needing the trusted channel again. That's the entire value proposition: establish trust once, communicate securely many times.

Without this, you'd need out-of-band verification for every single credential exchange, which doesn't scale.
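The enroll-once model described above differs from blind TOFU in that the first key is not trusted automatically: enrollment requires a fingerprint confirmed out of band, and only routine exchanges afterwards skip that channel. A sketch with illustrative names:

```python
import hashlib

enrolled: dict[str, str] = {}  # user -> fingerprint verified at enrollment

def fpr(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

def enroll(user: str, key: bytes, oob_fingerprint: str) -> None:
    # oob_fingerprint was confirmed through a trusted out-of-band
    # channel (phone, in person) exactly once, at enrollment time.
    if fpr(key) != oob_fingerprint:
        raise ValueError("fingerprint mismatch: do not enroll")
    enrolled[user] = fpr(key)

def can_send_credentials(user: str, key: bytes) -> bool:
    # Routine exchange: no out-of-band step, just compare to enrollment.
    return enrolled.get(user) == fpr(key)

key = b"alice-public-key"
enroll("alice", key, fpr(key))                      # one-time verified setup
assert can_send_credentials("alice", key)           # later exchanges: cheap
assert not can_send_credentials("alice", b"other")  # swapped key: rejected
```

This is the "establish trust once, communicate securely many times" trade-off in miniature: the expensive channel is amortized over every later exchange.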

As for "PGP is trash"... That's a different argument entirely, and you've provided zero technical justification for it. If you have specific criticisms of PGP's cryptographic primitives, key management model, or implementation security, make them.


No thanks, I think we’ve gotten all we’re going to get out of this thread.


I agree.


Neither Google nor Apple rely on PGP for vulnerability disclosure handling. Lots of organizations publish a PGP key, but in practice rarely use them.


Citation needed.

Both Apple and Google have updated these pages with security disclosure PGP keys in the last year.

https://support.apple.com/en-us/101985

https://about.google/company-info/appsecurity/

I design most corporate bug bounty programs the same way.

Sure, people rarely use PGP, but the ones that do are usually serious and high quality, and we let them skip the tier 1 queue. Script kiddies never know how to encrypt things.


It has not at all been my experience that PGP-encrypted bounty submissions are fast-tracked and most (almost all, in fact) good bounty submissions aren't encrypted. Google downplays PGP in the link you provided. Apple doesn't ask people to use theirs.


Are we looking at the same links?

It is provided as an option, the ONLY option, for those who feel encryption is merited for a sensitive report.

Google page: "If you feel the need, please use our PGP public key to encrypt your communications with us."

Apple page: "Apple security advisories are signed with the Apple Product Security PGP key. Sensitive security information may be encrypted to this key when communicating with Apple Product Security."


I think you missed some subtext that I thought was pretty obvious which is that most people don't encrypt bug bounty submissions in 2025.


> Neither Google nor Apple rely on PGP for vulnerability disclosure handling.

They support and rely on it exclusively for security disclosures sensitive enough to merit encryption.


"Sensitive enough" is smuggling in a presumption of yours that isn't supported by evidence. Whether or not submissions are PGP-encrypted (in my experience: they very rarely are) is uncorrelated with their severity.


In my experience building bug bounty programs for many high risk orgs, PGP reports are rare, as you indicate. Maybe a couple a year.

That does not make them any less critical or relied on. We always took them super seriously and read them offline because they were often highly sensitive real disclosures that merited being exposed only to a very small circle of people with security team decryption smartcards.

It is a safe assumption that skiddies do not know how to use PGP, so low-skill reports with PGP almost never happened.

I would never run a bug bounty program without having a highly visible public key to encrypt highly sensitive reports to.


You haven't responded to my point. I would happily run a bounty program without a PGP key; in fact, I'd recommend not publishing a PGP key, and instead making arrangements to communicate a Signal identity.


If I as a security researcher want to send a super sensitive disclosure to an organization like "I have reason to believe your devices are compromised", I want to be damn sure it goes to a PGP key held on smartcards that decrypt reports on airgapped operating systems.

I also may want to do this anonymously.

Signal is the wrong tool on both counts. Fine to have as an option but I would never have that as the only option.


That is very silly. I founded and ran what was at the time the 2nd or 3rd largest software security consultancy in North America, which was then acquired and rolled up into what was the largest software security consultancy in North America (NCC Group US). Our client list was a phone book of every major tech firm and every major manufacturer and infrastructure company with a significant code footprint, our firm at its peak was generating many game-over findings per day across a wide range of companies, and most of our clients would have gotten angry at us if we told them to install PGP.

More of them required password-protected ZIPs than PGP, so much so that we had a whole complicated document to ensure we were using the versions of ZIP file programs that used AES and not Bass-o-matic.

Apple and Google routinely get findings worth 6-7 figures that aren't PGP encrypted.

PGP-encrypting bug bounty submitters are mostly LARPing.

I will take the Pepsi Challenge with you on experience with bounty programs if you'd like. But here's another question: have you ever been on a major-vendor embargo list before? Was it your experience that those embargo lists were uniformly PGP-encrypted? (I can spoil this one for you if you like).

Tell me more about how major vulnerability disclosures depend on PGP, please.


> That is very silly. I founded and ran what was at the time...

This just seems to be an appeal to authority. I will just say your credentials do not impress me.

Let's just stick to two security engineers on different sides of the same industry having a discussion on the technical merits.

In any event I did not once claim PGP encrypted reports are common, but I can say of the dozens I have received, most were very high quality from actual security researchers, and some have made me very happy I insisted such reports be decrypted offline on a machine I absolutely trust.

It is good to give people options, and especially at least one that can be used anonymously with a fully open source operating system, using a decentralized, widely used, and established standard.

I for one have made more than a few very sensitive security reports and do not own a Google or Apple controlled device or a Signal account.


I don't care if my credentials impress you. That wasn't my point. You just conceded the point I was actually making.



