There was a discussion here on HN about OpenAI and its privacy. Same confusion about e2ee. Users thinking e2ee is possible when you chat with an AI agent.
>Users thinking e2ee is possible when you chat with an AI agent.
It shouldn't be any harder than e2ee chatting with any other user. It's just that instead of the other end typing messages on a keyboard, they generate them with a language model. Of course, like any other e2ee solution, the person you are talking to also has access to your messages, as that's the whole point: being able to talk to them.
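A toy sketch of what that could look like (Python with PyNaCl; real protocols add ratcheting, authentication, and key distribution, all hand-waved here):

    # Toy sketch: e2ee where the other "end" is whoever runs the model.
    # Assumes PyNaCl; ratcheting/auth/key distribution are hand-waved.
    from nacl.public import PrivateKey, Box

    user_key = PrivateKey.generate()
    model_key = PrivateKey.generate()   # held by the model operator

    # The user encrypts to the model's end; relays in between see only
    # ciphertext.
    ciphertext = Box(user_key, model_key.public_key).encrypt(b"hello, model")

    # The model's end decrypts. When the operator of this end *is* the
    # platform, the encryption buys the user nothing against the platform.
    plaintext = Box(model_key, user_key.public_key).decrypt(ciphertext)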
I do not think this matches anyone's mental model of what "end-to-end encrypted" for a conversation between me and what is ostensibly my own computer should look like.
If you promise end-to-end encryption, and later it turns out your employees have been reading my chat transcripts...
I'm not sure how you can call ChatGPT "ostensibly my own computer" when it's primarily a website.
And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by the message platform) is unambiguously possible for ChatGPT. It's just utterly pointless when user 2 happens to also be the message platform.
If you message support for $chat_platform (if there is such a thing), do you expect them to be unable to read the messages?
It's still a disingenuous use of the term. And, if TFA is anything like multiple other providers, it's going to be "oh, the video is E2EE. But the 5fps 'non-sensitive' 512×512px preview isn't."
> it's primarily a website … unambiguously possible [sic] for ChatGPT … happens to also be the message platform
I assume you mean impossible, and in either case that's not quite accurate. The "end" is the specific AI model you wish to communicate with, not the platform. You're suggesting they are one and the same, but they are not, and Google proves that with their own secure LLM offering.
But I’m 100% with you on it being a disingenuous use.
No, no typo. The problem with ChatGPT is that the third party that would be attesting that's how it works is just the second party.
I'm not familiar with the referenced Google secure LLM, but offhand, if it's TEE-based, Google would be publishing auditable/signed images and Intel/AMD would be the third party attesting to what's actually running. TEEs are way out of my expertise though, and there are a ton of places and ways for it to break down.
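For what it's worth, a rough sketch of the trust chain a TEE is supposed to give you (Python, with PyNaCl signatures standing in for the real SGX/SEV attestation formats, which are far more involved):

    # Hypothetical sketch: the CPU vendor's attestation chain signs a
    # measurement of what's actually loaded; the client checks it against
    # the auditable image the provider published. PyNaCl stands in for
    # the real attestation formats here.
    import hashlib
    from nacl.signing import SigningKey

    vendor_key = SigningKey.generate()   # stands in for the Intel/AMD root
    published_image = b"...auditable signed image bytes..."

    # Inside the TEE: hardware measures the loaded image, the vendor's
    # chain signs that measurement (the "quote").
    measurement = hashlib.sha256(published_image).digest()
    quote = vendor_key.sign(measurement)

    # On the client: verify the vendor's signature, then compare the
    # attested measurement to the published image.
    attested = vendor_key.verify_key.verify(quote)
    assert attested == hashlib.sha256(published_image).digest()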
> And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by the message platform) is unambiguously possible for ChatGPT. It's just utterly pointless when user 2 happens to also be the message platform.
This is basically the whole thrust of Apple's Private Cloud Compute architecture. It is possible to build a system that prevents user 2 from reading the chats, but it's not clear that most companies want to work within those restrictions.
> If you message support for $chat_platform (if there is such a thing), do you expect them to be unable to read the messages?
If they marketed it as end-to-end encrypted? 100%, unambiguously, yes. And certainly not without me, as the user, granting them access permissions to do so.
If you have an E2EE chat with McDonald's, you shouldn't be surprised that McDonald's employees can read the messages you've sent to that account. When you message accounts controlled by a business, the business can see those messages.
This is why I specified "mental model". Interacting with ChatGPT is not marketed as "sending messages to OpenAI the company". It is implied to be "sending messages to my personal AI assistant".
Yeah, of course, technically that is true. Still, when talking about e2ee in any context, it implies to the non-technical user: the company providing the service cannot read what I am writing.
That's not given in any of those examples. In the case of ChatGPT and this toilet sensor, e2ee is almost equivalent to 'we use https'. But nowadays everybody uses https, so it does not sound as good in marketing.
Yes, but National Security Letters make that pointless. You can't encrypt away a legal obligation. The point of e2ee is that a provider can say to the feds "this is all the information we have", and removing the e2ee would be noticed by security researchers.
If the provider controls one of the ends then the feds can instruct them to tap that end and nobody is any the wiser.
The best you can do is either to run the inference in an outside jurisdiction (hard for large scale AI), or to attempt a warrant canary.
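A warrant canary is just a signed, dated statement you keep publishing until you can't (hedged sketch with PyNaCl; the statement text is illustrative):

    # Hedged sketch of a warrant canary: publish a freshly signed, dated
    # statement on a schedule; if it stops appearing, readers draw their
    # own conclusions. Statement text is illustrative.
    from datetime import date
    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()   # in practice a long-lived key
    statement = f"As of {date.today()}, we have received 0 NSLs.".encode()
    canary = signing_key.sign(statement)

    # Anyone holding the public key can check the canary is authentic.
    signing_key.verify_key.verify(canary)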
> Yes, but National Security Letters make that pointless
It seems ridiculous to use the term "national security letter" as opposed to "subpoena" in this context, there is no relevant distinction between the two when it comes to this subject. A pointless distraction.
> You can't encrypt away a legal obligation.
Of course you can't. But a subpoena (or an NSL, which is a subpoena) can only mandate you to provide information which is within your control. It cannot mandate you to procure information which is not within your control.
If you implement e2ee, customer chats are not within your control. There is no way to breach that with a subpoena. A subpoena cannot force you to implement a backdoor or disable e2ee.
I believe we are in agreement. If you are a communication platform that implements e2ee, then you provide the guarantee to users, backed by security researchers, that the government can't read their communications by subpoenaing the platform.
The problem with AI platforms is that they are also a party to the communication, so they can indeed be forced to reveal chats. Calling that e2ee would render the term without distinction.
They once shipped a backdoor in their macOS app. It was noticed and called out and they refused to remove it. It took Apple blacklisting it for Zoom to finally take action.
I would say Telegram communicates its level of encryption pretty well ("client-to-client" and "client-to-server" is a good way to avoid the ambiguity of e2e).
The problem is that you have to trust that they'll stay that way, and we have no way of proving that the app running on your phone was built from the source they publish.
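Reproducible builds would close that gap: rebuild the app from the published source and compare digests with the binary that actually shipped (sketch; the paths are placeholders):

    # Sketch of the check reproducible builds would enable: hash the
    # shipped binary and a locally rebuilt one and compare. Paths are
    # placeholders.
    import hashlib, sys

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    shipped, rebuilt = sys.argv[1], sys.argv[2]
    print("match" if digest(shipped) == digest(rebuilt) else "MISMATCH")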