
uBlock seemed to have handled it for me.

Not on mobile.

Firefox on mobile can install uBlock.

On Firefox mobile with uBlock here. Experienced nothing intrusive. It works, apparently. :)

There was a discussion here on HN about OpenAI and its privacy. Same confusion about e2ee: users thinking e2ee is possible when you chat with an AI agent.

https://news.ycombinator.com/item?id=45908891


>Users thinking e2ee is possible when you chat with an ai agent.

It shouldn't be any harder than e2ee chatting with any other user. It's just that instead of the other end typing their messages on a keyboard, they use a language model to produce them. Of course, as with any other e2ee solution, the party you are talking to also has access to your messages; that's the whole point of being able to talk to them.
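To make that concrete, here is a toy sketch with PyNaCl (my own illustration, nothing to do with how OpenAI actually runs things): the cryptography is identical whether the peer is a human or a bot, the operator of the bot simply holds that end's private key.

    from nacl.public import PrivateKey, Box

    user_sk = PrivateKey.generate()
    bot_sk = PrivateKey.generate()   # held by whoever operates the model

    # the user encrypts to the bot's public key, end to end
    ciphertext = Box(user_sk, bot_sk.public_key).encrypt(b"hello, model")

    # the "other end" (the bot, i.e. its operator) decrypts with its own key
    plaintext = Box(bot_sk, user_sk.public_key).decrypt(ciphertext)

So "e2ee with an AI agent" is trivially achievable; it just hides nothing from whoever runs the agent.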


I do not think this matches anyone's mental model of what "end-to-end encrypted" should look like for a conversation between me and what is ostensibly my own computer.

If you promise end-to-end encryption, and later it turns out your employees have been reading my chat transcripts...


I'm not sure how you can call ChatGPT "ostensibly my own computer" when it's primarily a website.

And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by message platform)... Is unambiguously possible for chatGPT. It's just utterly pointless when user2 happens to also be the message platform.

If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?

It's still a disingenuous use of the term. And, if TFA is anything like multiple other providers, it's going to be "oh, the video is E2EE. But the 5fps 'non-sensitive' 512*512px preview isn't."


> it's primarily a website … unambiguously possible[sic] for chatGPT … happens to also be the message platform

I assume you mean impossible, and in either case that's not quite accurate. The "end" is a specific AI model you wish to communicate with, not the platform. You're suggesting they are one and the same, but they are not, and Google proves that with their own secure LLM offering.

But I’m 100% with you on it being a disingenuous use.


No, no typo. The problem with ChatGPT is that the third party that would be attesting that's how it works is just the second party.

I'm not familiar with the referenced Google secure LLM, but offhand, if it's TEE-based, Google would be publishing auditable/signed images and Intel/AMD would be the third party attesting that's what's actually running. TEEs are way out of my expertise though, and there are a ton of places and ways for it to break down.
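Very roughly, and purely as a sketch (the helper names here are made up for illustration, not any vendor's real SDK), the attestation flow would give the client something like:

    # hypothetical pseudo-Python, not a real attestation API
    def client_can_trust_server(server):
        quote = server.fetch_attestation_quote()      # signed by the CPU vendor's key
        if not verify_vendor_signature(quote):        # Intel/AMD act as the third party
            return False
        published = fetch_signed_image_measurement()  # the auditable image the operator publishes
        return quote.measurement == published         # running code matches the audited code

The value is entirely in that third-party signature; without it you are back to taking the operator's word for it.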


> And honestly, E2EE's strict definition (messages between user 1 and user 2 cannot be decrypted by message platform)... Is unambiguously possible for chatGPT. It's just utterly pointless when user2 happens to also be the message platform.

This is basically the whole thrust of Apple's Private Cloud Compute architecture. It is possible to build a system that prevents user2 from reading the chats, but it's not clear that most companies want to work within those restrictions.

> If you message support for $chat_platform (if there is such a thing) do you expect them to be unable to read the messages?

If they marketed it as end-to-end encrypted? 100%, unambiguously, yes. And certainly not without me, as the user, granting them access permission to do so.


Not all messages. For the messages you sent to support, the answer is implicitly "of course they can read them."

If you have an E2EE chat with McDonald's, you shouldn't be surprised that McDonald's employees can read the messages you've sent to that account. When you message accounts controlled by a business, the business can see those messages.

This is why I specified "mental model". Interacting with ChatGPT is not marketed as "sending messages to OpenAI the company". It is implied to be "sending messages to my personal AI assistant".

Yeah, of course, technically that is true. Still, when talking about e2ee in any context, it implies to the non-technical user: the company providing the service cannot read what I am writing.

That's not a given in any of those examples. In the case of ChatGPT and this toilet sensor, e2ee is almost equivalent to 'we use https'. But nowadays everybody uses https, so it does not sound as good in marketing.


e2ee implies that there is a third party who can't read the messages. If you are chatting with an AI, who is the third party?

>who is the third party?

My ISP who delivers these chat messages.


Ideally, both OpenAI employees and the 3-letter agencies?

Yes but National Security Letters make that pointless. You can't encrypt away a legal obligation. The point of e2ee is that a provider can say to the feds "this is all the information we have", and removing the e2ee would be noticed by security researchers.

If the provider controls one of the ends then the feds can instruct them to tap that end and nobody is any the wiser.

The best you can do is either to run the inference in an outside jurisdiction (hard for large scale AI), or to attempt a warrant canary.


> Yes but National Security Letters make that pointless

It seems ridiculous to use the term "national security letter" as opposed to "subpoena" in this context; there is no relevant distinction between the two when it comes to this subject. A pointless distraction.

> You can't encrypt away a legal obligation.

Of course you can't. But a subpoena (or an NSL, which is a subpoena) can only mandate you to provide information which you have within your control. It cannot mandate you to procure information which you do not have within your control.

If you implement e2ee, customer chats are not within your control. There is no way to breach that with a subpoena. A subpoena cannot force you to implement a backdoor or disable e2ee.


I believe we are in agreement. If you are a communication platform that implements e2ee then you provide the guarantee to users, backed by security researchers, that the government can't read their communications by getting a subpoena from the communication platform.

The problem with AI platforms is that they are also a party to the communication, therefore they can indeed be forced to reveal chats, and therefore it's not e2ee, because defining e2ee to cover that case would render the term meaningless.


It's possible to produce a technical solution to this using tools like SGX.

I saw a YouTube video claim similar levels of privacy are possible using trusted computing.

I knew I'd already seen this. Seemed like a great tool then as well as now. Will definitely deploy it on my personal file server; just haven't gotten around to it.


You cannot compare these examples. There is currently no way to encrypt the user message and have the model on the server read/process the message without it being decrypted first.

Mullvad and E2EE messengers do not need to process the contents of the message on their servers. All they do is pass it on to another computer. It could be scrambled binary for all they care. But any AI company _has_ to read the content of the message by the very definition of its service.


It's a solved problem. Lumo.


Lumo never promises encryption while processing a conversation on their servers. Chats HAVE to be decrypted at some point on the server, or sent already decrypted by the client, even when they are stored encrypted.

Read the marketing carefully and you will notice that there is no word about encrypted processing, just storage - and of course that's a solved problem, because it was solved decades ago.

The agent needs the data decrypted, at least for the moment; I know of no model that can process encrypted data. So as long as the model runs on a server, whoever manages that server has access to your messages while they are being processed.

EDIT: I even found an article where they acknowledge this [0]. Even though there seem to exist models/techniques that can produce output from encrypted messages using 'Homomorphic Encryption' [1] (toy example below), it is not practical, as it would take days to produce an answer and consume huge amounts of processing power.

[0] https://proton.me/blog/lumo-security-model

[1] https://en.wikipedia.org/wiki/Homomorphic_encryption
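To give a flavour of what homomorphic encryption buys you, here is a toy sketch with the python-paillier library (my illustration, not anything Lumo uses; Paillier only supports addition and scalar multiplication, nowhere near enough for an LLM): the server computes on ciphertexts it can never read.

    from phe import paillier  # pip install phe

    public_key, private_key = paillier.generate_paillier_keypair()

    # client side: encrypt the inputs
    a = public_key.encrypt(3)
    b = public_key.encrypt(4)

    # server side: add the ciphertexts without ever seeing 3 or 4
    encrypted_sum = a + b

    # back on the client: only the private key recovers the result
    assert private_key.decrypt(encrypted_sum) == 7

Scaling this idea up to the billions of operations in an LLM forward pass is exactly the part that currently takes days and enormous amounts of compute.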


While I can understand that it's frustrating, kernel-level anti-cheat is an abomination and should in no way be supported. It is a security flaw in itself!

Read this: https://gist.github.com/stdNullPtr/2998eacb71ae925515360410a...


It's also unfortunately impossible to have a good competitive multi-player online experience without kernel-level anti-cheat. It's simply too easy to cheat at many of these games in the absence of strict control measures, and even a single cheater can ruin a game session for every other gamer.

No one reached directly for kernel-level anti-cheat. It was the result of an escalation of the sophistication of cheating solutions.


That is completely irrelevant. Users want the game.


While you're at it, you might as well downgrade to Win7. No, jokes aside, if all the software you want runs on it, it is fine. Just no security updates etc.

I would suggest just switching to Linux and using a VM for things that NEED to be Windows. Games that use kernel-level anti-cheat won't run, but tbh that's nothing I would suggest installing anyway.


> VM for things that NEED to be windows.

The big two that spring to mind are online games and Adobe software. I don't think a VM can usually meet the performance needed for either.

I do wish more artists would take a chance on open-source software, but most of the ones I know are still insistent that nothing can ever come close to Adobe. But that's a rant for another time.


Games run pretty great on Linux, but if you do want a VM, passing through a graphics card to that VM via vfio provides 95%+ of native performance.

Virtual reality headsets with dual 4K screens running at 75Hz+ perform well on a Windows VM done that way. A normal flatscreen game is going to be just fine.


I stayed on Win7 until December 2023, when there was some exploit that could be triggered just by viewing an image, so merely browsing the web would've made it vulnerable (in the WebP format, I believe).

Although it seems there are people still Frankensteining Win7, and even patching DLLs to make the newest browsers/apps still run on it.

Famously, MS Teams was really screwed up, but I had to use it for work.


Browsers are the problem.

Firefox has outright refused to install on older Windows versions for a couple of years now. A very lazy and negligent move on Mozilla's part.


If you were as resource-constrained as Mozilla is, you'd drop support for unsupported platforms anyway.


According to this, they still support Windows 7 through the extended support channel: https://support.mozilla.org/en-US/kb/firefox-users-windows-7...

Chrome dropped support two years ago or so.


"Firefox version 115 is the last supported Firefox version for users of Windows 7, Windows 8 and Windows 8.1."

That's a version from over 2 years ago.


Being an ESR version doesn't mean it has gone 2 years without receiving changes.


The problem is not the lack of patches. The problem is websites refusing service based on the client's version from the user-agent, or breaking by using cutting-edge features without a polyfill.


All the Fairphone versions support e/OS/ as far as I know. I have the Fairphone 5 with the current e/OS/ version, completely un-Googled. But you also have the option to allow partial Google-fication in e/OS/ so you don't miss out on most of the features and paid apps you had.


While I agree with you, I feel like that was not the point the author was making.

It was more a warning that the combination of little-reviewed community plugins and a non-sandboxed macOS binary is a potential risk. And with that sentiment I can also agree.


That was my take too. I am less concerned with an app simply being closed source, and much more concerned with closed source coupled with skipping review and the generally approved distribution models on the two platforms.



Not false, but they also partnered with every other launch provider under the sun.


Good to know; any source for that? I only did a quick web search and that was in the first results.

EDIT: Nvm, just now saw the sibling comment with the wiki article.


Do you know if there is an English version of the book?


If the author agrees, I could try to learn Serbo-Croatian (I'm Polish, good with languages) and translate it to English. I'm kind of a burnt-out Linux geek who cannot look at computers much anymore. Translating a book would be fun, but I would need some sponsoring. Amadeusz at [the old name of icloud].com


You may want to find an email provider that has a better spam filter if you want people to actually contact you.


The book is licensed under CC BY-SA, so you should be OK with translating it as long as you follow the licence terms.

You could try doing a first pass in an AI model to translate and then proofread it, for a quicker translation. Good luck; it would be fun and potentially impactful ;)


Sadly, to my knowledge there is no English version of it. I too am wishing for a future English version so that I can read it. But I guess it would be a lot of work to translate it into English.

