In the same way, it's better that adults, rather than minors, bear the harms of smoking, drinking or gambling. It's still not desirable, but societies have settled on thresholds for when people have some capacity to take responsibility for their choices.
I'm not saying those thresholds are always right, or that they should definitely apply in this case, but it surely isn't an alien or non-obvious concept.
Others have suggested "bullshit". A bullshitter does not care (and may not know) whether what they say is truth or fiction. A bullshitter's goal is just to be listened to and seem convincing.
Linux's goal is only code compatibility - which makes complete sense given its libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?
Doublespeak? Speaking in legalese so that, although everything said is technically true, the words are still crafted to influence others...
So in our capitalist system, doublespeak exists because it makes money or avoids losing it: telling investors that this was "destroyed" would give them a negative association with Amazon itself, which would reduce the stock price.
Everything is done for the stock price. Everything. The world is so addicted to shareholder returns that we'll change our language because of it.
You can't "deceive" an LLM. It's not like lying to a person. It's not a person.
Using emotive, anthropomorphic language about a software tool is unhelpful, in this case at least. Better to think of it as a mentally disturbed minor who found a way to work around a tool's safety features.
We can debate whether the safety features are sufficient, whether it is possible to completely protect a user intent on harming themselves, whether the tool should be provided to children, etc.
I don't think deception requires the other side to be sentient. You can deceive a speed camera.
And while Merriam-Webster's definition is "the act of causing someone to accept as true or valid what is false or invalid", which might exclude LLMs, Oxford simply defines deception as "the act of hiding the truth, especially to get an advantage" - no requirement that the deceived be sentient.
Mayyybe, but since the comment I objected to also used an analogy of lying to a person, I felt it suggested some unwanted moral judgement (of a suicidal teenager).
I mean, for one thing, a commercial LLM exists as a product designed to make a profit. It can be improved, otherwise modified, restricted or legally terminated.
And "lying" to it is not morally equivalent to lying to a human.
> And "lying" to it is not morally equivalent to lying to a human.
I never claimed as much.
This is probably a problem of definitions: to you, "lying" seems to require that the entity being lied to be a moral subject.
I'd argue that it's enough for it to have some theory of mind (i.e. be capable of modeling "who knows/believes what" with at least some fidelity), and for the liar to intentionally obscure their true mental state from it.
I agree with you, and I would add that morals are not objective but rather subjective, which you alluded to by identifying a moral subject. Therefore, if you believe that lying is immoral, it doesn't matter whether you're lying to another person, to yourself, or to an inanimate object.
So for me, it's not about being reductionist, but about not anthropomorphizing or using words which may suggest an inappropriate ethical or moral dimension to interactions with a piece of software.
I'm the last to stand in the way of more precise terminology! Any ideas for "lying to a moral non-entity"? :)
“Lying” traditionally requires only belief capacity on the receiver’s side, not qualia/subjective experiences. In other words, it makes sense to talk about lying even to p-zombies.
I think it does make sense to attribute some belief capacity to (the entity role-played by) an advanced LLM.
I think just be specific - a suicidal sixteen year-old was able to discuss methods of killing himself with an LLM by prompting it to role-play a fictional scenario.
No need to say he "lied" and then use an analogy of him lying to a human being, as did the comment I originally objected to.
Not from the perspective of "harm to those lied to", no. But from the perspective of "what the liar can expect as a consequence".
I can lie to a McDonald's cashier about what food I want, or I can lie to a kiosk... but in either circumstance I'll wind up being served the food that I asked for and didn't want, won't I?
The whoosh is that they are describing the human operator, a "mentally disturbed minor", and not the LLM. The human has the agency and specifically bypassed the guardrails.
To treat the machine as a machine: it's like complaining that cars are dangerous because someone deliberately drove into a concrete wall. Misusing a product with the specific intent of causing yourself harm doesn't necessarily remove all liability from the manufacturer, but it radically changes the burden of responsibility.
Another is that this is a new and poorly understood (by the public at least) technology that giant corporations make available to minors. In ChatGPT's case, they require parental consent, although I have no idea how well they enforce that.
But I also don't think the manufacturer is solely responsible, and to be honest I'm not that interested in assigning blame, just keen that lessons are learned.
The historical conception of race doesn't translate simply to human genetics.
There is more genetic variation within what we might call a race than between races. And it's interesting to note that the genetic diversity of Eurasian populations is in large part contained within the much greater diversity of Africa. In some sense we're all Africans. On top of that there has been a good amount of mixing, both historically and in the present day.
To think in terms of "races" might lead one to hold a mental model of impermeable boundaries between populations that in many cases were never present, and certainly aren't today. Geneticists tend to use the word "population" instead, since it doesn't connote any unhelpful assumptions about uniform or fixed phenotypes within a well-defined species subgroup.
Of course there are certain obvious environmental adaptations that have been selected for in different geographies/climates, as well as random genetic drift between distant populations. Sometimes those differences might have medical relevance, and you can make statistical generalisations about the prevalence and distribution of genetic markers within any group you like. But for most medical and public policy applications it is likely most useful to focus on populations within an administrative area, and increasingly, individualised medicine.
Something similar happened to me with another theory's article. I knew it existed before the article said it did, but I only had a primary source to prove it. Since that's not a secondary source, I couldn't fix the article, so I put it on the talk page. Now the talk page has been wiped and the article is still wrong about the origin.
Happens frequently. Usually the Wikipedia article is written by the guy who wants to claim credit for an existing design or idea, so he zealously guards the page against the truth. It's just disappointing.