> Debug builds should by default give .unwrap() and .expect() a tiny chance, like 0.1%, to trigger anyway, even when the Option is Some (opt out via configuration).

I'm trying to understand what you're proposing. Are you saying that normal debug builds should have artificial failures in them, or that there should be a special mode that tests these artificial failures?

Because some of these failures could cause errors to be shown to the user, which could be really confusing when testing a debug build.


I guess they are advocating for exhaustive branch testing.
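
Or maybe fault injection on the happy path: in a debug build, unwrap()/expect() would occasionally panic even when the value is Some, so the surrounding error handling actually gets exercised. A rough sketch of what that could look like (faulty_unwrap, the rand crate dependency, and the hard-coded 0.1% are my own illustration, not an existing API):

    // Sketch only: behaves like Option::unwrap in release builds, but in
    // debug builds panics ~0.1% of the time even on Some.
    // Assumes `rand` is listed in Cargo.toml; `faulty_unwrap` is a made-up name.
    use rand::Rng;

    fn faulty_unwrap<T>(opt: Option<T>) -> T {
        // Inject faults only when debug assertions are enabled.
        if cfg!(debug_assertions) && rand::thread_rng().gen_bool(0.001) {
            panic!("injected fault: pretending unwrap() hit None");
        }
        opt.expect("faulty_unwrap called on None")
    }

    fn main() {
        // Almost always prints 42; very rarely panics in a debug build.
        let n = faulty_unwrap(Some(42));
        println!("{n}");
    }

In practice you'd presumably want this behind a feature flag or environment variable rather than on for every debug build, which I take to be the "opt out via configuration" part of the original suggestion.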


I think what you say is true, and I think that this is exactly true for humans as well. There is no known way to completely eliminate unintentional bullshit coming from a human’s mouth. We have many techniques for reducing it, including critical thinking, but we are all susceptible to it and I imagine we do it many times a day without too much concern.

We need to make these models much, much better, but it’s going to be quite difficult to get the rate down even to human levels. And the BS will always be there with us. I suppose BS is the natural side effect of any complex system, artificial or biological, that tries to navigate the problem space of reality and speak about it. These systems, sometimes called “minds”, are going to produce things that sound right but just are not true.


It's a feeling I can't escape: that by trying to build thinking machines, we glimpse more and more of how the human mind works, and why it works the way it does - imperfections and all.

"Critical thinking" and "scientific method" feel quite similar to the "let's think step by step" prompt for the early LLMs. More elaborate directions, compensating for the more subtle flaws of a more capable mind.


> But another cause of hallucinations is limited self-awareness of modern LLMs… Humans have some awareness of the limits of their knowledge

Until you said that, I didn’t realize just how much humans “hallucinate” in just the same ways that AI does. I have a friend who is fluent in Spanish, a native speaker, but got a pretty weak grammar education when he was in high school. He also got no formal education at all in critical thinking. So this guy is really, really fluent in his native language, but can often have a very difficult time explaining why he uses the grammar he uses. I think the whole world is realizing how little our brains can correctly explain and identify the grammar we use flawlessly.

He helps me improve my Spanish a lot, and he can correct me with 100% accuracy, of course. But I’ve noticed on many occasions, including this week, that when I ask a question about why he said something one way or another in Spanish, he will just make up some grammar rule that doesn’t actually exist.

He said something like “you say it this way when you really know the person and you say it the other way when it’s more formal”, but I think really it was just a slangy way to mis-stress something and it didn’t have to do with familiar/formal at all. I’ve learned not to challenge him on any of these grammar rules he makes up, because he will dig his heels in; I just ignore it, since he won’t remember the made-up rule in a week anyway.

This really feels like a very tight analogy with what my LLM does to me every day, except that when I challenge the LLM it will profusely apologize and declare itself incorrect even if it had been correct after all. Maybe LLMs are a little bit too humble.

I imagine this is a very natural tendency in humans, and I imagine I do it much more than I’m aware of. So how do humans use self-awareness to reduce the odds of this happening?

I think we mostly get trained in higher education not to trust the first thought that comes into our head, even if it feels self-consistent and correct. We eventually learn to say “I don’t know” even about something that we are very, very good at.


Spanish in particular has more connotations per word than English. It's not even the grammar or spelling; those have rules, and that's that. But choosing appropriate words is more a matter of every word having its right place, time, and context. Some close examples would be the N-word or the R-word in English, as they are steeped in meanings far beyond the literal.


> He said something like “you say it this way when you really know the person and you say it the other way when it’s more formal”, but I think really it was just a slangy way to mis-stress something and it didn’t have to do with familiar/formal at all.

There’s such a thing in Spanish and in French: the formality of the setting is reflected in the language. French even distinguishes between three different levels of vocabulary (one for very informal settings with close friends, one for business and daily interactions, and one for very formal settings). It’s all cultural.


> Seeing which parts of a model (they aren't neurons)…

I thought models were composed of neural network layers, among other things. Are these data structures called something different?


That point may not have been relevant for me to include.

I was getting at the idea that a neuron is a very specific feature of a biological brain; regardless of what AI researchers may call it, their hardware isn't made of neurons.


1. They are neurons, whether you like it or not. A binary tree may not have squirrels living in it, but it's still a tree, even though the word "tree" here is defined differently than in biology. Or are you going to say a binary tree is not a tree?

2. You are about 5 years behind in terms of the research. Look into hierarchical feature representation and how MLP neurons work (or even older CNNs, RNNs, etc.). And I'm deliberately using the word "neuron" instead of "feature" here because, while I know "feature" is more correct in general, there are definitely small toy models where you can pinpoint an individual neuron that represents a feature such as a face.


What were you getting at with the MLP example? MLPs do a great job with perception abilities, and I get that they use the term neuron frequently. I disagree with the use of the name there, that's all; similarly, I disagree that LLMs are AI, but here we are.

Using the term neuron there and meaning it literally is like calling an airplane a bird. I get that the colloquial use exists, but no one thinks they are literal birds.


Do you also disagree with the use of the name “tree” in a computer science class?

Again, nobody thinks trees in computer science contain squirrels, nobody thinks airplanes are birds, and nobody thinks a neuron in an ML model contains axons and dendrites. This is a weird hill to die on.

Are you gonna complain that the word “photograph” means “light writing” but in reality nobody is writing anything, so the word is wrong?


I would disagree with anyone that wants to say they are the same as a natural tree, sure.

I don't believe the term photograph was repurposed when cameras were invented, so that example doesn't fit.

More importantly, I argued that neuron has a very specific biological meaning, and it's a misuse to apply the term to what is ultimately running on silicon.

Your claim was that they are neurons, period. You didn't expand on that further, which reads as a pretty literal use of the term to me. We're discussing online in text, so that reading of your comment could be completely wrong; that's fine. But I stand by my point that what is inside an LLM or a GPU is not a neuron.



What's your point with that link? I'm well aware that people use the term neuron in AI research, and I acknowledged that a few comments up. I disagree with the use of the term; I'm not arguing that the term isn't used.

