> Y'know what else is not falsifiable? "That AI doesn't understand what it's doing".
That's why people are saying we need to put more work into defining this term, which is the whole point of this conversation.
> seriously considered quacks like a duck argument has been won by the AIs and no-one cares.
And have you ever considered that it's because people are refining their definitions?
Often when people find that their initial beliefs are wrong or not precise enough, they update those beliefs. You seem to be calling this a flaw; it isn't. The definitions aren't changing dramatically, they're being refined. There's a big difference.
> My first post here is me explaining that I have a non-standard definition of what 'understanding' means, which helps me avoid an apparently thorny issue. I'm literally here offering a refinement of a definition.
People are disagreeing with your refinement. The toaster example is exactly this.
Maybe what was interpreted differs from what you meant to convey, but my interpretation certainly wasn't unique. I'm willing to update my responses if you're willing to clarify, but we'll need to work together on that, because just because the words make perfect sense to you doesn't mean they do to others.
I'd even argue that this is part of why understanding matters, or at least part of what we call understanding.