I don't think we can ever know that we are generally intelligent. We can be unsure, or we can meet something else which possesses a type of intelligence that we don't, and then we'll know that our intelligence is specific and not general.
So to make predictions about general intelligence is just crazy.
And yeah yeah I know that OpenAI defines it as the ability to do all economically relevant tasks, but that's an awful definition. Whoever came up with that one has had their imagination damaged by greed.
My point was that all intelligence is based on an individual's experiences, therefore an individual's intelligence is specific to those experiences.
Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.
So if you encounter an unknown intelligence, like I dunno some kind of extra-dimensional pen pal with a wildly different biology and environment from our own... would you be open to either of these possibilities?
- despite our differences, we have the same kind of intelligence
- our intelligences intersect, but there are capacities each has that the other doesn't
It seems like for either to be true there would have to be some common ground into which we could both generalize independently of our circumstances. Mathematics is often thought to be such a place, for instance; there's plenty of sci-fi about beaming prime numbers into space as an attempt to leverage exactly that common ground. Are you saying there are no such places? That SETI is hopeless?
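To make the prime-beacon idea concrete, here's a minimal sketch (Python is my choice; nothing below comes from the thread) of the signal those stories describe. The appeal of primes is that the pattern doesn't depend on human senses or language, only on counting:

```python
# A toy version of the prime-number beacon from sci-fi SETI stories:
# pulse groups whose sizes run through the primes (2, 3, 5, 7, ...),
# a pattern no known natural process produces but any arithmetic-capable
# mind could in principle spot.

def primes(n):
    """Return the first n primes by trial division against earlier primes."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def beacon(n=8):
    """Render each prime as a burst of pulses, with silence between bursts."""
    return "   ".join("." * p for p in primes(n))

if __name__ == "__main__":
    # Prints bursts of 2, 3, 5, 7, ... pulses separated by gaps.
    print(beacon())
```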
> Even when we "generalize" our intelligence, we can only extend it within the realm of human senses & concepts, so it's still intelligence specific to human concerns.
...then we might fail to recognize them as intelligent when we meet them. Same goes for emergent artificial doohickeys. A theory that allows for generalization might never find an example of it, but it's still better than a theory that doesn't, because the second sort surely won't.
When you make the term "general intelligence" so broad that it expands beyond the realm of human senses & concepts, statements about it become unfalsifiable, because you, a human, can't conceive of a way to test them.
Unfalsifiable statements are worthless because they can't be tested.
So, at the very least, there's no point in humans trying to theorize about intelligence so general that it extends beyond human comprehension.
Basically, in the context of universal intelligence, I'm an atheist & you're agnostic.
Or: just try, then do your best to find ways your definition fails. You should find it challenging, to put it mildly, to create a bulletproof definition if you're really looking for angles to attack each one you can think of. They'll end up being too broad, or too narrow. Or coming up short on defining when exactly a non-chair becomes a chair, and vice versa, or what the boundaries of a chair are (where chairness begins and ends).