> In 10 to 20 years, it might significantly contribute to environmental solutions, clarifying complex issues or providing effective methods to mitigate climate change.
Most of the issues are known. Paths to solve them have been known for decades if not more.
What's missing is the political and economic will to put them into action ASAP (because it's getting late now).
We're already on the brink of a +1.5°C world. In less than a century we will most likely be at +4°C, by which point most transport and living infrastructure, and agriculture, will have failed.
Whatever AI helps discover (new materials, new energy sources, new ways to extract energy, or means of survival not only for the human species but for most of the species still alive)...
How is AI going to put these solutions in motion in ways that we can't today? How is AI's "authority" going to efficiently supersede assemblies or governments?
If Earth's biotope is basically cooked in 75 years (by which point even the datacenters the AIs run in won't be functional or safe), how is AI anything close to a best bet for humanity?
While I was formulating my reply I asked myself: if we knew AI would only yield minor incremental improvements, would it be worth enduring the social upheaval caused by job losses and other stresses? Possibly not. Technologies should serve humanity by enabling greater cultural development, reducing suffering, and allowing us to achieve what otherwise would be impossible. The current level of AI, while helpful, doesn't fully achieve that.
So which is it? AI is useless or AI is good enough to start replacing humans in jobs?
AI is very far from «useless», but Antirez is not talking about AI in general: here, "AI" means "what can replace a professional".
LLMs are far from "useless", but they are very problematic. They are "tentative".
AI must go on, and LLMs must be fixed (protocollar content, lucid world model, dynamic world model, rational understanding, thought validation, iterative refinement of the world model...).
Beware, AntiRez. Without a proper math background and a "human" math PhD at hand, any code you implement as algorithms can contain slight mistakes that throw all your implementations to /dev/null, because you have no way to be sure the output is correct.
Is stolen IP and shattered privacy our best bet for the future? Or our best bet for a dystopia?
We carry our pathologies untouched into a new era where they will be more impactful and more empowered by the AI feedback loop.
I'm disappointed when people sweep things under the carpet because greed is more important, and because, shamefully, it gets disguised as "growth" and "productivity".