> Turing computability is tangential to his claim, as LLMs are obviously not carrying out the breadth of all computable concepts
They don't need to. To be Turing complete, a system including an LLM needs to be able to simulate a 2-state 3-symbol Turing machine (or a 3-state 2-symbol one). Any LLM with a loop can satisfy that.
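To make the claim concrete, here's a minimal sketch: an outer loop plus an unbounded tape turns any fixed transition function into a Turing machine. The hardcoded rule table below stands in for the LLM, and the rules are the 2-state 2-symbol busy beaver rather than a universal 2-state 3-symbol machine, chosen only because it halts quickly.

```python
# Sketch: a fixed transition function driven by an outer loop over an
# unbounded tape IS a Turing machine. The dict stands in for the LLM.
from collections import defaultdict

RULES = {  # (state, symbol) -> (write, move, next_state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),  # "H" = halt
}

def run(rules, state="A", max_steps=1000):
    tape, pos, steps = defaultdict(int), 0, 0  # blank tape is all zeros
    while state != "H" and steps < max_steps:
        tape[pos], move, state = rules[(state, tape[pos])]
        pos += move
        steps += 1
    return tape, steps

tape, steps = run(RULES)
print(sum(tape.values()), steps)  # the 2-state busy beaver: 4 ones in 6 steps
```

The loop and the tape live outside the transition function, which is exactly the "system including an LLM" framing above.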
If you think Turing computability is tangential to this claim, you don't understand the implications of Turing computability.
> His claim can be trivially proven by considering the history of humanity.
Then show me a single example where humans demonstrably exceed the Turing computable.
We don't even know any way for that to be possible.
"To be Turing complete a system including an LLM need to be able to simulate a 2-state 3-symbol Turing machine (or the inverse)."
And infinite memory. You forgot the infinite memory. And LLMs are extremely inefficient with memory. I'm not talking about the memory needed in the GPU to store the weights, but rather the ability of an LLM to remember whatever it's working on at the moment.
What could be stored as a couple of bits in a temporary variable is usually output as "Step 3: In the previous step we frobbed the junxer and got junx, and if you do junx + flibbity you get floopity"
And remember that this takes up a bunch of tokens. Without doing this (whether or not the LLM provider lets you see it, while still billing you for it), an LLM can't possibly execute an algorithm that requires iteration in the general case. For a more rigorous example, see Apple's paper where an LLM failed to solve a Tower of Hanoi problem even when it had the exact algorithm in context (apart from small instances of the problem whose solutions appear countless times).
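For scale: the "exact algorithm" in question is tiny, and what blows up is the number of steps it generates. A sketch (the standard recursive Tower of Hanoi, not Apple's exact setup):

```python
# Standard recursive Tower of Hanoi: the whole algorithm is a few lines,
# but executing it for n disks produces 2**n - 1 moves, each of which an
# LLM would have to carry through its context as tokens.
def hanoi(n, src, aux, dst, moves):
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # move n-1 disks back on top

moves = []
hanoi(10, "A", "B", "C", moves)
print(len(moves))  # 2**10 - 1 = 1023 moves for just 10 disks
```

A couple of integers of state per step in a real program becomes a paragraph of prose per step in a chain of thought.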
This is akin to claiming that a tic-tac-toe game is Turing complete since, after all, we could simply modify it so that it's no longer a tic-tac-toe game. It's not exactly a clever argument.
And again, there are endless things that seem to reasonably defy Turing computability unless you assume your own conclusion. Going from nothing, not even language, to richly communicating and inventing things with no logical basis for doing so is difficult to even conceive of as a computable process unless, again, you simply assume that it must be computable. For a more common example that rapidly enters the domain of philosophy, there is the nature of consciousness.
It's impossible to prove that such a thing is Turing computable because you can't even prove consciousness exists. The only way I know it exists is that I'm most certainly conscious, and I assume you are too, but you can never prove that to me, any more than I could ever prove I'm conscious to you. And so now we enter the domain of trying to computationally model something you can't even prove exists; it's all a complete nonstarter.
-----
I'd also add here that I think the current consensus among those in AI is implicit agreement with this issue. If we genuinely wanted AGI it would make vastly more sense to start from as little as possible because it'd ostensibly reduce computational and other requirements by many orders of magnitude, and we could likely also help create a more controllable and less biased model by starting from a bare minimum of first principles. And there's potentially trillions of dollars for anybody that could achieve this. Instead, we get everything dumped into token prediction algorithms which are inherently limited in potential.
> This is akin to claiming that a tic-tac-toe game is turing complete since after all we could simply just modify it to make it not a tic tac toe game. It's not exactly a clever argument.
No, it is nowhere remotely like that. It is claiming that a machine capable of simulating a universal Turing machine is in fact capable of running any other Turing machine. In other words, it is pointing out the principle of Turing equivalence.
> And again there are endless things that seem to reasonably defy turing computability
Show us one. We have no evidence of any single one.
> It's impossible to prove that such is Turing computable because you can't even prove consciousness exists.
Unless you can show that humans exceed the Turing computable, "consciousness", however you define it, is either possible purely with a Turing complete system or cannot affect the outputs of such a system. In either case this argument is irrelevant unless you can show evidence that we exceed the Turing computable.
> I'd also add here that I think the current consensus among those in AI is implicit agreement with this issue. If we genuinely wanted AGI it would make vastly more sense to start from as little as possible because it'd ostensibly reduce computational and other requirements by many orders of magnitude, and we could likely also help create a more controllable and less biased model by starting from a bare minimum of first principles. And there's potentially trillions of dollars for anybody that could achieve this. Instead, we get everything dumped into token prediction algorithms which are inherently limited in potential.
This is fundamentally failing to engage with the argument. There is nothing in the argument that tells us anything about the complexity of a solution to AGI.
LLMs are not capable of simulating Turing machines: their output is inherently and inescapably probabilistic. You would need to fundamentally rewrite one to make this possible, at which point it is no longer an LLM.
And as I stated, you are assuming your own conclusion. You believe that nothing is incomputable, and you are building that belief into your argument as an assumption. It's not on me to prove your assumption wrong; it's on you to prove that it's correct, since proving a negative is impossible. E.g., I'm going to assume that there is an invisible, massless green goblin on your shoulder named Kyzirgurankl. Prove me wrong. Can you give me even the slightest bit of evidence against it? Of course you cannot, yet absence of evidence is not evidence of absence, so the burden of my claim rests on me.
And so now feel free to prove that consciousness is computable, or even that humanity's successes could be replicated from a comparable baseline. Without that proof you must understand that you're not making some falsifiable claim of fact, but simply appealing to your own personal ideology or philosophy, which is of course completely fine (and even a good thing), but also a completely subjective opinion.
After having read your comment, I feel I should have left my comment under this thread. I will just refer to it instead: https://news.ycombinator.com/item?id=46003870. This was my reply to your parent. I agree with you.
> LLMs are not capable of simulating turing machines - their output is inherently and inescapably probabilistic.
This is fundamentally not true. Inference code written to be numerically stable and temperature set to 0 is all you need for an LLM to be entirely deterministic.
> And as I stated, you are assuming your own conclusion to debate the issue. You believe that nothing is incomputable, and are tying that assumption into your argument as an assumption.
This is categorically false as well. Please do not invent a position for me that I have at no point claimed. I believe plenty of things are incomputable; that is provably the case. What I have repeatedly said is that we have no evidence 1) that there are computable functions that exceed the Turing computable, or 2) that the brain is capable of computing functions that exceed the Turing computable.
If you have evidence of either of those two, please do feel free to provide it; it would be earth-shattering news. It'd revolutionise physics, as it'd involve unknown interactions, and it'd revolutionise maths and computer science by forcing us to throw out whole areas of the theory of computation.
> It's not on me to prove your assumption is wrong, it's on you to prove that it's correct - proving a negative is impossible.
That the Turing computable set of functions is the totality of computable functions is not a claim I've come up with.
If you want to make the extraordinary claim that there are computable functions outside that set, despite no extant evidence, then, since you've invoked a weird version of Russell's teapot, that requires extraordinary proof.
And it is not impossible: A single example of a computable function outside the Turing computable would falsify the underlying claim. A single example of humans being able to compute such a function would falsify the claim that Turing equivalence has relevance here.
I've been very careful throughout to make clear that my arguments hinge on humans being unable to exceed the Turing computable.
I don't believe we should talk in absolutes when we can't prove it, hence my challenge to the people here who are so absolutely certain about the limitations of LLMs to show just a single example of humans exceeding the Turing computable.
Since you are so certain, surely there lies something behind that certainty other than blind faith?
> And so now feel free to prove that consciousness is computable
At no point have I made claims about "consciousness". Before I'd do that, you'd need to define in an objective way what you mean. It is an entirely separate question from the ones I've addressed.
> Without that proof you must understand that you're not making some falsifiable claim of fact
As noted, my claims are falsifiable: Show a single example of a function that exceeds the Turing computable, that humans can compute.
Setting the temperature to 0 doesn't make an LLM non-probabilistic. Once again, LLMs are inherently probabilistic. All setting the temperature to 0 does is make it always choose the highest-probability token instead of using weighted randomization. You'll still get endless hallucinations and the same inherent limitations, including the inability to reliably simulate a Turing machine.
As for the rest of your post: consciously or not, you are again saying 'give me a computable function that isn't a computable function.' I obviously agree that the idea of trying to 'compute' consciousness is essentially a nonstarter. That's precisely the point.