
> Bro. Evolution is a random walk. That means most of the changes are random and arbitrary, based on whatever allows the squirrel to survive.

For the vast majority of evolutionary history, very similar forces have shaped us and squirrels. The mutations are random, but the selections are not.

If squirrels are a stretch for you, take the closest human relative: chimpanzees. There is a very reasonable hypothesis that their brains work very similarly to ours, far more similarly than ours to an LLM.

> So an LLM can't continuously learn? You realize that LLMs are deployed agentically all the time now, so they both continuously learn and follow goals?

That is not continuous learning. The network does not retrain through that process; everything lives in the agent's context. The agent has no intrinsic goals nor the ability to develop them. It merely samples based on its prior training and its current context. Biological intelligence, by contrast, retrains constantly.
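
As a concrete illustration (a toy sketch, not any real framework): in agentic use the only thing that changes from step to step is the context; the weights stay frozen, and the gradient step that would count as retraining never runs.

    # Toy sketch: frozen "weights" vs. an accumulating context. Hypothetical
    # ToyModel stands in for an LLM; no real library is implied.
    class ToyModel:
        def __init__(self):
            self.weights = [0.1, 0.2, 0.3]        # fixed at deployment time

        def generate(self, context):
            # output depends on frozen weights plus whatever is in the context
            return sum(self.weights) + len(context)

    model = ToyModel()
    context = []                                   # the only thing that "learns"
    for observation in ["tool output", "user reply", "file contents"]:
        context.append(observation)
        action = model.generate(context)           # forward pass only
    print(model.weights)                           # unchanged: [0.1, 0.2, 0.3]

    # Retraining would instead update the weights themselves, schematically:
    # model.weights = [w - 0.01 * g for w, g in zip(model.weights, gradients)]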

> The energy inefficiency is a byproduct of hardware. The theory of LLMs and machine learning is independent of the flawed silicon technology that is causing the energy inefficiencies.

There is no evidence to support that a transformer model's inefficiency is hardware based.

There is direct evidence to support that the inefficiency is influenced by the fact that LLM inference and training are both auto-regressive. Auto-regression maps to compute cycles, and compute cycles map to energy consumption. That's a problem with the algorithm, not the hardware.
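
A rough back-of-the-envelope version of that mapping, using assumed numbers (a 70B-parameter model and the commonly cited estimate of about 2 FLOPs per parameter per generated token):

    # Why auto-regression maps to compute: one full forward pass per token,
    # and the loop is serial because token t+1 depends on token t.
    params = 70e9                      # assumed model size
    flops_per_token = 2 * params       # common rule-of-thumb estimate
    tokens_generated = 1000            # a single long-ish reply

    total_flops = flops_per_token * tokens_generated
    print(f"{total_flops:.2e} FLOPs")  # ~1.4e14 FLOPs for this one reply

    # Those FLOPs have to be executed somewhere, so generated tokens map onto
    # compute cycles and compute cycles map onto energy, on any hardware.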

> The fact that training transformers on massive amounts of data produced this level of intelligence was a total surprise for all the experts

The level of intelligence produced is only impressive compared to the prior state of the art, and even then it is only impressive at modeling the narrow band of intelligence represented by encoded language (not all language) produced by humans. In most every other aspect of intelligence - notably continuous learning driven by intrinsic goals - LLMs fail.



>For the vast majority of evolutionary history, very similar forces have shaped us and squirrels. The mutations are random, but the selections are not.

Selection only filters for what survives. It doesn’t care how the system gets there. Evolution is blind to mechanism. A squirrel’s brain might work in a way that produces adaptive behavior, but that doesn’t mean its “understanding” of the world is like ours. We don’t even know what understanding is at a mechanistic level. Octopuses, birds, and humans all evolved under the same selective pressures for survival, yet ended up with completely different cognitive architectures. So to say a squirrel is “closer to us” than an LLM is an assumption built on vibes, not on data. We simply don’t know enough about either brains or models to make that kind of structural claim.

>The agent has no intrinsic goals nor ability to develop them.

That’s not accurate. Context itself is a form of learning. Every time an LLM runs, it integrates information, updates its internal state, and adjusts its behavior based on what it’s seen so far. That’s learning, just at a faster timescale and without weight updates. The line between “context” and “training” is blurrier than people realize. If you add memory, reinforcement, or continual fine-tuning, it starts building continuity across sessions. Biologically speaking, that’s the same idea as working memory feeding into long-term storage. The principle is identical even if the substrate differs. The fact that an LLM can change its behavior based on context already puts it in the domain of adaptive systems.
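
For what it’s worth, here is a minimal sketch of what that “memory plus context” continuity could look like, with placeholder functions standing in for actual model calls:

    # Toy sketch of continuity across sessions via an external memory store.
    # Hypothetical names; no real agent framework is implied.
    long_term_memory = []                       # persists across sessions

    def run_session(user_turns, memory):
        context = list(memory)                  # recalled memories seed the context
        context.extend(user_turns)              # in-context "working memory"
        summary = f"summary of {len(user_turns)} turns"   # stand-in for a model call
        memory.append(summary)                  # consolidation into long-term store
        return context

    run_session(["fix the bug", "now add tests"], long_term_memory)
    run_session(["same project as yesterday"], long_term_memory)
    print(long_term_memory)                     # state carried between sessions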

>There is no evidence to support that a transformer model’s inefficiency is hardware based.

That’s just not true. The energy gap is almost entirely about hardware architecture. A synapse stores and processes information in the same place. A GPU separates those two functions into memory, cache, and compute units, and then burns enormous energy moving data back and forth. The transformer math itself isn’t inherently inefficient; it’s the silicon implementation that’s clumsy. If you built an equivalent network on neuromorphic or memristive hardware, the efficiency difference would shrink by several orders of magnitude. Biology is proof that computation can be compact, low energy, and massively parallel. That’s a materials problem, not a theory problem.
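
To put rough numbers on the data-movement point (order-of-magnitude figures often cited from Horowitz’s 2014 ISSCC energy survey, 45 nm process; exact values vary by node, the ratio is what matters):

    # Moving a value from DRAM costs far more energy than computing with it.
    fp32_multiply_pj = 3.7       # ~energy of one 32-bit float multiply, picojoules
    dram_read_32b_pj = 640.0     # ~energy of fetching one 32-bit word from DRAM

    ratio = dram_read_32b_pj / fp32_multiply_pj
    print(f"one DRAM fetch ~ {ratio:.0f}x the multiply it feeds")   # ~170x

    # If every weight streams in from off-chip memory for every token, the energy
    # bill is dominated by data movement, not by the transformer math itself.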

>In most every other aspect of intelligence, notably continuous learning driven by intrinsic goals, LLMs fail.

They don’t “fail.” They’re simply different. LLMs are already rewriting how work gets done across entire industries. Doctors use them to summarize and interpret medical data. Programmers rely on them to generate and review code. Writers, lawyers, and analysts use them daily. If this were a dead end, it wouldn’t be replacing human labor at this scale. Are they perfect? No. But the direction of progress is unmistakable. Each new model closes the reliability gap while expanding capability. If you’re a software engineer and not using AI, you’re already behind, because the productivity multiplier is real.

What we’re seeing isn’t a dead end in intelligence. It’s the first time we’ve built a system that learns, generalizes, and communicates at human scale. That’s not failure; that’s the beginning of something we still don’t fully understand.


>> The agent has no intrinsic goals nor ability to develop them.

> That’s not accurate. Context itself is a form of learning. Every time an LLM runs, it integrates information, updates its internal state, and adjusts its behavior based on what it’s seen so far. That’s learning,

It may be learning, but it's still not an intrinsic goal, nor is it driven by an intrinsic goal.

> LLMs are already rewriting how work gets done across entire industries. Doctors use them to summarize and interpret medical data. Programmers rely on them to generate and review code. Writers, lawyers, and analysts use them daily. If this were a dead end, it wouldn’t be replacing human labor at this scale. Are they perfect?

Nowhere did I say that they aren't useful or disruptive to labor markets, just that they aren't intelligent in the way we are.


>It may be learning, but it’s still not an intrinsic goal, nor is it driven by an intrinsic goal.

That depends on what we mean by “intrinsic.” In biology, goals are not mystical. They emerge from feedback systems that evolved to keep the organism alive. Hunger, curiosity, and reproduction are reinforcement loops encoded in chemistry. They feel intrinsic only because they are built into the substrate.

Seen that way, “intrinsic” is really about where the feedback loop closes. In humans, it closes through sensory input and neurochemistry. In artificial systems, it can close through memory, feedback, and reinforcement mechanisms. The system does not need to feel the goal for it to exist. It only needs to consistently pursue objectives based on input, context, and outcome. That is already happening in systems that learn from memory and update behavior over time. The process is different in form, but not in structure.
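
A deliberately tiny sketch of that structural point, with made-up names: nothing here “feels” the goal, but the loop closes through stored outcomes and the system ends up consistently pursuing an objective.

    # Toy feedback loop that closes through memory rather than neurochemistry.
    import random

    preferences = {"tool_a": 0.0, "tool_b": 0.0}      # persistent memory

    def choose():
        if random.random() < 0.2:                     # occasional exploration
            return random.choice(list(preferences))
        return max(preferences, key=preferences.get)  # otherwise exploit memory

    def outcome(action):
        return 1.0 if action == "tool_b" else 0.0     # environment feedback

    for _ in range(100):
        action = choose()
        preferences[action] += 0.1 * (outcome(action) - preferences[action])

    print(preferences)   # behavior now reliably favors tool_b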

>Nowhere did I say that they aren’t useful or disruptive to labor markets, just that they aren’t intelligent in the way we are.

You are getting a bit off track here. Those examples were not about labor markets; they were about your earlier claim that “LLMs fail.” They clearly don’t. When models are diagnosing medical cases, writing production code, and reasoning across multiple domains, that is not failure. That is a demonstration of capability expanding in real time.

Your claim only holds if the status quo stays frozen. But it isn’t. The trendlines are moving fast, and every new model expands the range of what these systems can do with less supervision and more coherence. Intelligence is not a static definition tied to biology; it is a functional property of systems that can learn, adapt, and generalize. Whether that happens in neurons or silicon does not matter.

What we are witnessing is not imitation but convergence. Each generation of models moves closer to human-level reasoning not because they copy our brains, but because intelligence itself follows universal laws of feedback and optimization. Biology discovered one route. We discovered another. The trajectory is what matters, and the direction is unmistakable.



