My claim is more that data processing alone is not enough. I was too vague and definitely didn't convey my point accurately. I tried to clarify a bit in a sibling comment to yours, but I'm still unsure it's sufficient tbh.
For embodiment, I think it is sufficient but not necessary. A key part of the limitation is that the agent cannot interact with its environment, and that ability is necessary for distinguishing competing explanations. I believe we're actually in agreement here, but I do think we need to be careful about how we define embodiment, because even a toaster can be considered a robot. It's hard to determine what does not qualify as a body once we get into the nitty-gritty. But in general, when people talk about embodiment, they're discussing the capability of being interventional.
By your elaboration I believe we agree, since part of what I believe to be necessary is the ability to self-analyze (meta-cognition) to identify low-density regions of its model, and then the ability to seek out and rectify those gaps (intervention). Data processing is sufficient for neither of those conditions.
Your prompt is, imo, more about world modeling, though I do think this is related. I asked Claude Sonnet 4.5 with extended thinking enabled and it also placed itself outside the room. Opus 4.1 (again with extended thinking) got the answer right. (I don't use a standard system prompt, though mine mostly exists to make the model less sycophantic, get it to ask questions when uncertain, and enforce step-by-step thinking.)
From the perspective of the person in the room, your right arm would be on their right side as you walk out.
Here's why: When you initially walk into the room facing the person, your right arm appears on their left side (since you're facing each other). But when you turn around 180 degrees to walk back out, your back is now toward them. Your right arm stays on your right side, but from their perspective it has shifted to their right side.
Think of it this way - when two people face each other, their right sides are on opposite sides. But when one person turns their back, both people's right sides are now on the same side.
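The left/right reasoning in that answer is easy to verify mechanically. Here's a quick sanity-check sketch (my own, not from any model output), assuming a 2D top-down view where "right" is the facing direction rotated 90° clockwise:

```python
def right_of(facing):
    """Right-hand direction for a given facing vector (x, y),
    i.e. the facing vector rotated 90 degrees clockwise."""
    x, y = facing
    return (y, -x)

observer = (0, 1)     # person in the room, facing the door (+y)
walker_in = (0, -1)   # walking in: facing the observer (-y)
walker_out = (0, 1)   # after turning 180 degrees: facing the door (+y)

# Facing each other: the walker's right points to the observer's left.
assert right_of(walker_in) == (-1, 0)
assert right_of(observer) == (1, 0)

# Same orientation: both rights point the same way,
# so the walker's right arm is on the observer's right side.
assert right_of(walker_out) == right_of(observer)
```

Trivial, but it makes explicit the rule the models keep fumbling: facing each other flips left/right, facing the same way doesn't.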
The CoT output is a bit more interesting[0]. Disabling my system prompt gives an almost identical answer fwiw. But Sonnet got it right. I repeated the test in incognito after deleting the previous prompts and it continued to get it right, independent of my system prompt or extended thinking.
I don't think this proves a world model, though. Misses matter more than hits, just as counterexamples matter more than examples in any evidentiary or proof setting. But fwiw, I also frequently ask these models variations on river-crossing problems and the results are very shabby. A few appear spoiled now, but they're not robust to variation, and that, I think, is critical.
I think an interesting variation of your puzzle is as follows:
Imagine you walked into a room through a doorway. Then you immediately turn around and walk back out of the room.
From the perspective of a person in the room, facing the door, which side would your right arm be? Please explain.
I think Claude (Sonnet) shows some subtle but important results in how it answers:
Your right arm would be on their right side.
When you turn around to walk back out, you're facing the same direction as the person in the room (both facing the door). Since you're both oriented the same way, your right side and their right side are on the same side.
This makes me suspect there's some overfitting. The CoT correctly uses "I"[1]. It definitely isn't robust to red herrings[2], and I think that's the kicker here. This is similar to the failure modes I see across these puzzles: they're quite easy to break with small variations. And we do need to remember that these models are trained on the entire internet (including HN comments), so we can't presume this is a unique puzzle.
[0] http://0x0.st/K158.txt
[1] http://0x0.st/K15T.txt
[2] http://0x0.st/K15m.txt