I wouldn't take it for granted that Claude isn't re-reading your entire context each time it runs.
When you run llama.cpp on your home computer, it holds onto the key-value cache from earlier requests in memory for as long as the process lives. Presumably Claude does something analogous, though at a much larger scale. Maybe Claude holds onto that key-value cache indefinitely, but my naive expectation is that it only keeps it for however long it expects you to keep the context going. If you walk away from your computer and resume the context the next day, I'd expect Claude to re-read your entire context all over again.
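Roughly, the trick is prefix caching: if a new request starts with exactly the same tokens as an earlier one, the server can reuse the stored KV state and skip the prefill for that prefix. Here's a conceptual sketch in Python, emphatically not how Anthropic actually does it; `PrefixKVCache` and `ttl_seconds` are names I made up, and the 300-second default is a guess:

```python
# Conceptual sketch only: a server-side KV cache keyed by a hash of the
# token prefix, with a TTL so idle entries expire and force a full re-read.
import hashlib
import time

class PrefixKVCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.entries = {}  # prefix hash -> (last_used, opaque KV state)

    def _key(self, tokens):
        return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()

    def get(self, tokens):
        k = self._key(tokens)
        entry = self.entries.get(k)
        if entry is None:
            return None  # miss: full prefill, i.e. "re-read" the whole context
        last_used, state = entry
        if time.time() - last_used > self.ttl:
            del self.entries[k]
            return None  # expired: you walked away too long, full re-read again
        self.entries[k] = (time.time(), state)  # refresh TTL on a hit
        return state  # hit: skip prefill for this prefix

    def put(self, tokens, state):
        self.entries[self._key(tokens)] = (time.time(), state)
```

If the entry has expired, or the prefix differs by even one token, you pay for a full prefill from the first point of divergence onward.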
At best, you're getting some performance benefit from keeping this context going, but you're still subjecting yourself to context rot.
Someone familiar with running Claude or industrial-strength SOTA models might have more insight.
CC absolutely does not re-read the context on each run. For example, if you ask it to do something and then revert its changes yourself, it will still believe the changes are in place, leading to bad times.
It wouldn't re-read the context; it caches the tokens processed so far, which is like photographically remembering the context instead of re-reading it. That lasts until you see it "compact" the context, at which point it gives itself a prompt to recap the conversation so far.
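For what it's worth, Anthropic's public API exposes this caching explicitly: you mark a stable prefix as cacheable, and idle cache entries expire after a few minutes, after which the whole prefix gets prefilled from scratch. A minimal sketch with the Anthropic Python SDK; the model ID and prompt contents are just placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is in the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model ID
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "<big, stable stuff: instructions, codebase context, ...>",
            "cache_control": {"type": "ephemeral"},  # cache everything up to here
        }
    ],
    messages=[{"role": "user", "content": "Now answer against that context."}],
)
print(response.content[0].text)
```

Caching is prefix-based, so everything up to the marked block has to be byte-identical between calls, which is exactly why a compaction step (which rewrites the context) invalidates the cache.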
You can tell it that you manually reverted the changes.
That said, the fact that we're all curating these random bits of "LLM whisperer" lore is... concerning. The product is at the same time amazingly good and terribly bad.