> The entire premise of AI coding tools is that they automate the thinking, not just the typing. You're supposed to be able to describe a problem and get a solution without understanding the details.
This isn't accurate.
> So I feel pressure to always, always, start by info dumping the problem description to AI and gamble for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try, if not hope I can iterate until something works.
These things have planning modes - you can iterate on a plan all you want, make changes when ready, make changes one at a time etc. I don't know if the "pressure" is your own psychological block or you just haven't considered that you can use these tools differently.
Whether it feels satisfying or not - that's a personal thing, some people will like it, some won't. But what you're describing is just not using your tools correctly.
I think you're misunderstanding my point. I'm not saying I don't know how to use planning modes or iterate on solutions.
Yes, you still decompose problems. But what's the decomposition for? To create sub-problems small enough that the AI can solve them in one shot. That's literally what planning mode does - it helps you break things down into AI-solvable chunks.
You might say "that's not real thinking, that's just implementation details." But look who came up with the plan in the first place: it's the AI! Plan mode is partial automation of the thinking there too (and it's improving every month).
When Claude Code debugs something, it's automating a chain of reasoning: "This error message means execution reached this file. That implies this variable has this value. I can test this theory by sending this HTTP request. The logs show X, so my theory was wrong. Let me try Y instead."
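That chain of reasoning is basically a hypothesis-test loop. A toy sketch of the idea (the hypotheses, the log evidence, and the check functions here are all invented for illustration - this is not Claude Code's actual algorithm):

```python
# Toy hypothesis-test debugging loop: propose a theory, check it
# against the evidence, discard it if contradicted, move on.

def observed_log():
    # Stand-in for real evidence (logs, an HTTP response, etc.).
    return "X"

# Each hypothesis pairs a theory with a check that the evidence
# would satisfy if the theory were true. All invented for this sketch.
hypotheses = [
    ("theory: the variable holds a bad value", lambda log: log == "Y"),
    ("theory: the request hit the wrong handler", lambda log: log == "X"),
]

def debug(hypotheses):
    evidence = observed_log()
    for theory, check in hypotheses:
        if check(evidence):
            return theory   # evidence supports this theory
        # evidence contradicts it: reject and try the next one
    return None             # all theories rejected, gather more evidence

print(debug(hypotheses))
```

Here the first theory is rejected because the logs show X, not Y, so the loop falls through to the second - the same "theory was wrong, try Y instead" step described above.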
> When I stop the production line to say "wait, let me understand what's happening here," the implicit response is: "Why are you holding up progress? It mostly works. Just ship it. It's not your code anyway."
This is not a technical problem or an AI problem, it's a cultural problem at your workplace.
We have the opposite - I expect all of our devs to understand and be responsible for AI generated code
Getting it done correctly: that's what it's always been about.
I don't feel that code I write without assistance is mine, or some kind of achievement to be proud of, or something that inflates my own sense of how smart I am. So when some of the process is replaced by AI, there isn't anything in me that can be hurt by that, none of this is mine and it never was.