> I wasted several hours on occasions where Claude would make changes to completely unrelated parts of the application instead of addressing my actual request.
Every time I read about people using AI, I come away with one question: what if they spent hours with pen and paper brainstorming their idea, then turned it into an actual plan, and then executed the plan? At the very least you wouldn't waste hours of your life, and you'd get to enjoy using your own powers of thought.
> What if they spent hours with pen and paper brainstorming their idea, then turned it into an actual plan, and then executed the plan? At the very least you wouldn't waste hours of your life, and you'd get to enjoy using your own powers of thought.
OP here - I am a bit confused by this response. What are you trying to say or suggest here?
It's not like I didn't have a plan when making changes; I did, and when things went wrong, I tried to debug.
That said, if what you mean by having a plan (which again, I might not be understanding!) is to write myself a product spec and then go build the site by learning to code or using a no/low-code tool, I think that would arguably have been far less efficient and produced a worse outcome.
In this case, I had Figma designs (from our product designer) that I wanted to implement, but I don't have the programming experience or the knowledge of Remix as a framework to "just do it" on my own in a reasonable amount of time without pairing with Claude.
So while I had some frustrating hours of debugging, I still think that overall I achieved an outcome (building a site from a detailed Figma design by pairing with Claude) that I could never have reached otherwise at that quality bar in so little time.
I had not thought of visualizing my mental debugging process as a decision _tree_, or of LLMs (and talking to other humans) as analogous to a foreign graft. Interesting, thanks!
Never have FOMO when it comes to AI. When it's good enough to be a competitive advantage, everyone will catch up with you in weeks, if not days. All you are doing is learning to deal with the very flaws that have to be fixed for it to be worth anything.
Embrace that you aren't learning anything useful. Everything you are learning will be obsolete in a year's time. Advice on how to make AI effective from a year ago is gibberish today. Today you've got special keywords like "ultrathink" and advice on when to compact context that will be gibberish in a year.
Use it, enjoy experimenting and seeing the potential! But no FOMO! There's a point when you need to realize it's not good enough yet, use the few useful bits, put the rest down, and get on with real work again.
If it takes you hours to figure out what's working and what's not, then it isn't good enough. It should just work or it should be obvious when it won't work.
I mean, that's like saying doing normal coding, or working on any project yourself, isn't good enough either, because you put in hours figuring out what works and what doesn't.
That analogy is off, because LLMs aren't a project I'm working on; they're a tool I use to work on projects. And my expectation of tools is that they help me rather than make things more complicated than they already are.
When LLMs ever reach that point I'll certainly hear about it and gladly use them. In the meantime I let the enthusiasts sort out the problems and glitches first.