Hacker News

> I wasted several hours on occasions where Claude would make changes to completely unrelated parts of the application instead of addressing my actual request.

Every time I read about people using AI I come away with one question. What if they spent hours with a pen and paper and brainstormed about their idea, and then turned it into an actual plan, and then did the plan? At the very least you wouldn't waste hours of your life and instead enjoy using your own powers of thought.



> What if they spent hours with a pen and paper and brainstormed about their idea, and then turned it into an actual plan, and then did the plan? At the very least you wouldn't waste hours of your life and instead enjoy using your own powers of thought.

OP here - I am a bit confused by this response. What are you trying to say or suggest here?

It's not like I didn't have a plan when making changes; I did, and when things went wrong, I tried to debug.

That said, if what you mean by having a plan (which again, I might not be understanding!) is write myself a product spec and then go build the site by learning to code or using a no/low code tool, I think that would have been arguably far less efficient and achieved a less ideal outcome.

In this case, I had Figma designs (from our product designer) that I wanted to implement, but I don't have the programming experience or knowledge of Remix as a framework to have been able to "just do it" on my own in a reasonable amount of time without pairing with Claude.

So while I had some frustrating hours of debugging, I still think overall I achieved an outcome (being able to build a site based on a detailed Figma design by pairing with Claude) that I would never have been able to achieve otherwise to that quality bar in that little amount of time.


Good point and it really makes you concerned for the branches your brain will go down when confronted with a problem.

I find my first branch more and more being `ask claude`. Having to actually think up organic solutions feels more and more annoying.


I had not thought of visualizing my mental debugging process as a decision _tree_, or of LLMs (and talking to other humans) as analogous to a foreign graft. Interesting, thanks!


My assumption is that I’ll be using AI tools every day for the rest of my life.

I’d rather put hours in figuring out what works and what doesn’t to get more value out of my future use.


Never have FOMO when it comes to AI. When it's good enough to be a competitive advantage, everyone will catch up with you in weeks, if not days. All you are doing is learning to deal with the very flaws that have to be fixed for it to be worth anything.

Embrace that you aren't learning anything useful. Everything you are learning will be redundant in a year's time. Advice on how to make AI effective from a year ago is gibberish today. Today you've got special keywords like ultrathink, or advice on when to compact context, that will be gibberish in a year.

Use it, enjoy experimenting and seeing the potential! But no FOMO! There's a point when you need to realize it's not good enough yet, use the few useful bits, put the rest down, and get on with real work again.


I’m not sure if you meant to reply to my comment or someone else’s?

Why would I have FOMO? I am literally not missing out.

> All you are doing is learning to deal with the very flaws that have to be fixed for it to be worth anything.

No it is already worth something.

> Embrace that you aren't learning anything useful

No, I am learning useful things.

> There's a point when you need to realize it's not good enough yet

No, it’s good enough already.

Interesting perspective I guess.


> No, it's good enough already.

If it takes you hours to figure out what's working and what's not, then it isn't good enough. It should just work or it should be obvious when it won't work.


I mean, that’s like saying normal coding, or working on any project yourself, isn’t good enough because you put in hours figuring out what works and what doesn’t.

It’s just that you don’t like AI lol.


That analogy is off, because LLMs aren't a project I'm working on. They are a tool I can use to do that. And my expectation on tools is that they help me and not make things more complicated than they already are.

When LLMs ever reach that point I'll certainly hear about it and gladly use them. In the meantime I let the enthusiasts sort out the problems and glitches first.


No, the analogy is good. I’m not just opening up ChatGPT and slapping at the keyboard, there are projects I’m working on.

> And my expectation on tools is that they help me

LLMs do this for me. You just don’t seem to get the same benefit that I do.

> and not make things more complicated than they already are.

LLMs do not do this for me. Things are already complicated. Just because they’re still complicated with LLMs does not mean LLMs are bad.

> When LLMs ever reach that point I'll certainly hear about it and gladly use them

You are hearing about it now. You’re just not listening because you don’t like LLMs.


like, just use it for code that satisfies defined constraints and it kicks ass.
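One concrete reading of "defined constraints" is test-first prompting: you write the tests yourself, then let the model produce an implementation that must pass them. A minimal sketch of that workflow; the `slugify` function and its behavior here are hypothetical, with the implementation standing in for what you'd delegate to the model:

```python
import re

def slugify(title: str) -> str:
    # Candidate implementation: the part an LLM would generate.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumeric runs
    return slug.strip("-")

# Human-written constraints: if any fail, reject the generation and re-prompt.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Already--slugged  ") == "already-slugged"
assert slugify("2024 Roadmap") == "2024-roadmap"
```

Because the constraints are fixed before generation, a wrong answer fails loudly instead of slipping through review.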



