Hacker News

I agree. I see the same even in simple code, where they bend over backwards apologizing and then generate very similar crap.

It is like they are sometimes stuck in a local energy minimum and just wobble around various similar (and incorrect) answers.

What was annoying in my attempt above is that the resulting picture was identical every time.



These tools' "attitude" reminds me of an eager but incompetent intern, or a poorly trained administrative assistant who works for a powerful CEO: all sycophancy, confidence, and positive energy, but not really getting much done.


The issue is that they always say "Here's the final, correct answer" before they've written the answer, so of course the LLM has no idea whether it's going to be right before it starts, because it has no clue what it's going to say.

I wonder how it would do if instead it were told "Do not tell me at the start that the solution is going to be correct. Instead, tell me the solution, and at the end tell me if you think it's correct or not."
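The reordering suggested above is easy to express as a prompt template. A minimal sketch (the instruction wording and the helper name are illustrative, not from any real API):

```python
# Hypothetical prompt template: put the solution first, and move the
# self-assessment to the end instead of a confident up-front claim.

SELF_ASSESS_SUFFIX = (
    "Do not tell me at the start that the solution is going to be correct. "
    "Instead, give the solution first, and only at the end state whether "
    "you think it is correct or not."
)

def build_prompt(question: str) -> str:
    """Append the answer-first, assess-last instruction to a question."""
    return f"{question}\n\n{SELF_ASSESS_SUFFIX}"

prompt = build_prompt("Solve this logic puzzle: ...")
```

Whether this actually improves accuracy is an open question; it only changes where in the generation the model commits to a correctness claim.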

I have found that on certain logic puzzles it simply cannot get right, it always tells me it's going to get it right "this last time," but if asked later it always recognizes its errors.




