I think it's hard because it's quite artistic and individualistic, as silly as that may sound.

I've built "large projects" with AI, meaning 10k-30k lines of algorithmic code and 50k-100k+ lines of UI/interface code.

I've found a few things to be true (that aren't true for everyone).

1. The choice of model (with its strengths and weaknesses) and of OS dramatically affects how you must approach problems.

2. Being a skilled programmer/engineer yourself will allow you to slice things along areas of responsibility, domains, or other directions that make sense (for code size, context preservation, and being able to wrap your head around it).

3. For anything where you have a doubt, ask three or more models -- have each write its findings down in its own file -- and then have three models review the findings against the code. More often than not, you march toward consensus and a good solution.
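
The fan-out-and-review loop above can be sketched as a small script. This is a hedged illustration: `gather_findings` and `consensus` are hypothetical names, and the `ask` callables are stand-ins for whatever real model clients (OpenAI, Anthropic, Gemini, ...) you actually use.

```python
# Sketch of point 3: fan one question out to several models, save each
# answer to its own file, then check whether a quorum of them agree.
from collections import Counter
from pathlib import Path

def gather_findings(question, models, outdir="findings"):
    """Ask each model; write its answer to <outdir>/<name>.md; return answers."""
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    answers = {}
    for name, ask in models.items():      # `ask` is a stand-in for a real client
        answer = ask(question)
        (out / f"{name}.md").write_text(answer)
        answers[name] = answer
    return answers

def consensus(answers, threshold=2):
    """Return the most common answer if at least `threshold` models agree."""
    top, count = Counter(answers.values()).most_common(1)[0]
    return top if count >= threshold else None

# Stub models standing in for real API calls:
models = {
    "gpt": lambda q: "use a bounded queue",
    "claude": lambda q: "use a bounded queue",
    "gemini": lambda q: "rewrite in Rust",
}
answers = gather_findings("How should we smooth bursty writes?", models)
print(consensus(answers))  # -> use a bounded queue
```

In practice the "review" pass is a second round: feed each model the three findings files plus the code and ask it to adjudicate.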

4. GPT-5-Codex via the OpenAI Codex CLI on Linux/WSL was, for me, the most capable model for coding, while Claude is the most capable for quick fixes and UI.

5. Tooling and ways to measure "success" are imperative. If you can't define the task in a way that makes success easy to verify, neither a human nor an AI will complete it satisfactorily. You'll find that most engineering tasks are laid out in a very "hand-wavy" way -- particularly UI tasks. Either lay the task out cleanly or expect to iterate.
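
One concrete way to de-hand-wave a task is to state the acceptance criteria as assertions rather than prose. The example below is illustrative only; `page` is a hypothetical function under development, not anything from the original post.

```python
# Sketch of point 5: "make pagination nicer" is hand-wavy; "page() returns
# at most `size` items and covers every item exactly once" is a definition
# of success an agent (or a human) can actually be tested against.

def page(items, size, number):
    """Return page `number` (0-based) of `items`, `size` items per page."""
    start = number * size
    return items[start:start + size]

# Acceptance criteria, stated as checks rather than prose:
data = list(range(10))
assert all(len(page(data, 3, n)) <= 3 for n in range(4))       # never overfull
assert sum((page(data, 3, n) for n in range(4)), []) == data   # covers everything once
print("acceptance criteria pass")
```

Hand the agent the assertions along with the request and "done" stops being a matter of opinion.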

6. AI does not understand the physical/visual world. It will fail hard on things that rely on implied real-world understanding. For instance, it will not automatically intuit the implications of 50 parallel threads trying to read from an SSD -- unless you guide it. Ditto for many other optimizations and usage patterns where code meets the real world. These will often be unique and interesting bugs or performance areas that a good engineer would spot outright.
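
The SSD example can be made concrete: 50 threads issuing reads at once can thrash the device, and one guided fix is to cap in-flight reads with a semaphore while keeping the wide thread pool. This is a hedged sketch; the cap of 4 and the throwaway files are illustrative assumptions, not tuned values.

```python
# Sketch of point 6: 50 workers may exist, but a semaphore ensures only a
# few of them touch the disk at any one time.
import tempfile
import threading
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

MAX_INFLIGHT_READS = 4                    # illustrative cap, not a tuned value
_disk = threading.Semaphore(MAX_INFLIGHT_READS)

def read_file(path):
    with _disk:                           # at most 4 concurrent disk reads
        return Path(path).read_bytes()

# Demo on throwaway files; real use would point at the actual dataset.
tmp = Path(tempfile.mkdtemp())
paths = []
for i in range(50):
    p = tmp / f"chunk_{i}.bin"
    p.write_bytes(bytes([i]) * 1024)
    paths.append(p)

with ThreadPoolExecutor(max_workers=50) as pool:   # 50 workers, 4 on disk
    blobs = list(pool.map(read_file, paths))

print(len(blobs), len(blobs[0]))  # -> 50 1024
```

The point is not this particular pattern but that the model won't reach for it unless you name the physical constraint.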

7. It's useful to have non-agentic tools that can perform massive codebase analysis for tough problems. Even at 400k tokens of context, a large codebase can quickly become unwieldy. I have built custom Python tools (pretty easy) to do things like "get all files of a type recursively and generate a context document to submit with my query". You then query GPT-5-high, Claude Opus, and Gemini 2.5 Pro and cross-check.
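
A minimal version of that kind of tool fits in a dozen lines. This is my own sketch, not the author's actual script; the function name, header format, and defaults are all assumptions.

```python
# Sketch of point 7: recursively collect every file of one type and
# concatenate them into a single context document to attach to a query.
from pathlib import Path

def build_context(root, suffix=".py", out="context.md"):
    """Gather every `suffix` file under `root` into one annotated document."""
    parts = []
    for path in sorted(Path(root).rglob(f"*{suffix}")):
        parts.append(f"### {path}\n```\n{path.read_text()}\n```\n")
    doc = "\n".join(parts)
    Path(out).write_text(doc)
    return doc
```

You would then paste or attach the resulting document when querying each of the non-agentic models and cross-check their answers.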

8. Make judicious use of Git. The particular pattern doesn't matter; just have one. Mine is to commit after every working agentic run (say, a feature). If a run is a failure and takes more than a few turns to get working, I scrap the whole thing and reassess my query or how I might approach or break down the task.

9. It's up to you to guide the agent toward the most thoughtful approaches -- this is the human aspect. If you're using Cloud Provider X and they provide cheap queues, it's on you to guide your agent to use queues for the solution rather than, say, a SQL db -- and it's on you to understand the tradeoffs. AI may help explain them, but it will never truly understand your business case or your requirements for reliability, redundancy, etc. Perhaps you can craft queries for this, but this is an area where AI meets the real world, and those tend to fail.

One more thing I'd add: make an attempt to fix bugs in your 'new' codebase yourself on occasion. You'll gain an understanding of how things work and of how maintainable the code truly is. You'll also keep your own troubleshooting skills from atrophying.


