That's because AI allows poor programmers to appear to be good programmers, which is actually a good thing, as otherwise they'd be writing crap you'd have to code-review. But their understanding of what good code is is poor, so you're back to having to vet it all anyway. At least you can use AI for that. Except you can't, without vetting that too.
I literally just today watched my entire team descend into "Release Hell" when an obscure bug in business logic already shipped to thousands of customers surfaced right as we were about to cut a release. Obscure bug, huge impact on customers: it ended up charging people more than it should have. The team members, and yes, not the leads, used AI to write that bug and then tried to prompt their way out of it. It turned into a giant game of whack-a-mole as errors got introduced into other business logic, which thankfully got caught by tests. Then it was discovered that they never understood the code; they could only maintain it with prompts.
Let that sink in. They don't understand what they're doing; they just massage the spec into prompts, and when it appears to work and passes tests they call it good.
We looked at the prompts. They were insane. They just kept adding more specification to the end, but if you read through it all, it contained contradictory logic, which I would have hoped the AI would have pointed out, but nope. It was actually easier for me and another senior to rewrite the logic as pseudo-code, cut the size down by literally 3/4, and eventually get it all working as expected.
So that's the future, girls and boys. People putting together code they don't understand with AI, code they can only maintain with AI, and then can't fix with AI, because they can't prompt precisely enough, because English sucks at being precise.