World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.
It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill for working with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.
It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.
> I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea
This sentence and the rest of the post read like horoscope advice. Like "It can be good if you use it well, it may be bad if you don't." It's pretty much the same as saying a coin may land on heads or tails.
They don’t. I’ve gone from rickety, slow Excel sheets and maybe some small Python functions to automate things I could figure out myself, to building out entire data pipelines. It’s incredible how much more efficient we’ve gotten.
> Including how it looks at the surrounding code and patterns.
Citation needed. Even with specific examples, “follow the patterns from the existing tests”, etc., Copilot (GPT-5) still insists on generating tests using the wrong methods (“describe” and “it” in a codebase that uses “suite” and “test”; see the sketch below).
An intern, even an intern with a severe cognitive disability, would not be so bad at pattern following.
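For reference, the mismatch being described looks roughly like this. This is a hypothetical sketch: the comment doesn't name the framework, so it assumes node:test (where suite/test and describe/it exist side by side), and parseConfig is a made-up function.

```typescript
// Hypothetical sketch of the style mismatch (assumes node:test; parseConfig is made up).
import { suite, test, describe, it } from 'node:test';
import assert from 'node:assert/strict';

const parseConfig = (s: string): Record<string, unknown> =>
  s ? JSON.parse(s) : {};

// The convention already in the codebase: suite()/test()
suite('parseConfig (existing convention)', () => {
  test('returns an empty object for empty input', () => {
    assert.deepEqual(parseConfig(''), {});
  });
});

// What the model keeps generating instead: describe()/it()
describe('parseConfig (model output style)', () => {
  it('returns an empty object for empty input', () => {
    assert.deepEqual(parseConfig(''), {});
  });
});
```

In node:test the two spellings happen to be aliases, so both would run; in a Mocha project configured for the TDD interface, only the suite/test form is defined, so the generated describe/it tests would break the convention at best and throw at worst.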
Every time a new model or tool comes out, the AI boosters love to say that n-1 was garbage and finally AI vibecoding is the real deal and it will make you 10x more productive.
Except six months ago n-1 was n, and the boosters were busy ruining their credibility by saying that their garbage-tier AI was world class and making them 10x more productive.
Today’s leading world-class agentic model is tomorrow’s horrible garbage-tier slop generator that was patently never good enough to be taken seriously.
This has been going on for years, the pattern is obvious and undeniable.
I can obviously only speak for myself, but I've tried AI coding tools from time to time, and with Opus 4.5 I have, for the first time, the impression that it is genuinely helpful for a variety of tasks. I've never previously claimed that I find them useful. And 10x more productive? Certainly not: even if it improved development speed 10,000x, I wouldn't be 10x more productive overall, since not even half of my time goes into development in the first place.
Finance people are funny. Their logic and references are so wrong when you hear them, but I've also realized it doesn't matter. It's trends they try to predict, fuzzy directional signals, not facts of the moment.
Of course not, why would they? They understand making money, and what makes money right now? What would be antithetical to making money? Why might we be doing one thing and not another? The lines are bright and red and flashing.
> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.
Wow, finance people certainly don't understand programming.