10 YOE, currently leading a team of 6. This has been bothering me for a few months and I don't have a good answer.
Two of my junior devs started using AI coding assistants heavily this year. Their output looks great. PRs are clean, tests pass, code compiles. On paper they look like they leveled up overnight.
But when I ask them questions during review, I can tell they don't fully understand what they wrote. Last week one of them couldn't explain why he used a particular data structure. He just said "that's what it suggested." The code worked fine but something about that interaction made me uncomfortable.
I've been reading about where the industry is going with this stuff. Came across the Open Source LLM Landscape 2.0 report from Ant Open Source and their whole thesis is that AI coding is exploding because code has "verifiable outputs." It compiles or it doesn't. Tests pass or fail. That's why it's growing faster than agent frameworks and other AI stuff.
But here's my problem. A clean compile and a green test suite don't mean someone understood what they built. They don't mean that person can debug it at 2am when something breaks in production, or that they'll make good design decisions on the next project.
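To make it concrete, here's a made-up Python sketch (hypothetical, not from any of their actual PRs) of the kind of thing I mean. It compiles, the test is green, and the data structure choice is still wrong:

# Hypothetical example: a dedupe helper that "works"
# and passes its unit test.
def dedupe(events):
    seen = []                    # list membership check is O(n)
    out = []
    for e in events:
        if e not in seen:        # linear scan per event -> O(n^2) overall
            seen.append(e)
            out.append(e)
    return out

def test_dedupe():
    assert dedupe([1, 2, 2, 3]) == [1, 2, 3]   # green at n=4

# Same behavior, right structure: a set makes membership O(1).
def dedupe_fast(events):
    seen = set()
    out = []
    for e in events:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

Both versions pass in CI. Only one survives a backfill with a few million events, and the gap between them is exactly the "why this data structure" question that review is supposed to catch.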
I feel like I'm evaluating theater now. The artifacts look senior but the understanding is still junior. And I don't know how to write that in a performance review without sounding like a dinosaur who hates AI.
Promoted one of these guys to mid-level last quarter. Starting to wonder if that was a mistake.
rant
ai