Is it just me, or does everything in this article (both pros and cons) apply equally well to traditional static analysis tools? It's striking that adding "AI" doesn't seem to reduce the false alert rate or provide anything particularly smart that a linter couldn't do (their big example is recognizing deprecated methods, which doesn't seem like it needs an LLM to me).
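To make that concrete: a minimal sketch (my own, not from the article) of how a plain AST walk can flag deprecated calls with no model involved. The `DEPRECATED` list and checker names here are hypothetical, purely for illustration:

```python
import ast

# Hypothetical deprecation list for illustration only.
DEPRECATED = {"os.tempnam", "imp.load_module"}

class DeprecationChecker(ast.NodeVisitor):
    """Flags calls whose dotted name appears on a deprecation list."""

    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Reconstruct dotted names like "os.tempnam" from the call target.
        name = self._dotted_name(node.func)
        if name in DEPRECATED:
            self.findings.append((node.lineno, name))
        self.generic_visit(node)

    def _dotted_name(self, node):
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Attribute):
            base = self._dotted_name(node.value)
            return f"{base}.{node.attr}" if base else None
        return None

source = "import os\nos.tempnam()\n"
checker = DeprecationChecker()
checker.visit(ast.parse(source))
print(checker.findings)  # [(2, 'os.tempnam')]
```

Nothing here requires understanding the code's intent; it's a table lookup over the syntax tree, which is exactly what linters already do.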
That's true - but the important contrast is that no one is claiming static analysis tools are sufficient on their own for code review. Despite the inflammatory headline, I read this article (particularly the final section, "Conclusion: People Still Matter") as saying "AI is one useful tool in your arsenal to _improve_ code review, but don't for God's sake rely on it solely or blindly" - an attempt to temper some of the dangerous enthusiasm.
I'm the author of the article. Yes, I agree. I think AI reviewers, in their current state, are essentially glorified linters. Much of what they excel at can already be achieved with linting. However, I believe their edge lies in spotting semantic mistakes, whereas linters are ideally suited for syntactic or stylistic issues.
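As a toy illustration of that split (my own example, not from the article): the snippet below is syntactically and stylistically clean, so no linter rule fires, but the logic is wrong in a way a reviewer reading for intent - human or AI - could catch:

```python
def apply_discount(price: float, discount_pct: float) -> float:
    # Bug: subtracts the percentage value itself rather than the
    # computed discount amount. The code is valid and tidy, so a
    # linter has nothing to complain about.
    return price - discount_pct

def apply_discount_fixed(price: float, discount_pct: float) -> float:
    # Intended behaviour: reduce the price by discount_pct percent.
    return price * (1 - discount_pct / 100)
```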