On the other side of the equation, I've been spending much more time on code review for an open source project I maintain: developers are much more productive, but I still review at the same speed.
The real issue is that I can't trust the AI-generated code, or trust the AI to do the code review for me. Some issues I see repeatedly:
- In my experience, the AI doesn't integrate well with the code that's already there: it often rewrites existing functionality and tends not to adhere to the project's conventions, using instead whatever patterns it was trained on.
- The AI often lacks depth on more complex issues. Because it doesn't see the broader implications of a change, it often doesn't write the tests that would cover them. The developers who wrote the PRs accept the AI-generated tests without much investigation into the code base, and since the changes pass the (also insufficient) tests, they send the PR to code review.
- With AI, I think (?) I'm more often the one doing the careful deep dive into the project and re-designing the generated code during code review. In a way, it's indirect re-prompting.
I'm very happy with the increased flow of PRs: they push the project forward with great ideas for what to implement, and I'm glad about the productivity boost from AI. With AI, developers are also bolder in their contributions.
But this doesn't scale -- otherwise I'll spend all my time doing code review :) I hope the AIs get better quickly.