I'd say you have three options:
1. Reject it on the grounds that it's too large to meaningfully review. Whether they used AI or not, this effectively asks them to start over in an iterative process where you review every version and keep complexity in check. You'll need the right power and/or standing for this to be a reasonable option; at many organisations you'd get into trouble for "blocking progress". If the people who pay you don't value reliability or maintainability, and you couldn't convince them that they should, that's a tough one, but it is how it is.
2. Actually review it in good faith. This takes a ton of time for large, over-engineered changes, but as the reviewer it is usually your job to understand the code and take on responsibility for it. You could offer to help by addressing any issues you find yourself rather than making them do it; they might appreciate that. This feels like a compromise, but you could still be seen as the person "blocking progress", despite, from my perspective, biting the bullet here.
3. Accept it without understanding it. For this you could _test_ it and give feedback on the behaviour, but you'd ignore the architecture, maintainability, etc. You could still collaboratively improve it after it goes live. I've seen this happen to big (non-AI-generated) PRs a lot, and it's not always a bad thing: it might not be good code, but it could well be good business regardless.
Now, however you resolve this one, it seems unlikely to be the last time you'll struggle to work with that person. Can they change, and do they want to? Do you want to change? If you can't answer either question with a yes, you'll probably want to look for ways of not working with them going forward.