It took me some time to collect my thoughts on this.
One: I don't believe they have solved use-after-free. Marking memory as freed and crashing at runtime is about as good as bounds-checked indexing: it turns RCE into DoS, which is reasonable, but it would be much better to solve it provably at compile time, rejecting invalid programs (those that use memory after it has been deallocated). But enough about that.
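For contrast, here is roughly what compile-time rejection looks like in Rust (a minimal illustration of the property I want, not a claim about the language in question): the borrow checker refuses to compile a read through a dangling reference, so the invalid program never runs at all.

```rust
fn main() {
    let r;
    {
        let s = String::from("hello"); // owned heap allocation
        r = &s;                        // borrow of `s`
    }                                  // `s` is dropped (freed) here
    // error[E0597]: `s` does not live long enough
    println!("{}", r); // rejected at compile time; deliberately does not build
}
```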
I want to write about memory leaks. The hard part of solving memory leaks is not that automatically cleaning up memory is hard. That is a solved problem, the domain of automatic memory management/reclamation, aka garbage collection. However, I don't think they've gone through the rigor of showing why this is significantly different from, say, segmented stacks (where each stack segment is your arena). By "significantly different" I mean you should be able to prove this enables language semantics that are not possible with growable stacks, not just make nebulous claims about performance.
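To make the arena/stack-segment correspondence concrete, here is a minimal bump-arena sketch in Rust (names and sizes are mine, purely illustrative): allocation is a pointer bump, and the whole block is reclaimed at once, which is exactly the shape of popping a stack segment.

```rust
/// A minimal bump arena: allocation bumps an offset, and the whole
/// block dies together when the arena drops -- the same discipline
/// as a stack segment. Illustrative sketch only.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn with_capacity(cap: usize) -> Self {
        Arena { buf: vec![0; cap], used: 0 }
    }

    /// O(1) allocate: bump the offset, fail if the segment is full.
    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        if self.used + size > self.buf.len() {
            return None; // segment exhausted; a segmented stack would chain a new one here
        }
        let start = self.used;
        self.used += size;
        Some(&mut self.buf[start..self.used])
    }
}

fn main() {
    let mut arena = Arena::with_capacity(1024);
    let slot = arena.alloc(128).expect("first allocation fits");
    slot[0] = 42;
    // No individual frees: everything is reclaimed when `arena` drops.
}
```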
No, the hard part of solving memory leaks is that they need to be solved for a specific class of program: one that must handle resource exhaustion (otherwise you can assume infinite memory, and leaks are not a bug). The actually hard case is when there are no memory leaks, in the sense that your program has correctly cleaned itself up everywhere it can, and you are still exhausting resources. Now you must selectively crash tasks, in O(1) memory, because you cannot allocate; those tasks need to be able to handle being crashed; and they must not spawn so many new tasks that they overwhelm the system again. Deciding this correctly in the general case is equivalent to the halting problem, by the way, so automatic solutions are provably impossible.
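To make the constraint concrete, here is a sketch in Rust of the non-allocating exhaustion path, using the stable `Vec::try_reserve` API; the `Task` type and the shedding policy are hypothetical stand-ins for whatever unit of work a runtime could actually kill.

```rust
use std::collections::TryReserveError;

/// Hypothetical task handle; stands in for whatever unit of work
/// the runtime can crash to reclaim memory.
struct Task;

/// Sketch: grow a buffer fallibly and, on exhaustion, shed tasks
/// using only preallocated state -- this path must not allocate.
fn grow_or_shed(
    buf: &mut Vec<u8>,
    extra: usize,
    tasks: &mut Vec<Task>,
) -> Result<(), TryReserveError> {
    loop {
        match buf.try_reserve(extra) {
            Ok(()) => return Ok(()),
            Err(e) => {
                // O(1) shed: pop a task (frees its memory, allocates nothing)
                // and retry. The killed task must tolerate being crashed, and
                // its restart policy must not immediately re-exhaust memory.
                if tasks.pop().is_none() {
                    return Err(e); // nothing left to shed
                }
            }
        }
    }
}

fn main() {
    let mut buf = Vec::new();
    let mut tasks = vec![Task, Task];
    grow_or_shed(&mut buf, 4096, &mut tasks).expect("grew, or shed everything trying");
}
```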
I don't believe that can be solved by semantically inventing an infinite stack. It is a hard architectural problem, which is why people don't bother to solve it: they assume infinite memory, crash the whole program as needed, and make a best effort at garbage collection.
All that said, this is a very interesting design space. We are trapped in the malloc/free model of the universe, which is a known performance and correctness pit, and experimenting with different allocation semantics is a good thing. I like where C3 and Zig's heads are at here, because ignoring allocators is actually a huge problem in Rust in practice.
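As a sketch of what taking allocators seriously looks like: the `Alloc` trait and `Budget` type below are hypothetical stand-ins, loosely modeled on Zig's pass-the-allocator convention (Rust's real `std::alloc::Allocator` trait is still unstable), showing how explicit allocator parameters make exhaustion a visible, testable code path instead of a hidden global malloc.

```rust
/// Hypothetical allocator interface; a stand-in for Zig's
/// `std.mem.Allocator` parameter or Rust's unstable `Allocator` trait.
/// Simplified to hand out owned buffers rather than raw pointers.
trait Alloc {
    fn alloc(&mut self, size: usize) -> Option<Vec<u8>>;
}

/// Toy fixed-budget allocator: refuses requests past its budget,
/// so exhaustion surfaces to the caller instead of aborting.
struct Budget {
    remaining: usize,
}

impl Alloc for Budget {
    fn alloc(&mut self, size: usize) -> Option<Vec<u8>> {
        if size > self.remaining {
            return None;
        }
        self.remaining -= size;
        Some(vec![0; size])
    }
}

/// The C3/Zig discipline: anything that allocates takes the allocator
/// explicitly, so the caller picks the strategy and sees the failure.
fn read_message<A: Alloc>(a: &mut A, len: usize) -> Option<Vec<u8>> {
    a.alloc(len)
}

fn main() {
    let mut budget = Budget { remaining: 1024 };
    assert!(read_message(&mut budget, 512).is_some());
    assert!(read_message(&mut budget, 1024).is_none()); // exhaustion is just a value
}
```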