This isn't an AI problem, it's an operating systems problem.
AI-generated code is so much less trustworthy than software written and reviewed by humans that it is exposing the problem for all to see.
Process isolation hasn't been taken seriously because neither UNIX nor Microsoft did a good job of it.
Well-designed security models don't sell computers or operating systems, apparently.
That's not to say that the solution is unknown; there are many examples of people getting it right:
Plan 9, seL4, Fuchsia, Helios, and too many smaller hobby operating systems to count.
The problem is widespread poor taste. Decision makers (meaning the software folks in charge of making technical decisions) don't understand why these things are important, or can't conceive of the correct way to build such systems.
It needs to become embarrassing for decision makers not to understand sandboxing technologies and modern security models, and anyone who assumes we can trust software by default needs to be laughed out of the room.