This is a fundamental problem with AI: humans will treat it differently because it's AI. You might come up with the perfect AI driving hardware and software that mimics the best of human drivers, but it's not a human, and that changes the outcomes.
We've seen examples of this in SF, where people put traffic cones on the hoods of Waymos to stop them, sometimes for good reasons (e.g., the car driving into a road closed for construction) and sometimes probably not.
I can also imagine human drivers treating self-driving cars on the road very differently, essentially through lack of fear. Cut one off? It has no driver who might, in a fit of rage, run you off the road or pull a gun on you.
You see a similar sort of thing with apartment buildings in NYC. Many have doormen. Will a doorman prevent someone from stealing something or gaining unauthorized entry? Probably not, but most people aren't that determined. The presence of a human adds a whole bunch of risk factors that an AI doesn't.
We see it with alarms on houses. People are often far more afraid of dogs than alarms, or even of the possibility that someone inside has a gun.
So if this car had a driver, this wouldn't have happened. I'm sure software can be written to deal with this particular situation, but you'll be fighting a never-ending series of human behaviors that only happen because there's no driver.