> Any news content created using generative AI must also be reviewed by a human employee "with editorial control" before publication.

To emphasize this: it's important that the organization assume responsibility, just as it would with traditional human-generated 'content'.
What we don't want is for these disclaimers to be used the way tech companies deploying AI use theirs: to weasel out of responsibility.
"Oh no, it's 'AI', who could have ever foreseen the possibility that it would make stuff up, and lie about it confidently, with terrible effects. Aw, shucks: AI, what can ya do. We only designed and deployed this system, and are totally innocent of any behavior of the system."
Also, don't turn this into a compliance-theatre game, the way we have with information security.
"We paid for these compliance products, and got our certifications, and have our processes, so who ever could have thought we'd be compromised."
(Other than anyone who knows anything about these systems, and knows that the stacks, implementations, and processes are mostly a load of performative poo, chosen by people who really don't care about security.)
Hold the news orgs responsible for 'AI' use. The first time a news report wrongly defames someone, or gets someone killed, a good lawsuit should wipe out everything they saved on staffing.