This is a big problem, though I'd be slow to trust anyone purporting to solve it. (Though, to their credit, this Books by People team is more credible than the bog-standard pair of 20-year-old Bay Area techbro grifters I expected.)
Reportedly, Kindle has already been flooded with "AI"-generated books. And I've heard complaints from authors about superficial AI rewritings of their own books being published by scammers. (So, not only "AI, write a YA novel, written to the market, about a coming-of-age young-woman-vampire small-town friends-to-lovers romance," but also "AI, write a new novel in the style of Jane Smith, basically laundering things she's already written" and "AI, copy the top-ranked fiction books in each category on Amazon, and substitute the names of things and how things are worded.")
For now, Kindle already requires publishers/authors to certify which aspects of a book AI tools were used for (e.g., text, illustrations, cover), roughly how the tools were used (e.g., outright generation vs. assistive with heavy human work), and which tools were used. So that self-reporting is already happening somewhere, just not exposed to buyers yet.
That won't stop the dishonest, but at least it will help keep the honest writers honest. For example, if you, an honest writer, consider for a moment using generative AI to first-draft a scene, the awareness that you're required to disclose that use will give you pause, and maybe you'll decide that's not a direction you want to take your work, nor how you want to be known.
Incidentally, I've noticed a lot of angry anti-generative-AI sentiment among creatives like writers and artists, much more than among us techbros. Maybe the difference is that we techbros are generally positioning ourselves to profit from AI: from the copyright violations, from selling AI products to others, and from the investment scams.