"how easily any of these systems can be altered to meet an individual or group's agenda."

Did somebody expect otherwise?
Isn't virtually everything an LLM produces really just an "opinion" that is kinda/sorta based on its training data?
As far as I know, there is no automated mechanism for verifying "truth" in chatbot output.