A great example of this is asking an AI to ingest an advanced maths paper and restate it with detailed annotations. This should be simple, but the AI fails at it.
A lot of maths is terse, and it can take years to grok a very advanced topic. E.g. the ABC conjecture is supposedly solved by inter-universal Teichmüller theory (https://en.wikipedia.org/wiki/Inter-universal_Teichm%C3%BCll...), but that theory is tough even for the smartest minds, so whether it's actually solved is still up in the air: not enough mathematicians grok the theory yet for there to be a consensus. It hasn't been dismissed as nonsense; the paper appears to make sense. It's just a very advanced topic that takes years to understand.
So, as someone wanting to understand such a topic, you may be tempted to have an AI read the paper and produce annotations and summaries, or to have it work through numeric examples of the formulas.
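For concreteness, here's the kind of numeric example I mean, stated for the ABC conjecture itself rather than for anything inside Mochizuki's papers (this part is genuinely easy; the papers' internal machinery is where the AI chokes). The conjecture concerns coprime positive integers with a + b = c, and says that for every eps > 0 only finitely many such triples satisfy c > rad(abc)^(1+eps), where rad(n) is the product of the distinct primes dividing n. A well-known high-quality triple:

    3 + 125 = 128          (a = 3, b = 5^3, c = 2^7, pairwise coprime)
    rad(abc) = rad(2^7 * 3 * 5^3) = 2 * 3 * 5 = 30
    128 > 30, with quality log(128)/log(30) ≈ 1.43

That's all a numeric example needs to be, and it's exactly this kind of concrete grounding the AI keeps botching once the formulas come from the paper itself rather than from a well-trodden Wikipedia page.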
Guess what happens? COMPLETE AND TOTAL FAILURE. The AI can't do it. Because nobody has posted worked numeric examples or annotations of the paper online, there's nothing for the AI to go on. It produces numeric examples with mistakes that don't even match the statement they're supposed to illustrate. Often it gives up with statements like, "At this point the numeric example fails to solve the solution but you can imagine if it did". You can ask it to try again and again, but it just keeps failing. Even simple, well-known papers generally don't work unless someone has already posted a simple explanation online that the AI can regurgitate.
Which is pretty damning, right? Reading a paper, giving numeric examples of what it states, and writing plain-English summaries of the densest portions should be exactly what a language processing system does best. We're not even asking it to come up with original ideas here; we're asking it to summarise well-known mathematical papers. The only time I've seen it succeed is when someone has already written such an explanation on MathOverflow.