I have problems with this blog post, https://thomasrigby.com/posts/it-is-not-your-fault/, but this quote describes a big issue with LLMs:
'There are so many ways an LLM can provide an incorrect answer; the most common being "any answer is higher scoring than zero in the probabilistic sense".'
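That claim is easy to see mechanically. A language model's final layer turns raw scores into probabilities with a softmax, and softmax never assigns zero to any candidate, so sampling always produces *some* answer rather than an abstention. A minimal sketch (assuming the standard softmax-and-sample setup, not anything specific from the linked post):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities; every candidate gets a positive share."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Even an option the model scores very poorly still ends up with a
# nonzero probability, so there is no built-in "I don't know" outcome.
probs = softmax([5.0, 1.0, -3.0])
print(all(p > 0 for p in probs))
```

In other words, "no answer" isn't a possible output of the math; something always scores above zero and can be sampled.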
This is yet another ethical problem with LLM usage: you can't trust the output. Add it to the list alongside environmental concerns, licensing violations (i.e., theft), and employment concerns.