There are problems LLMs can't solve (a proof)
If a problem lies outside a large language model's training distribution and also requires more context than the model's maximum window can hold, that problem is fundamentally unsolvable by the LLM: the weights cannot encode what training never covered, and the prompt cannot supply information that exceeds the window.
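To make the logical skeleton explicit, here is a minimal sketch in Lean 4. The predicates `InDistribution`, `FitsWindow`, and `Solvable`, along with the `coverage` premise tying them together, are my labels for the argument's implicit assumptions; none of them are defined in the text above.

```lean
-- A sketch of the argument, not a claim about real LLMs.
-- All predicates below are hypothetical labels for the
-- argument's unstated assumptions.
example
    (Problem : Type)
    (InDistribution FitsWindow Solvable : Problem → Prop)
    -- Coverage premise (assumed): an LLM can solve a problem only
    -- via its training distribution or via in-context information.
    (coverage : ∀ p, Solvable p → InDistribution p ∨ FitsWindow p)
    (p : Problem)
    (hOut : ¬ InDistribution p)   -- p is out of distribution
    (hBig : ¬ FitsWindow p)       -- p exceeds the context window
    : ¬ Solvable p :=
  fun hs => (coverage p hs).elim hOut hBig
```

The conclusion rests entirely on the coverage premise: that an LLM has no route to a solution other than what training encoded or what the prompt supplies. Anyone rejecting the argument must reject that premise, since the rest is modus tollens.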
If belief is necessary for understanding, and if LLMs lack beliefs, then LLMs cannot be said to understand in the human sense. Despite sharing a linguistic interface with humans, LLM minds likely differ categorically from human minds in experiential terms.