Depending on what you mean by "that", it is exactly the issue. I have been following AI since the 1970s, so the issues are nothing new to me.
All ChatGPT 'knows', if we can even use that term, is how to select the next word, given the previous ones. Truth and accuracy are not even a consideration. In a sense it is always just "making stuff up", since it has no understanding (in the human sense) of what it is doing. What is surprising is how well it does at producing the prose equivalent of elevator music.
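To make that concrete, here is a deliberately crude sketch (plain Python, made-up scores, nothing like ChatGPT's actual architecture) of what "selecting the next word" amounts to: turn scores for candidate words into probabilities and sample one. Notice that nothing in the procedure consults the world or checks whether the result is true.

    # Toy illustration of the selection step only (not ChatGPT's actual model):
    # given scores for candidate next words, pick one in proportion to probability.
    # Nothing here consults facts or checks truth.
    import math
    import random

    def sample_next_word(scores: dict[str, float], temperature: float = 1.0) -> str:
        """Turn raw scores into probabilities (softmax) and sample one word."""
        words = list(scores)
        exps = [math.exp(scores[w] / temperature) for w in words]
        total = sum(exps)
        return random.choices(words, weights=[e / total for e in exps])[0]

    # Made-up scores a model might assign after "The capital of Australia is":
    candidates = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 1.2}
    print(sample_next_word(candidates))  # often "Canberra", sometimes "Sydney" or "Melbourne"

A plausible-but-wrong continuation like "Sydney" comes out of exactly the same machinery as the right one, which is all I mean by "making stuff up".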
Other than an increase in sophistication (and hardware requirements), we are not much beyond where we were in the 1970s. I remember implementing Eliza on my Apple ][, and some of my grad students were convinced that the program was intelligent, even after I showed them the source code. People hoping to make a buck from LLMs seem to be the easiest to convince.
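For anyone who never saw Eliza, the whole trick is a list of keyword patterns with canned rephrasings. The fragment below is a from-memory sketch in Python (not my original Apple ][ version), and it is essentially all there was to it:

    # Minimal Eliza-style responder: match a keyword pattern, echo a canned rephrasing.
    import random
    import re

    RULES = [
        (re.compile(r"\bi am (.+)", re.I),   ["Why do you say you are {0}?",
                                              "How long have you been {0}?"]),
        (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?"]),
        (re.compile(r"\bmy (\w+)", re.I),    ["Tell me more about your {0}."]),
        (re.compile(r".*"),                  ["Please go on.",
                                              "Can you elaborate on that?"]),
    ]

    def respond(user_input: str) -> str:
        # First matching rule wins; the catch-all at the end always matches.
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                return random.choice(templates).format(*match.groups())

    print(respond("I am worried about my thesis"))
    # e.g. "Why do you say you are worried about my thesis?"

No model of the conversation, no memory, no understanding -- just pattern substitution.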
All of this is an example of the overreach of formalism (and yes, I know what that means). Try reading McGilchrist's The Matter with Things for a discussion in far more depth than I have time for here.
Your point seems to be that ChatGPT makes mistakes (hallucinates) because it does not have access to information (in the Shannon sense). That was exactly my point. If an LLM (it probably does not need to be that large) had access to, for example, only examples of well-written WL code (e.g., the internals of Mathematica itself), then it would do a pretty good job of creating WL code -- especially for problems similar to ones it had already seen.
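As a toy version of that restricted-corpus idea (plain Python, an invented mini-corpus of code-like strings, nothing to do with Mathematica's actual internals): a model that has only ever seen well-formed examples can only recombine well-formed pieces.

    # Toy version of "train only on well-formed examples": a bigram model over a
    # tiny invented corpus of code-like strings. It can only recombine token
    # sequences it has already seen.
    import random
    from collections import defaultdict, Counter

    corpus = [
        "f = Function[x, x^2]",
        "g = Function[x, Sin[x]]",
        "Map[f, Range[10]]",
        "Map[g, Range[10]]",
    ]

    follows = defaultdict(Counter)
    for line in corpus:
        toks = line.split() + ["<end>"]
        for prev, nxt in zip(toks, toks[1:]):
            follows[prev][nxt] += 1

    def generate(start: str, max_steps: int = 8) -> str:
        """Repeatedly pick a next token in proportion to how often it followed the last."""
        out = [start]
        for _ in range(max_steps):
            options = follows[out[-1]]
            if not options:
                break
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            if nxt == "<end>":
                break
            out.append(nxt)
        return " ".join(out)

    print(generate("f"))       # "f = Function[x, x^2]" or "f = Function[x, Sin[x]]"
    print(generate("Map[f,"))  # "Map[f, Range[10]]"

Scaled up enormously, that recombination of familiar patterns is roughly why such a model would look competent on problems close to its training data and flounder elsewhere.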
I really suggest that you read the book.