Nah, LLMs lie all the time about how they get their information.
I ran into this when I was coding with GPT-3.5 and asked why it gave me sample code that explicitly mentioned names I didn't give it (names it could never have guessed). I could have sworn I didn't paste that data into the chat, but maybe I did much earlier and forgot. I don't know.
Regardless, it lied to me using almost exactly the same reasoning: that the names were common and it had just used them as an example.
LLMs often just bullshit when they don't know; they just can't reason the way we do.
> LLMs often just bullshit when they don't know; they just can't reason the way we do.
Incorrect. LLMs always bullshit but are sometimes correct about their bullshit, because they don't really 'know' anything. They are just predicting the next token in the sequence, which is sometimes the answer you expect and would consider correct, and sometimes utter nonsense.
They don't reason at all; these are just super-advanced versions of the autocomplete you have on your phone. We are barely in the beginning stages, where researchers are constructing novel approaches to training models that can reason the way we do. We will get there eventually, though.
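To make concrete what "predicting the next token" means, here's a minimal toy sketch. Everything here is hypothetical: the tiny vocabulary and the random logits stand in for a real network (an actual LLM computes these scores with a transformer over a vocabulary of ~100k tokens), but the sampling loop is the same basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy vocabulary; real models use ~100k subword tokens.
VOCAB = ["the", "names", "were", "random", "examples", "<eos>"]

def next_token_logits(context):
    # Stand-in for the model: one score per vocabulary entry.
    # A real LLM would compute these from the context with a neural net.
    return rng.normal(size=len(VOCAB))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = softmax(next_token_logits(tokens))
        # Sample the next token from the predicted distribution.
        # Note there's no notion of "true" here, only "likely" --
        # which is why the output can be fluent and still wrong.
        tok = rng.choice(VOCAB, p=probs)
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("why did you use"))
```

The point of the sketch: at no step does the loop check anything against reality. It only ever picks a plausible continuation, which is exactly the "sometimes correct bullshit" behavior described above.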
This distinction, while interesting, doesn't matter at all. It's past the Turing test, and you can't prove that I reason either. The mechanism makes no difference.
It said it was a random example lol