In this paper, we ask the question, “How can we know when language models know, with confidence, the answer to a particular query?”
Recent work has shown that language models (LMs) capture different types of knowledge regarding facts or common sense. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process.
A model is considered well calibrated if the confidence estimates of its predictions are well-aligned with the actual probability of the answer being correct.
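This calibration criterion is commonly quantified with the expected calibration error (ECE). The sketch below, which is illustrative rather than the paper's exact evaluation code, bins predictions by confidence and averages the gap between each bin's accuracy and its mean confidence:

```python
# Sketch: expected calibration error (ECE) over a set of predictions.
# Inputs are hypothetical: a list of model confidences in [0, 1] and a
# parallel list of booleans saying whether each prediction was correct.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - mean confidence| over
    equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 -> last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(accuracy - mean_conf)
    return ece
```

A perfectly calibrated model (e.g. 80% accuracy at 0.8 confidence) yields an ECE of 0; a model that is confident but always wrong gets an ECE close to its confidence.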
Do large language models know what they are talking about? They do know some things: they convert words, sentences, and documents into semantic vectors and capture the relative meanings of pieces of text.
We examine this question from the point of view of calibration. Specifically, we study three strong generative models (T5, BART, and GPT-2) and ask whether their probabilities on QA tasks are well calibrated.
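For a generative model, one common confidence score for an answer (not necessarily the exact score used in the study) is the product of the per-token probabilities of the generated sequence, i.e. the exponential of the summed log-probabilities, optionally length-normalized via the geometric mean:

```python
import math

# Sketch: a hypothetical confidence score for a generated answer, given
# the per-token log-probabilities the model assigned to that answer.

def sequence_confidence(token_logprobs, length_normalize=False):
    """exp(sum of log-probs) = product of token probabilities.
    With length_normalize=True, uses the geometric mean instead,
    so longer answers are not automatically penalized."""
    total = sum(token_logprobs)
    if length_normalize:
        total /= len(token_logprobs)
    return math.exp(total)
```

For example, a two-token answer with per-token probability 0.5 has raw confidence 0.25 but length-normalized confidence 0.5.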
This paper proposes mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts.
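A minimal sketch of one such ensemble, assuming we already have an answer probability distribution from each prompt (the averaging scheme here is one simple choice, not necessarily the paper's):

```python
# Sketch: ensemble answers from several prompts by averaging the answer
# distributions and returning the highest-scoring answer. Inputs are
# hypothetical dicts mapping candidate answer -> probability.

def ensemble_answer(per_prompt_probs):
    combined = {}
    n = len(per_prompt_probs)
    for probs in per_prompt_probs:
        for ans, p in probs.items():
            combined[ans] = combined.get(ans, 0.0) + p / n
    best = max(combined, key=combined.get)
    return best, combined
```

For instance, averaging {"Paris": 0.6, "Lyon": 0.4} and {"Paris": 0.5, "Lyon": 0.5} gives "Paris" a combined score of 0.55.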
A pre-trained-only model would always answer something (possibly hallucinating) rather than reflect on its own lack of knowledge.
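One way such reflection can be bolted on, assuming a reasonably calibrated confidence score is available (both the score and the threshold below are illustrative), is selective answering: abstain whenever confidence falls below a threshold.

```python
# Sketch: selective answering on top of a calibrated confidence score.
# The threshold is a hypothetical operating point; raising it trades
# coverage (how often the model answers) for precision.

def selective_answer(answer, confidence, threshold=0.7):
    return answer if confidence >= threshold else "I don't know"
```
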