On average, LLMs behave as post hoc explainers, producing explanations whose faithfulness is on par with existing methods such as LIME and gradient-based attribution.
Oct 27, 2023 · LLM-generated explanations perform on par with state-of-the-art post hoc explainers by leveraging ICL examples and the LLMs' internal knowledge.
We propose three novel approaches that exploit the in-context learning (ICL) capabilities of LLMs to explain the predictions made by other complex models.
Oct 9, 2023 · We propose a novel framework, In-Context Explainers, comprising three novel approaches that exploit the ICL capabilities of LLMs to explain the predictions made by other complex models.
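The snippets above describe prompting an LLM with in-context (input, prediction) pairs from the model being explained and asking it to identify influential features. A minimal sketch of such an ICL prompt builder is below; the helper name `build_icl_explainer_prompt` and the exact instruction wording are assumptions for illustration, not the paper's actual templates.

```python
# Sketch: construct an in-context-learning (ICL) prompt that asks an LLM
# to act as a post hoc explainer for another model's predictions.
# All names and wording here are illustrative assumptions.

def build_icl_explainer_prompt(feature_names, examples, query):
    """examples: list of (feature_values, model_prediction) pairs drawn from
    the model being explained; query: feature values of the instance to explain."""
    lines = ["Each line shows a model's input features and its prediction."]
    for values, pred in examples:
        pairs = ", ".join(f"{n}={v}" for n, v in zip(feature_names, values))
        lines.append(f"Input: {pairs} -> Prediction: {pred}")
    pairs = ", ".join(f"{n}={v}" for n, v in zip(feature_names, query))
    lines.append(f"Input: {pairs}")
    lines.append("Which features most influenced the model's predictions? "
                 "List them from most to least important.")
    return "\n".join(lines)

prompt = build_icl_explainer_prompt(
    ["age", "income"],
    [((25, 30000), 0), ((60, 90000), 1)],
    (41, 55000),
)
```

The returned string would then be sent to an LLM; the response is parsed as a feature ranking and compared against LIME or gradient-based attributions for faithfulness.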
Are Large Language Models Post Hoc Explainers? Nicholas Kroeger · Dan Ley · Satyapriya Krishna · Chirag Agarwal · Himabindu Lakkaraju
The notebooks folder contains demonstrations such as model training and model inspection. The outputs folder stores results from post-hoc explainers and LLM ...
Aug 11, 2024 · To this end, we propose a novel prompt that serves to position LLMs as post-hoc correctors, refining predictions made by an arbitrary ML model.
More specifically, we construct automated natural-language rationales that embed insights from post hoc explanations, providing corrective signals to LLMs.
Jul 8, 2024 · From large language models to small logic programs: building global explanations from disagreeing local post-hoc explainers. Open access ...
Oct 9, 2023 · Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications.