Our experiments on sentiment analysis show that feature feedback methods perform significantly better on various natural out-of-domain datasets despite comparable in-domain evaluations. By contrast, on natural language inference, performance remains comparable.
Oct 14, 2021
This work speculates that while existing methods for incorporating feature feedback have delivered negligible in-sample performance gains, ...
On Jan 1, 2022, Anurag Katakkar and others published Practical Benefits of Feature Feedback Under Distribution Shift.
Feature importance studies (as described in chapter 5) can help you detect over time whether a model is biasing itself by giving more and more weight to a ...
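One way to operationalize this is a minimal sketch like the following, which compares a model's normalized feature-importance snapshots across retraining runs and flags features whose share of total importance keeps growing. The feature names, snapshot values, and the 0.05 growth threshold are all illustrative assumptions, not from the source.

```python
# Illustrative sketch: monitor per-feature importance across retraining runs
# and flag features whose share of total importance has grown markedly.
# The snapshot values below are made up for demonstration purposes.

def growing_features(snapshots, min_growth=0.05):
    """Flag features whose normalized importance grew by more than
    `min_growth` between the first and last snapshot."""
    def normalize(importances):
        total = sum(importances.values())
        return {f: v / total for f, v in importances.items()}

    first, last = normalize(snapshots[0]), normalize(snapshots[-1])
    return [f for f in last if last[f] - first.get(f, 0.0) > min_growth]

# Hypothetical importance snapshots from three successive training runs.
history = [
    {"ngram_good": 0.30, "ngram_bad": 0.30, "length": 0.40},
    {"ngram_good": 0.35, "ngram_bad": 0.30, "length": 0.35},
    {"ngram_good": 0.55, "ngram_bad": 0.25, "length": 0.20},
]
print(growing_features(history))  # "ngram_good" grew from 0.30 to 0.55
```

In practice the snapshots would come from whatever importance measure the model supports (e.g. permutation importance), logged at each retraining.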
For practical ML applications, there is often a need to verify the validity of model inputs, which can be done by checking if the value is within a specified ...
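A minimal sketch of that kind of input validation is below, assuming a tabular input with per-feature bounds; the feature names and ranges are hypothetical, not from the source.

```python
# Hypothetical sketch: check each model input against an expected value range
# and report the features that fall outside it. Names/bounds are assumptions.

EXPECTED_RANGES = {
    "age": (0, 120),
    "review_length": (1, 10_000),
    "sentiment_score": (-1.0, 1.0),
}

def validate_input(row):
    """Return the list of features whose values are missing or out of range."""
    violations = []
    for feature, (lo, hi) in EXPECTED_RANGES.items():
        value = row.get(feature)
        if value is None or not (lo <= value <= hi):
            violations.append(feature)
    return violations

print(validate_input({"age": 34, "review_length": 250, "sentiment_score": 1.7}))
# flags "sentiment_score", which exceeds its upper bound of 1.0
```

A production system would typically attach such checks to the serving path and log or reject violating rows rather than just listing them.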
This study presents a technique for detecting out-of-distribution (OOD) samples in model-based optimization (MBO) guided by machine learning.
Apr 10, 2024: We demonstrate that learned augmentations make models more robust and statistically fair in distribution and out of distribution.
While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which specific features have ...
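One simple way to localize a shift to specific features is to run a per-feature two-sample test between reference and current data. The sketch below uses a hand-rolled two-sample Kolmogorov-Smirnov statistic; the synthetic data, feature names, and 0.3 threshold are illustrative assumptions, not the method of the snippet's source.

```python
# Sketch: localize a distribution shift by computing a per-feature two-sample
# Kolmogorov-Smirnov statistic between a reference sample and current data.
# Data, feature names, and the 0.3 threshold are illustrative assumptions.
import random

def ks_statistic(a, b):
    """Maximum distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def shifted_features(reference, current, threshold=0.3):
    """Return the features whose KS statistic exceeds the threshold."""
    return [f for f in reference
            if ks_statistic(reference[f], current[f]) > threshold]

random.seed(0)
reference = {
    "length":   [random.gauss(0, 1) for _ in range(500)],
    "polarity": [random.gauss(0, 1) for _ in range(500)],
}
current = {
    "length":   [random.gauss(0, 1) for _ in range(500)],  # unchanged
    "polarity": [random.gauss(2, 1) for _ in range(500)],  # mean shifted by 2
}
print(shifted_features(reference, current))  # only "polarity" should be flagged
```

For real data one would prefer a vetted implementation such as `scipy.stats.ks_2samp`, with p-values corrected for the number of features tested.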