[2306.13119] Adversarial Resilience in Sequential Prediction via Abstention



Download a PDF of the paper titled Adversarial Resilience in Sequential Prediction via Abstention, by Surbhi Goel and 3 other authors


Abstract: We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples. Algorithms designed to handle purely stochastic data tend to fail in the presence of such adversarial examples, often leading to erroneous predictions. This is undesirable in many high-stakes applications such as medical recommendations, where abstaining from predictions on adversarial examples is preferable to misclassification. On the other hand, assuming fully adversarial data leads to very pessimistic bounds that are often vacuous in practice.

To capture this motivation, we propose a new model of sequential prediction that sits between the purely stochastic and fully adversarial settings by allowing the learner to abstain from making a prediction at no cost on adversarial examples. Assuming access to the marginal distribution on the non-adversarial examples, we design a learner whose error scales with the VC dimension of the hypothesis class (mirroring the stochastic setting), as opposed to the Littlestone dimension, which characterizes the fully adversarial setting. Furthermore, we design a learner for VC dimension 1 classes that works even in the absence of access to the marginal distribution. Our key technical contribution is a novel measure for quantifying uncertainty for learning VC classes, which may be of independent interest.
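To make the interaction model concrete, here is a minimal, hypothetical Python sketch of a sequential-prediction-with-abstention protocol for a VC dimension 1 class (thresholds on [0, 1]). It is not the paper's algorithm or error measure: the toy learner, the adversary's strategy, and the cost bookkeeping below are assumptions made for illustration only. The toy learner abstains whenever the labels seen so far do not pin down the answer, and abstention is treated as costly only on non-adversarial (stochastic) examples, following the abstract.

# Illustrative sketch only (not the paper's algorithm): a sequential prediction
# protocol where the learner may abstain, for the threshold class on [0, 1].
import random

ABSTAIN = "abstain"

class ThresholdLearner:
    """Toy learner: predicts only when every threshold consistent with the
    data so far agrees on the label, and abstains otherwise."""

    def __init__(self):
        self.low, self.high = 0.0, 1.0  # interval of thresholds still consistent

    def predict(self, x):
        if x < self.low:
            return 0  # all consistent thresholds label x as 0
        if x > self.high:
            return 1  # all consistent thresholds label x as 1
        return ABSTAIN  # consistent thresholds disagree: abstain

    def update(self, x, y):
        # Clean labels come from the true threshold, so shrink the interval.
        if y == 0:
            self.low = max(self.low, x)
        else:
            self.high = min(self.high, x)


def run_protocol(rounds=1000, true_threshold=0.6, adv_rate=0.1, seed=0):
    rng = random.Random(seed)
    learner = ThresholdLearner()
    mistakes, abstentions_on_stochastic = 0, 0
    for _ in range(rounds):
        adversarial = rng.random() < adv_rate
        # Adversarial examples are clean-label: placed near the decision
        # boundary but still labeled by the true threshold.
        x = true_threshold + rng.uniform(-0.01, 0.01) if adversarial else rng.random()
        y = int(x >= true_threshold)
        pred = learner.predict(x)
        if pred == ABSTAIN:
            if not adversarial:
                # Assumed cost model: abstention is free only on adversarial points.
                abstentions_on_stochastic += 1
        elif pred != y:
            mistakes += 1  # misclassification is always costly
        learner.update(x, y)
    return mistakes, abstentions_on_stochastic


if __name__ == "__main__":
    print(run_protocol())

In this toy run the learner never misclassifies, and its abstentions on stochastic examples shrink as the consistent interval narrows; the paper's actual guarantees are stated in terms of the VC dimension and its new uncertainty measure, not this version-space heuristic.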

Submission history

From: Abhishek Shetty [view email]
[v1]
Thu, 22 Jun 2023 17:44:22 UTC (768 KB)
[v2]
Thu, 25 Jan 2024 02:44:52 UTC (36 KB)


