[2003.03546] Adversarial Machine Learning: Bayesian Perspectives


Download a PDF of the paper titled Adversarial Machine Learning: Bayesian Perspectives, by David Rios Insua and 3 other authors

Abstract: Adversarial Machine Learning (AML) is emerging as a major field aimed at protecting machine learning (ML) systems against security threats: in certain scenarios there may be adversaries that actively manipulate input data to fool learning systems. This creates a new class of security vulnerabilities that ML systems may face, and a new desirable property called adversarial robustness essential to trust operations based on ML outputs. Most work in AML is built upon a game-theoretic modelling of the conflict between a learning system and an adversary, ready to manipulate input data. This assumes that each agent knows their opponent's interests and uncertainty judgments, facilitating inferences based on Nash equilibria. However, such common knowledge assumption is not realistic in the security scenarios typical of AML. After reviewing such game-theoretic approaches, we discuss the benefits that Bayesian perspectives provide when defending ML-based systems. We demonstrate how the Bayesian approach allows us to explicitly model our uncertainty about the opponent's beliefs and interests, relaxing unrealistic assumptions, and providing more robust inferences. We illustrate this approach in supervised learning settings, and identify relevant future research problems.
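The abstract's core contrast (a Nash analysis that treats the attacker's behaviour as common knowledge, versus a Bayesian defender who places a prior over the attacker's unknown characteristics and averages it out) can be sketched in a toy evasion setting. Everything below — the score distributions, the Beta prior on evasion effort, and the threshold rule — is an illustrative assumption, not the paper's actual model:

```python
import random

random.seed(0)

# Toy adversarial classification: the defender flags a scalar score s >= t.
# Benign scores ~ U(0, 1); attack scores ~ U(0.8, 1.8), but the attacker
# lowers them by an unknown evasion effort e to slip under the threshold.
# A game-theoretic (Nash) analysis would fix e as common knowledge; the
# Bayesian defender instead puts a prior on e and minimizes expected loss.

def empirical_loss(threshold, effort, n=4000):
    """Monte Carlo estimate of 0-1 loss: false alarms plus missed attacks."""
    fp = sum(random.uniform(0.0, 1.0) >= threshold for _ in range(n))
    fn = sum(random.uniform(0.8, 1.8) - effort < threshold for _ in range(n))
    return (fp + fn) / (2 * n)

def bayes_threshold(prior_samples, grid):
    """Choose the threshold minimizing loss averaged over sampled efforts."""
    def expected(t):
        return sum(empirical_loss(t, e) for e in prior_samples) / len(prior_samples)
    return min(grid, key=expected)

# Hypothetical prior belief: evasion effort is most likely moderate,
# modelled as Beta(2, 2) rescaled to [0, 0.8].
prior = [0.8 * random.betavariate(2, 2) for _ in range(50)]
grid = [i / 100 for i in range(20, 100)]
t_star = bayes_threshold(prior, grid)
print(f"Bayes-optimal threshold under effort uncertainty: {t_star:.2f}")
```

The point of the sketch is the last two lines: rather than solving for a best response against one assumed attacker, the defender's decision is robust in expectation over everything the prior deems plausible, which is the relaxation of the common knowledge assumption the abstract describes.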

Submission history

From: Victor Gallego
Sat, 7 Mar 2020 10:30:43 UTC (203 KB)
Thu, 22 Feb 2024 14:32:28 UTC (441 KB)
