Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning

Prithviraj Dasgupta, Joseph Collins

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.
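The record does not include the paper's formulation details, so the following is an illustrative sketch only, not the authors' RBSG model: a one-shot Bayesian Stackelberg game between a learner (leader) who commits to a mixed strategy and several adversary types (followers) who best-respond, with the leader maximizing its expected payoff over a prior on types. All payoff numbers and type names here are invented for illustration, and the solver is a simple grid search rather than the multiple-LP methods used in the Stackelberg security-game literature.

```python
# Illustrative Bayesian Stackelberg sketch (hypothetical payoffs, not from the paper).
# Leader (learner) has two actions; each follower (adversary) type best-responds
# to the leader's committed mixed strategy (p, 1 - p).

LEADER = {   # LEADER[type][i][j] = leader payoff for leader action i, follower action j
    "evader": [[2, -1], [0, 1]],
    "probe":  [[1, 0], [-1, 2]],
}
FOLLOWER = {  # FOLLOWER[type][i][j] = follower payoff in the same cells
    "evader": [[-2, 1], [1, -1]],
    "probe":  [[0, 1], [2, -2]],
}
PRIOR = {"evader": 0.6, "probe": 0.4}  # leader's belief over adversary types

def best_response(ftype, p):
    """Follower action maximizing expected payoff under leader mix (p, 1 - p)."""
    utility = [p * FOLLOWER[ftype][0][j] + (1 - p) * FOLLOWER[ftype][1][j]
               for j in (0, 1)]
    return max(range(2), key=lambda j: utility[j])

def leader_value(p):
    """Leader's expected payoff over types, assuming each type best-responds."""
    total = 0.0
    for ftype, prob in PRIOR.items():
        j = best_response(ftype, p)
        total += prob * (p * LEADER[ftype][0][j] + (1 - p) * LEADER[ftype][1][j])
    return total

# Grid-search the leader's commitment probability for action 0.
best_p = max((i / 100 for i in range(101)), key=leader_value)
print(f"leader commits to action 0 with prob {best_p:.2f}, "
      f"expected value {leader_value(best_p):.3f}")
```

In the repeated setting the paper targets, the leader would additionally update its belief over adversary types between rounds; this sketch shows only the single-round commitment step.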

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Pages: 194-195
Number of pages: 2
ISBN (Electronic): 9781577357940
State: Published - Jan 1 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9 2017 - Nov 11 2017

Publication series

Name: AAAI Fall Symposium - Technical Report
Volume: FS-17-01 - FS-17-05

Other

Other: 2017 AAAI Fall Symposium
Country: United States
City: Arlington
Period: 11/9/17 - 11/11/17

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Dasgupta, P., & Collins, J. (2017). Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning. In FS-17-01: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind (pp. 194-195). (AAAI Fall Symposium - Technical Report; Vol. FS-17-01 - FS-17-05). AI Access Foundation.