Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning

Prithviraj Dasgupta, Joseph Collins

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.
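To make the abstract's idea concrete: in a Bayesian Stackelberg game, the defender (leader) commits to a mixed strategy, and an attacker (follower) of uncertain type observes the commitment and best-responds; the leader maximizes its expected payoff over a prior on attacker types. The sketch below is purely illustrative and is not the authors' RBSG formulation: the payoff matrices, type names, and prior are made up, and the optimal commitment is approximated by a simple grid search.

```python
# Illustrative sketch (not the paper's implementation): an approximate
# solve of a tiny Bayesian Stackelberg game. The leader commits to a
# mixed strategy (p, 1-p) over 2 actions; each follower type observes
# it and best-responds; the leader maximizes expected payoff under a
# prior over follower types. All numbers are hypothetical.

# payoffs[type] = (leader_matrix, follower_matrix),
# each indexed [leader_action][follower_action]
payoffs = {
    "type_A": ([[2, -1], [0, 1]], [[-1, 2], [1, -2]]),
    "type_B": ([[1, 0], [-1, 2]], [[0, 1], [2, -1]]),
}
prior = {"type_A": 0.6, "type_B": 0.4}  # leader's belief over types

def follower_best_response(p, F):
    """Column maximizing the follower's expected payoff given the
    leader's mixed strategy (p, 1-p); first argmax on ties."""
    utils = [p * F[0][j] + (1 - p) * F[1][j] for j in range(2)]
    return max(range(2), key=lambda j: utils[j])

def leader_value(p):
    """Leader's expected payoff under commitment p, averaged over
    follower types, each of which best-responds to p."""
    total = 0.0
    for t, (L, F) in payoffs.items():
        j = follower_best_response(p, F)
        total += prior[t] * (p * L[0][j] + (1 - p) * L[1][j])
    return total

# Grid search over commitments approximates the Stackelberg optimum.
grid = [i / 1000 for i in range(1001)]
best_p = max(grid, key=leader_value)
print(best_p, round(leader_value(best_p), 3))
```

In the repeated setting the paper proposes, the leader would additionally update its belief over attacker types across rounds; exact solvers typically replace the grid search with a mixed-integer or linear program per follower type.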

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Pages: 194-195
Number of pages: 2
ISBN (Electronic): 9781577357940
State: Published - Jan 1 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9 2017 - Nov 11 2017

Publication series

Name: AAAI Fall Symposium - Technical Report
Volume: FS-17-01 - FS-17-05

Other

Other: 2017 AAAI Fall Symposium
Country: United States
City: Arlington
Period: 11/9/17 - 11/11/17

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Dasgupta, P., & Collins, J. (2017). Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning. In FS-17-01: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind (pp. 194-195). (AAAI Fall Symposium - Technical Report; Vol. FS-17-01 - FS-17-05). AI Access Foundation.

@inproceedings{0a2954e96fa44d6f8bb717f196c99afa,
title = "Position paper: Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning",
abstract = "In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.",
author = "Prithviraj Dasgupta and Joseph Collins",
year = "2017",
month = "1",
day = "1",
language = "English (US)",
series = "AAAI Fall Symposium - Technical Report",
publisher = "AI Access Foundation",
pages = "194--195",
booktitle = "FS-17-01",
address = "United States",

}

TY - GEN

T1 - Position paper

T2 - Towards a repeated Bayesian Stackelberg game model for robustness against adversarial learning

AU - Dasgupta, Prithviraj

AU - Collins, Joseph

PY - 2017/1/1

Y1 - 2017/1/1

N2 - In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.

AB - In this position paper, we propose a game theoretic formulation of the adversarial learning problem called a Repeated Bayesian Stackelberg Game (RBSG) that can be used by a prediction mechanism to make itself robust against adversarial examples.

UR - http://www.scopus.com/inward/record.url?scp=85044442878&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85044442878&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85044442878

T3 - AAAI Fall Symposium - Technical Report

SP - 194

EP - 195

BT - FS-17-01

PB - AI Access Foundation

ER -