Agent sensing with stateful resources

Adam Eck, Leen-Kiat Soh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the application of multi-agent systems to real-world problems, agents often suffer from bounded rationality: their reasoning is limited by (1) a lack of knowledge about available choices and (2) a lack of the resources required for reasoning. To overcome the former, an agent uses sensing to refine its knowledge. However, sensing can itself consume limited resources, leading to inaccurate environment modeling and poor decision making. In this paper, we consider a novel and difficult class of this problem in which agents must use stateful resources during sensing, which we define as resources whose state-dependent behavior changes over time based on usage. Specifically, such sensing changes the state of a resource, and thus its behavior, producing a phenomenon where the sensing activity can and will distort its own outcome. We term this the Observer Effect, after the similar phenomenon in the physical sciences. Given this effect, the agent faces a strategic tradeoff between (1) refining its knowledge and (2) avoiding corruption of that knowledge by distorted sensing outcomes. To address this tradeoff, we use active perception to select sensing activities, modeling activity selection as a Markov decision process (MDP) solved through reinforcement learning, in which the agent optimizes knowledge refinement while accounting for the state of the resource used during sensing.
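Since the two-page abstract compresses the approach, the following minimal sketch illustrates the kind of formulation it describes: sensing-activity selection as a small MDP whose state is the (discretized) condition of a stateful sensing resource, solved with tabular Q-learning. The fatigue levels, the accuracy model, and the reward shaping below are invented for illustration and are not taken from the paper.

```python
import random
from collections import defaultdict

# Resource state: discretized "fatigue" of the sensing resource,
# from 0 (fresh) to FATIGUE_LEVELS - 1 (heavily used).
FATIGUE_LEVELS = 5
ACTIONS = ("sense", "rest")

def accuracy(fatigue):
    # Assumed Observer Effect model: each use degrades the resource,
    # so observations grow noisier as fatigue rises.
    return max(0.2, 1.0 - 0.2 * fatigue)

def step(fatigue, action):
    # Assumed MDP dynamics: sensing refines knowledge (reward scales with
    # accuracy) but risks corrupting it (penalty scales with inaccuracy)
    # and degrades the resource; resting earns nothing but restores it.
    if action == "sense":
        reward = accuracy(fatigue) - (1.0 - accuracy(fatigue))
        fatigue = min(FATIGUE_LEVELS - 1, fatigue + 1)
    else:
        reward = 0.0
        fatigue = max(0, fatigue - 1)
    return fatigue, reward

def q_learning(episodes=5000, horizon=20, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning over (fatigue, action) pairs with
    # epsilon-greedy exploration.
    Q = defaultdict(float)
    for _ in range(episodes):
        fatigue = 0
        for _ in range(horizon):
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(fatigue, a)])
            nxt, reward = step(fatigue, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(fatigue, action)] += alpha * (reward + gamma * best_next
                                             - Q[(fatigue, action)])
            fatigue = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for f in range(FATIGUE_LEVELS):
        best = max(ACTIONS, key=lambda a: Q[(f, a)])
        print(f"fatigue={f}: best action = {best}")
```

Under these assumed dynamics, the learned policy senses while the resource is fresh and rests once further use would distort outcomes more than it refines knowledge, which is the tradeoff the abstract describes.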

Original language: English (US)
Title of host publication: 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 1211-1212
Number of pages: 2
Volume: 2
State: Published - 2011
Event: 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011 - Taipei
Duration: May 2, 2011 – May 6, 2011

Keywords

  • Bounded rationality
  • Observer Effect
  • Sensing
  • Stateful resources

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Eck, A., & Soh, L-K. (2011). Agent sensing with stateful resources. In 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011 (Vol. 2, pp. 1211-1212). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).
