An integrated multilevel learning approach to multiagent coalition formation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Citations (Scopus)

Abstract

In this paper we describe an integrated multilevel learning approach to multiagent coalition formation in a real-time environment. In our domain, agents negotiate to form teams to solve joint problems. The agent that initiates a coalition shoulders the responsibility of overseeing and managing the formation process. A coalition formation process consists of two stages. During the initialization stage, the initiating agent identifies the candidates of its coalition, i.e., known neighbors that could help. The initiating agent negotiates with these candidates during the finalization stage to determine the neighbors that are willing to help. Since our domain is dynamic, noisy, and time-constrained, the coalitions are not optimal. However, our approach employs learning mechanisms at several levels to improve the quality of the coalition formation process. At a tactical level, we use reinforcement learning to identify viable candidates based on their potential utility to the coalition, and case-based learning to refine negotiation strategies. At a strategic level, we use distributed, cooperative case-based learning to improve general negotiation strategies. We have implemented the above three learning components and conducted experiments in multisensor target tracking and CPU re-allocation applications.
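
The abstract describes the two-stage process (initialization, then finalization) and the reinforcement-learning-based candidate selection only in prose. The following is a minimal, illustrative Python sketch of how an initiating agent might combine the two stages with a learned utility estimate per neighbor; it is not the authors' implementation, and all names (CoalitionInitiator, negotiate, toy_negotiate, learning_rate) are hypothetical.

# Minimal illustrative sketch (not the paper's implementation) of the two-stage
# coalition formation loop described in the abstract: the initiating agent first
# ranks known neighbors by a learned utility estimate (initialization stage),
# then negotiates with the top candidates (finalization stage) and updates the
# estimates from the negotiation outcomes.

import random

class CoalitionInitiator:
    def __init__(self, neighbors, learning_rate=0.1):
        self.neighbors = list(neighbors)
        self.alpha = learning_rate
        # Learned estimate of each neighbor's potential utility to a coalition.
        self.utility = {n: 0.0 for n in self.neighbors}

    def initialization_stage(self, k):
        """Identify coalition candidates: the k neighbors with the highest
        learned utility estimates."""
        return sorted(self.neighbors, key=lambda n: self.utility[n], reverse=True)[:k]

    def finalization_stage(self, candidates, negotiate):
        """Negotiate with each candidate; keep those willing to help and
        update utility estimates with a simple reinforcement-style rule."""
        coalition = []
        for n in candidates:
            agreed, payoff = negotiate(n)   # domain-specific negotiation
            if agreed:
                coalition.append(n)
            # Move the estimate toward the observed reward (0 if the neighbor refused).
            reward = payoff if agreed else 0.0
            self.utility[n] += self.alpha * (reward - self.utility[n])
        return coalition

# Toy usage: neighbors answer randomly; over repeated episodes candidate
# selection drifts toward neighbors that tend to agree and yield higher payoffs.
def toy_negotiate(neighbor):
    agreed = random.random() < 0.6
    return agreed, random.uniform(0.5, 1.0) if agreed else 0.0

initiator = CoalitionInitiator(neighbors=["a1", "a2", "a3", "a4"])
for _ in range(50):
    candidates = initiator.initialization_stage(k=2)
    initiator.finalization_stage(candidates, toy_negotiate)
print(initiator.utility)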

Original language: English (US)
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Pages: 619-624
Number of pages: 6
State: Published - 2003
Event: 18th International Joint Conference on Artificial Intelligence, IJCAI 2003 - Acapulco, Mexico
Duration: Aug 9 2003 – Aug 15 2003

Other

Other: 18th International Joint Conference on Artificial Intelligence, IJCAI 2003
Country: Mexico
City: Acapulco
Period: 8/9/03 – 8/15/03

Fingerprint

  • Reinforcement learning
  • Target tracking
  • Program processors
  • Experiments

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Soh, L-K., & Li, X. (2003). An integrated multilevel learning approach to multiagent coalition formation. In IJCAI International Joint Conference on Artificial Intelligence (pp. 619-624).

An integrated multilevel learning approach to multiagent coalition formation. / Soh, Leen-Kiat; Li, Xin.

IJCAI International Joint Conference on Artificial Intelligence. 2003. p. 619-624.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Soh, L-K & Li, X 2003, An integrated multilevel learning approach to multiagent coalition formation. in IJCAI International Joint Conference on Artificial Intelligence. pp. 619-624, 18th International Joint Conference on Artificial Intelligence, IJCAI 2003, Acapulco, Mexico, 8/9/03.
Soh L-K, Li X. An integrated multilevel learning approach to multiagent coalition formation. In IJCAI International Joint Conference on Artificial Intelligence. 2003. p. 619-624
Soh, Leen-Kiat ; Li, Xin. / An integrated multilevel learning approach to multiagent coalition formation. IJCAI International Joint Conference on Artificial Intelligence. 2003. pp. 619-624
@inproceedings{1b0d95127e944c158d5ccc3ae912b1a4,
title = "An integrated multilevel learning approach to multiagent coalition formation",
abstract = "In this paper we describe an integrated multilevel learning approach to multiagent coalition formation in a real-time environment. In our domain, agents negotiate to form teams to solve joint problems. The agent that initiates a coalition shoulders the responsibility of overseeing and managing the formation process. A coalition formation process consists of two stages. During the initialization stage, the initiating agent identifies the candidates of its coalition, i.e., known neighbors that could help. The initiating agent negotiates with these candidates during the finalization stage to determine the neighbors that are willing to help. Since our domain is dynamic, noisy, and time-constrained, the coalitions are not optimal. However, our approach employs learning mechanisms at several levels to improve the quality of the coalition formation process. At a tactical level, we use reinforcement learning to identify viable candidates based on their potential utility to the coalition, and case-based learning to refine negotiation strategies. At a strategic level, we use distributed, cooperative case-based learning to improve general negotiation strategies. We have implemented the above three learning components and conducted experiments in multisensor target tracking and CPU re-allocation applications.",
author = "Leen-Kiat Soh and Xin Li",
year = "2003",
language = "English (US)",
pages = "619--624",
booktitle = "IJCAI International Joint Conference on Artificial Intelligence",

}

TY - GEN

T1 - An integrated multilevel learning approach to multiagent coalition formation

AU - Soh, Leen-Kiat

AU - Li, Xin

PY - 2003

Y1 - 2003

N2 - In this paper we describe an integrated multilevel learning approach to multiagent coalition formation in a real-time environment. In our domain, agents negotiate to form teams to solve joint problems. The agent that initiates a coalition shoulders the responsibility of overseeing and managing the formation process. A coalition formation process consists of two stages. During the initialization stage, the initiating agent identifies the candidates of its coalition, i.e., known neighbors that could help. The initiating agent negotiates with these candidates during the finalization stage to determine the neighbors that are willing to help. Since our domain is dynamic, noisy, and time-constrained, the coalitions are not optimal. However, our approach employs learning mechanisms at several levels to improve the quality of the coalition formation process. At a tactical level, we use reinforcement learning to identify viable candidates based on their potential utility to the coalition, and case-based learning to refine negotiation strategies. At a strategic level, we use distributed, cooperative case-based learning to improve general negotiation strategies. We have implemented the above three learning components and conducted experiments in multisensor target tracking and CPU re-allocation applications.

AB - In this paper we describe an integrated multilevel learning approach to multiagent coalition formation in a real-time environment. In our domain, agents negotiate to form teams to solve joint problems. The agent that initiates a coalition shoulders the responsibility of overseeing and managing the formation process. A coalition formation process consists of two stages. During the initialization stage, the initiating agent identifies the candidates of its coalition, i.e., known neighbors that could help. The initiating agent negotiates with these candidates during the finalization stage to determine the neighbors that are willing to help. Since our domain is dynamic, noisy, and time-constrained, the coalitions are not optimal. However, our approach employs learning mechanisms at several levels to improve the quality of the coalition formation process. At a tactical level, we use reinforcement learning to identify viable candidates based on their potential utility to the coalition, and case-based learning to refine negotiation strategies. At a strategic level, we use distributed, cooperative case-based learning to improve general negotiation strategies. We have implemented the above three learning components and conducted experiments in multisensor target tracking and CPU re-allocation applications.

UR - http://www.scopus.com/inward/record.url?scp=84880805204&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84880805204&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84880805204

SP - 619

EP - 624

BT - IJCAI International Joint Conference on Artificial Intelligence

ER -