Matching an opponent's performance in a real-time, dynamic environment

Jeremy A. Glasser, Leen-Kiat Soh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we explore high-level, strategic learning in a real-time environment. Our long-term goal is to create a computer game that provides a continuous challenge without ever being so difficult that it discourages players or so easy that it bores them. Towards this goal, we propose an agent that observes its environment, measures its performance against the human player(s), and carries out appropriate actions to maintain that challenge. The agent also learns about its reasoning process through reinforcement. We have applied our methodology to the video game Unreal Tournament 2003. The preliminary results are encouraging.
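The performance-matching loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the agent architecture from the paper: the class, parameter names, and the signed-gap update rule are all assumptions. The idea is only that the agent measures the score gap against the player and nudges a difficulty knob in the opposite direction, with the gap acting as a reinforcement-style error signal.

```python
# Hypothetical sketch of difficulty matching; names and update rule are
# illustrative assumptions, not taken from the paper.

class DifficultyMatchingAgent:
    def __init__(self, difficulty=0.5, learning_rate=0.1):
        self.difficulty = difficulty      # 0 = trivial opponent, 1 = maximally hard
        self.learning_rate = learning_rate

    def update(self, agent_score, player_score):
        """Move difficulty toward the point where scores match.

        If the agent is outscoring the player, ease off; if the player
        is ahead, press harder. The normalized signed gap serves as the
        feedback (reward-like) signal.
        """
        gap = (agent_score - player_score) / max(agent_score + player_score, 1)
        self.difficulty -= self.learning_rate * gap
        self.difficulty = min(1.0, max(0.0, self.difficulty))  # clamp to [0, 1]
        return self.difficulty
```

In a real-time game this update would run once per observation window (e.g. per round), with scores drawn from kills, objectives, or whatever performance metric the game exposes.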

Original language: English (US)
Title of host publication: Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04
Editors: M. Kantardzic, O. Nasraoui, M. Milanova
Pages: 57-64
Number of pages: 8
State: Published - Dec 1 2004
Event: 2004 International Conference on Machine Learning and Applications, ICMLA '04 - Louisville, KY, United States
Duration: Dec 16 2004 - Dec 18 2004

Publication series

Name: Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04

Conference

Conference: 2004 International Conference on Machine Learning and Applications, ICMLA '04
Country: United States
City: Louisville, KY
Period: 12/16/04 - 12/18/04

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Glasser, J. A., & Soh, L-K. (2004). Matching an opponent's performance in a real-time, dynamic environment. In M. Kantardzic, O. Nasraoui, & M. Milanova (Eds.), Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04 (pp. 57-64). (Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04).

@inproceedings{347d52f2969f4b0691ca3e13db0746cc,
title = "Matching an opponent's performance in a real-time, dynamic environment",
abstract = "In this paper, we explore high-level, strategic learning in a real-time environment. Our long-term goal is to create a computer game that provides a continuous challenge without ever being so difficult that it discourages players or so easy that it bores them. Towards this goal, we propose an agent that observes its environment, measures its performance against the human player(s), and carries out appropriate actions to maintain that challenge. The agent also learns about its reasoning process through reinforcement. We have applied our methodology to the video game Unreal Tournament 2003. The preliminary results are encouraging.",
author = "Glasser, {Jeremy A.} and Leen-Kiat Soh",
year = "2004",
month = "12",
day = "1",
language = "English (US)",
isbn = "0780388232",
series = "Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04",
pages = "57--64",
editor = "M. Kantardzic and O. Nasraoui and M. Milanova",
booktitle = "Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04",

}

TY - GEN

T1 - Matching an opponent's performance in a real-time, dynamic environment

AU - Glasser, Jeremy A.

AU - Soh, Leen-Kiat

PY - 2004/12/1

Y1 - 2004/12/1

N2 - In this paper, we explore high-level, strategic learning in a real-time environment. Our long-term goal is to create a computer game that provides a continuous challenge without ever being so difficult that it discourages players or so easy that it bores them. Towards this goal, we propose an agent that observes its environment, measures its performance against the human player(s), and carries out appropriate actions to maintain that challenge. The agent also learns about its reasoning process through reinforcement. We have applied our methodology to the video game Unreal Tournament 2003. The preliminary results are encouraging.

AB - In this paper, we explore high-level, strategic learning in a real-time environment. Our long-term goal is to create a computer game that provides a continuous challenge without ever being so difficult that it discourages players or so easy that it bores them. Towards this goal, we propose an agent that observes its environment, measures its performance against the human player(s), and carries out appropriate actions to maintain that challenge. The agent also learns about its reasoning process through reinforcement. We have applied our methodology to the video game Unreal Tournament 2003. The preliminary results are encouraging.

UR - http://www.scopus.com/inward/record.url?scp=21244491968&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=21244491968&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:21244491968

SN - 0780388232

T3 - Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04

SP - 57

EP - 64

BT - Proceedings of the 2004 International Conference on Machine Learning and Applications, ICMLA '04

A2 - Kantardzic, M.

A2 - Nasraoui, O.

A2 - Milanova, M.

ER -