Opponent modeling

Opponent modeling is the ability to recognize and anticipate the moves and strategy of an opponent.[1][2] In gaming, the model is an abstracted description of the opponent and its strategy, based on the opponent's behavior in the game.[2] Opponent modeling is used as a method of exploiting sub-optimal opponents:[3] its high-level goal is to compute the "best" or "least exploitable" strategy against an opponent model that is consistent with observations of the opponent's behavior.[3]

Game theory

Opponent modeling has been done in various games.

In the game of Scrabble, an artificial intelligence using opponent modeling was created to play against another artificial intelligence that also used simulations but made no assumptions about the letters on other players' racks.[4] Opponent modeling was done with Bayes' theorem, inferring the likely contents of the opponent's rack from the moves the opponent chose to play. Even with this simple model, significant improvement was shown over the baseline Scrabble program, and the model can serve as a tractable substitute for the intractable partially observable Markov decision process formulation of the game.[4]
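The Bayesian update behind this kind of opponent modeling can be sketched in a few lines. This is an illustrative example only, not the actual model from the cited Scrabble paper; the hypothesis names and likelihood values below are invented for demonstration.

```python
# Bayesian opponent modeling sketch: hypotheses are hidden states of the
# opponent (e.g. possible rack contents); each observed move updates the
# posterior via Bayes' theorem: P(state | move) ∝ P(move | state) * P(state).

def bayes_update(prior, likelihoods):
    """prior: {state: P(state)}; likelihoods: {state: P(observed move | state)}."""
    unnormalized = {s: prior[s] * likelihoods[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Two hypothetical rack states with a uniform prior.
prior = {"has_blank_tile": 0.5, "no_blank_tile": 0.5}
# Observation: the opponent passed; passing is assumed far more likely
# when the rack holds no blank tile.
likelihoods = {"has_blank_tile": 0.1, "no_blank_tile": 0.6}
posterior = bayes_update(prior, likelihoods)
```

After the update, the model assigns roughly 86% probability to the opponent having no blank tile; repeating the update after each move keeps the model consistent with everything the opponent has done so far.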

Real world

Stochastic Opponent Modeling Agents (SOMA) have been proposed for reasoning about cultural groups, terror groups, and other socioeconomic-political-military organizations worldwide. SOMA has been used to model the behavior of terrorist organizations such as Hezbollah; in one study, more than 14,000 SOMA rules for Hezbollah were derived automatically.[5] Key findings concerned Hezbollah's behavior in kidnapping campaigns and transnational attacks. The condition most strongly linked to a Hezbollah kidnapping campaign was the solicitation of external support.[5] In the Middle East, kidnapping campaigns against the West and Israel were useful for raising an organization's profile, making it a more attractive candidate for support.[5] Kidnapping creates hostages who serve as bargaining chips: Hezbollah can either attempt to extract support from the hostages' nation of origin or give potential supporters the opportunity to act as an interlocutor between Hezbollah and that nation.[5] During the Lebanese Civil War, when Hezbollah's efforts to obtain external support were greater, it was more likely to curtail its kidnapping activity, possibly in response to pressure from potential supporters.[5]

Examining SOMA rules also suggested possible reasons why war did not come to the region in 2008. Early in 2007, domestic tensions between different Lebanese parties had boiled over into large-scale protests and riots, and Hezbollah has a low likelihood of engaging in transnational violence when there are major inter-organizational conflicts.[5]
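A SOMA rule can be thought of as a condition-action pair carrying bounds on the probability of the action. The sketch below assumes that simplified form; the condition names and probability bounds are illustrative placeholders, not values from the cited study.

```python
# Simplified stochastic behavior rule: "when all conditions hold, the action
# occurs with probability somewhere in [lower, upper]".
from dataclasses import dataclass

@dataclass(frozen=True)
class StochasticRule:
    action: str
    conditions: frozenset   # environment attributes that must all hold
    prob_range: tuple       # (lower, upper) probability bounds for the action

    def applies(self, situation):
        """True if every required condition is present in the situation."""
        return self.conditions <= situation

# Hypothetical rule in the spirit of the kidnapping-campaign finding.
rule = StochasticRule(
    action="kidnapping_campaign",
    conditions=frozenset({"soliciting_external_support"}),
    prob_range=(0.6, 0.8),
)

situation = frozenset({"soliciting_external_support", "intra_group_conflict"})
matches = rule.applies(situation)
```

Scanning thousands of such rules against a described situation yields the set of actions the modeled organization is likely to take, each with a probability interval rather than a point estimate.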

See also

  • Dynamic game difficulty balancing
  • Case-based reasoning
  • Game theory#Description and modeling
  • Best response
  • Nash equilibrium
  • Reinforcement learning
  • Q-learning

Sources to consider

  • SARTRE, a case-based agent for 2-player, limit Texas Hold'em: when a new problem is encountered, similar cases are retrieved from a case-base of poker hands and their solutions are adapted or re-used to solve it. By simply updating the case-base, different types of players can be modelled without relying on complicated mathematical models or algorithms.
  • Nash equilibrium as a baseline: a robust, static strategy that limits its exploitability against a worst-case opponent. A set of strategies is in equilibrium if no player can increase their expected value by unilaterally diverging from their equilibrium strategy while all other players keep their current strategies.
  • "An Experimental Approach to Online Opponent Modeling in Texas Hold'em Poker". In Gerson Zaverucha and Augusto Loureiro da Costa (eds.), Advances in Artificial Intelligence – SBIA 2008: 19th Brazilian Symposium on Artificial Intelligence, Salvador, Brazil, October 2008, Proceedings, pp. 83–92.
  • Opponent modeling in the RTS game Spring.
  • "Opponent Modeling in Deep Reinforcement Learning".


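Exploiting a modeled opponent reduces to computing a best response against the estimated strategy. A minimal sketch using rock-paper-scissors with the standard +1/0/−1 payoffs (the opponent model probabilities are invented for illustration):

```python
# Best response to a modeled mixed strategy in rock-paper-scissors.
ACTIONS = ["rock", "paper", "scissors"]
# PAYOFF[a][b] = payoff to us for playing a against the opponent's b.
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def best_response(opponent_model):
    """opponent_model: {action: estimated probability}. Returns (action, EV)."""
    evs = {
        a: sum(PAYOFF[a][b] * p for b, p in opponent_model.items())
        for a in ACTIONS
    }
    best = max(evs, key=evs.get)
    return best, evs[best]

# A sub-optimal opponent who over-plays rock is exploited by playing paper.
action, ev = best_response({"rock": 0.5, "paper": 0.25, "scissors": 0.25})
```

Against the Nash equilibrium (uniform play) every response has expected value zero, which is why equilibrium strategies are robust but forgo the extra value an accurate opponent model can extract.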
  1. Schadd, Frederik; Bakkes, Sander; Spronck, Pieter. "Opponent Modeling in Real-Time Strategy Games" (PDF). Universiteit Maastricht. Retrieved 2018-03-04.
  2. Avontuur, Tetske (December 2010). "Opponent Modelling in Wargus". Tilburg University. Retrieved 2018-03-04.
  3. Ganzfried, Sam; Sandholm, Tuomas. "Game Theory-Based Opponent Modeling in Large Imperfect-Information Games" (PDF). Carnegie Mellon University.
  4. Richards, Mark; Amir, Eyal (2007). "Opponent Modeling in Scrabble" (PDF). University of Illinois at Urbana–Champaign. Retrieved 2018-03-04.
  5. Mannes, Aaron; Michael, Mary; et al. "Stochastic Opponent Modeling Agents: A Case Study with Hezbollah". In Social Computing, Behavioral Modeling, and Prediction (PDF). Springer: 37–45.
