bnelearn.mechanism.matrix_games module

Matrix Games

class bnelearn.mechanism.matrix_games.BattleOfTheSexes(**kwargs)[source]

Bases: MatrixGame

Two player, two action Battle of the Sexes game

class bnelearn.mechanism.matrix_games.BattleOfTheSexesMod(**kwargs)[source]

Bases: MatrixGame

Modified Battle of the Sexes game

class bnelearn.mechanism.matrix_games.JordanGame(**kwargs)[source]

Bases: MatrixGame

Jordan's anticoordination game (1993), in which fictitious play (FP) does not converge. A three-player version of the Shapley fashion game. Each player's actions are (Left, Right): P1 wants to be different from P2, P2 wants to be different from P3, and P3 wants to be different from P1.

class bnelearn.mechanism.matrix_games.MatchingPennies(**kwargs)[source]

Bases: MatrixGame

Two player, two action Matching Pennies / anticoordination game

class bnelearn.mechanism.matrix_games.MatrixGame(n_players: int, outcomes: Tensor, cuda: bool = True, names: Optional[dict] = None, validate_inputs: bool = True)[source]

Bases: Game

A complete-information matrix game.

TODO: missing documentation
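A minimal construction sketch: the constructor signature is documented above, but the exact layout of the outcomes tensor is an assumption here (one utility entry per player for every joint action, i.e. shape (n_actions_p1, n_actions_p2, n_players) in the two-player case).

import torch
from bnelearn.mechanism.matrix_games import MatrixGame

# Assumed outcome layout: outcomes[a1, a2, i] = utility of player i when
# player 1 plays action a1 and player 2 plays action a2.
# Here: a Prisoner's-Dilemma-style 2x2 game, action 0 = cooperate, 1 = defect.
outcomes = torch.tensor([
    [[-1., -1.], [-3.,  0.]],
    [[ 0., -3.], [-2., -2.]],
])

game = MatrixGame(n_players=2, outcomes=outcomes, cuda=False)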

calculate_expected_action_payoffs(strategy_profile, player_position)[source]

Calculates the expected utility for a player under a mixed opponent strategy

Args:
strategy_profile: List of action-probability vectors, one for each player.

Player i's own strategy must be supplied but is ignored.

player_position: index of the player of interest

Returns:

expected payoff per action of player i (tensor of dimension 1 x n_actions[i])
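A usage sketch for the game constructed above; the call pattern follows the signature and return description here, and the probabilities are illustrative.

# Player 0's expected payoff for each of its own actions when player 1
# mixes 20/80 over its two actions. Player 0's own entry is required by
# the signature but ignored.
strategy_profile = [
    torch.tensor([0.5, 0.5]),  # player 0 (ignored)
    torch.tensor([0.2, 0.8]),  # player 1
]
expected = game.calculate_expected_action_payoffs(strategy_profile, player_position=0)
# per the return description above: shape (1, n_actions[0]), one value per own action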

get_action_name(action_id: int)[source]

Currently only works if all players have the same action set.

get_player_name(player_id: int)[source]

Returns the readable name of the player, if one was provided.

play(action_profile)[source]

Plays the game for a given action_profile.

Parameters

action_profile: torch.Tensor

Shape: (batch_size, n_players, n_items). n_items should be 1 for now (this might change in the future to represent information sets). Actions should be integer indices. # TODO: implement support for action names as well.

Mixed strategies are NOT allowed as input; sampling should happen in the player class.

Returns

(allocation, payments): Tuple[torch.Tensor, torch.Tensor]
allocation: tensor of dimension (n_batches x n_players x n_items)

In this setting, there is nothing to be allocated, so it will be all zeros.

payments: tensor of dimension (n_batches x n_players)

Negative outcome/utility for each player.
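A usage sketch with the game from above, assuming action indices 0 and 1 refer to the first and second action:

# Two pure action profiles in one batch: shape (batch_size=2, n_players=2, n_items=1)
action_profile = torch.tensor([
    [[0], [1]],  # player 0 plays action 0, player 1 plays action 1
    [[1], [1]],  # both players play action 1
])
allocation, payments = game.play(action_profile)
# allocation is all zeros; payments holds the negative utilities, shape (2, 2)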

play_mixed(strategy_profile: List[Tensor], validate: Optional[bool] = None)[source]

Plays the game with mixed strategies, returning expectation of outcomes.

This version does NOT support batches or multiple items, as batches do not make sense in this setting: we are already returning expectations.

Parameters

strategy_profile: List[torch.Tensor]. A list of strategies for each player.

Each element i should be a 1-dimensional torch tensor of length n_actions[i] with entries j = P(player i plays action j).

validate: bool. Whether to validate inputs. Defaults to the setting in the game class.

(Validation roughly doubles the runtime, so you may want to turn it off in settings with very many iterations.)

Returns

(allocation, payments): Tuple[torch.Tensor, torch.Tensor]

allocation: empty tensor of dimension (0), not used in this game.

payments: tensor of dimension (n_players)

Negative expected outcome/utility for each player.
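A usage sketch for the same game; the probabilities are illustrative.

# One 1-d tensor per player, entry j = P(player i plays action j).
profile = [torch.tensor([0.5, 0.5]), torch.tensor([0.2, 0.8])]
allocation, payments = game.play_mixed(profile, validate=True)
# allocation is an empty tensor; payments has shape (n_players,) and holds
# the negative expected utilities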

class bnelearn.mechanism.matrix_games.PaulTestGame(**kwargs)[source]

Bases: MatrixGame

A three-player game without many symmetries, used for testing n-player tensor implementations. Payoffs: [M, R, C]

class bnelearn.mechanism.matrix_games.PrisonersDilemma(**kwargs)[source]

Bases: MatrixGame

Two player, two action Prisoner's Dilemma game. Has a unique pure-strategy Nash equilibrium at action profile [1, 1].
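A small sketch playing that equilibrium profile, assuming the keyword arguments are forwarded to MatrixGame and that action index 1 corresponds to Defect:

import torch
from bnelearn.mechanism.matrix_games import PrisonersDilemma

pd = PrisonersDilemma(cuda=False)      # assumes **kwargs are passed through to MatrixGame
profile = torch.tensor([[[1], [1]]])   # one batch element: both players play action 1
_, payments = pd.play(profile)         # payments are the negative utilities at [1, 1]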

class bnelearn.mechanism.matrix_games.RockPaperScissors(**kwargs)[source]

Bases: MatrixGame

Two player, three action Rock-Paper-Scissors game