The Experiments

The first data set was obtained from an experiment conducted in the summer of 1994 [see HSW (1996)]. The second data set is from an experiment conducted in the summer of 1997.

In the first experiment, 15 symmetric three-by-three game matrices were presented to participants in a laboratory setting. Only five of these games had a distinct pure row strategy for each of the three level-n types considered, and only data on these five games will be analyzed in this paper.

In the second experiment, 24 symmetric three-by-three games were selected that satisfied the following properties: Each game matrix had a unique pure NE strategy, a unique level-1 strategy, and a unique level-2 strategy, all distinct from one another. Each matrix cell contained a number from 0 to 99, and each matrix had exactly one cell containing 0 and one cell containing 99. For game matrices 1 to 19, the hypothetical token payoff of any of the above types for choosing his best response strategy was at least 20 tokens greater than his hypothetical payoff to any other strategy. Game matrices 20 to 24 were identical to the five game matrices analyzed from the first experiment.
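The selection criteria above can be checked mechanically. As a minimal sketch, the following identifies the level-1 strategy (best response to a uniform prior over the opponent's strategies), the level-2 strategy (best response to a level-1 opponent), and any pure symmetric NE strategy (a best response to itself) for a hypothetical payoff matrix; the matrix is invented for illustration and is not one of the actual experimental games.

```python
import numpy as np

# Hypothetical symmetric 3x3 payoff matrix (row player's token payoffs),
# invented for illustration -- not one of the actual experimental games.
# It contains exactly one 0 and one 99, as the selection criteria require.
U = np.array([
    [75, 40, 45],
    [70,  0, 99],
    [40, 60, 30],
])

# Level-1 best responds to a uniform prior over the opponent's strategies,
# i.e., maximizes the average payoff across each row.
level1 = int(np.argmax(U.mean(axis=1)))

# Level-2 best responds to an opponent who plays the level-1 strategy.
level2 = int(np.argmax(U[:, level1]))

# A pure symmetric NE strategy s is a best response to itself:
# U[s, s] must be the maximum of column s.
ne = [s for s in range(3) if U[s, s] == U[:, s].max()]

print(level1, level2, ne)  # -> 1 2 [0]: the three types are distinct
```

A matrix qualifies for the second experiment when `ne` has exactly one element and the three identified strategies differ, as they do here.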

In both experiments, each decision matrix was presented on a computer screen, and each player played all of the games at a computer terminal with no feedback until every game had been played.

The amount of time allocated for the players to make a choice was two minutes per game in the first experiment and a minute and a half per game in the second experiment. Within the total time allotted, a player could revisit any game and revise his choice for that game. This feature was intended to increase the likelihood that a participant's observed behavior came from a single model of other players. In the first experiment, an average of 0.768 choice revisions was recorded per player per game; in the second experiment, the average was 2.0625 revisions per player per game. This indicates that players made good use of the option of revising their hypotheses.

In the first experiment, a participant, looking at the decision matrix as a row player, would enter a choice, using an on-screen calculator to assist him in making that choice. In the second experiment, each participant was required to enter, for each game, a hypothesis on the distribution of the other participants' choices. The computer would then calculate and display the hypothetical payoff to each pure strategy, highlight the row strategy yielding the highest token payoff, and choose that strategy for the player.

To determine participant payoffs, after all games were played, we first computed each participant's "token earnings" for each game as $U_{is} \cdot P_{i,-h}$, where $P_{i,-h}$ denotes the empirical distribution of the choices of all participants other than participant $h$ in game $i$, and $U_{is}$ is the payoff vector for the participant's chosen strategy $s$ in game $i$. Token earnings were then translated game-by-game into the percentage chance of winning \$2.00 for that game via the roll of three ten-sided dice, which generated a random number uniformly distributed on [0, 99.9] for each game.
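The earnings computation above amounts to an inner product of the chosen row of the payoff matrix with the empirical frequency of the other participants' choices. A minimal sketch, using an invented payoff matrix and an invented set of choices by 21 other participants (the group sizes and numbers here are hypothetical, not taken from the experimental data):

```python
import numpy as np

# Hypothetical payoff matrix for one game, invented for illustration.
U = np.array([
    [75, 40, 45],
    [70,  0, 99],
    [40, 60, 30],
])

# Invented choices (strategy indices) of the 21 *other* participants
# in this game.
others = np.array([0] * 10 + [1] * 7 + [2] * 4)

# Empirical distribution P_{i,-h} over the others' choices.
p = np.bincount(others, minlength=3) / len(others)

s = 0  # participant h's chosen row strategy in game i
tokens = U[s] @ p  # token earnings U_{is} . P_{i,-h}

# Token earnings map game-by-game to the percent chance of winning $2.00:
# three ten-sided dice give a uniform draw on [0, 99.9], and the
# participant wins if the draw falls below his token score.
win_prob = tokens / 100
print(round(tokens, 2), win_prob)
```

Since payoffs lie in [0, 99], token earnings translate directly into a winning probability between 0 and 0.99 under the dice mechanism.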

Extensive instructions were given before the start of the experiment to ensure that all participants understood the computer interface and how their payoffs would be determined. Following a training period, each participant was given a screening test designed to ensure common knowledge among all players that all other players understood the basics of the game.

In the first experiment, three sessions were run with groups of 22, 15, and 21 participants respectively, for a total of 58 participants. The second experiment consisted of a single session with 22 players. The participants were upper-division business, engineering, social science, and natural science students at the University of Texas. The average payment per participant was $27.64 for a two-and-a-half-hour session in the first experiment and $22.64 for a one-and-a-half-hour session in the second experiment.