Welcome to the World of Rule Learning

Rule learning is an approach to learning in games in which the objects being learned are behavioral rules, such as "choose a best response to the empirical distribution of play" or "choose a Nash equilibrium strategy". The learning dynamics are driven by the law of effect: rules that perform well become more likely to be used in the future, and rules that perform poorly become less likely. In contrast, standard models of learning apply the law of effect only to actions, which precludes players from becoming more sophisticated (or anticipatory) over time. Our papers on rule learning (see previous page) present statistical tests that demonstrate the importance of rule learning in explaining experimental data.
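As a rough illustration of this dynamic (and not of the full population rule learning model described below), the following sketch applies law-of-effect updating to the propensities of two hypothetical rules, reinforcing each in proportion to the payoff it earns. The rule set, payoff values, and update scheme are assumptions made purely for illustration.

   ! A minimal sketch of law-of-effect updating over rules, not the
   ! authors' estimation code; rules, payoffs, and constants are hypothetical.
   program rule_learning_sketch
     implicit none
     integer, parameter :: nrules = 2           ! e.g. best-response rule, Nash rule
     real    :: propensity(nrules), prob(nrules), payoff(nrules)
     integer :: k, t

     propensity = 1.0                           ! uniform initial propensities
     do t = 1, 100
        prob = propensity / sum(propensity)     ! choice probabilities over rules
        payoff = (/ 0.6, 0.4 /)                 ! placeholder payoffs from play
        do k = 1, nrules
           ! law of effect: rules that earn more are reinforced more
           propensity(k) = propensity(k) + prob(k) * payoff(k)
        end do
     end do
     print *, 'long-run rule probabilities:', propensity / sum(propensity)
   end program rule_learning_sketch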

One shortcoming of the rule learning approach to date has been its computational complexity. To make the approach more accessible to other researchers, we have devised a version of Population Rule Learning and a computationally efficient Fortran algorithm that estimates the population rule learning model on experimental data. From this page you may access a paper entitled "Population Rule Learning in Symmetric Normal-Form Games: The Model and Estimation Algorithm", which, as the title suggests, describes the theoretical model and the computational methods.

You may also access the Fortran code that estimates this population rule learning model on data. The first 80 lines of that code consist of comments explaining how to structure your data for the algorithm and how to use it. Since this is a population learning model, your data are assumed to consist of a number of sessions, each with a number of runs, where a "run" is a sequence of periods in which the participants played one symmetric normal-form game using the mean-matching protocol with population feedback after each period; a sketch of this nested organization appears below.
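As a rough guide to what such data look like, the sketch below holds the nested session/run/period structure in memory and forms the population feedback (the empirical distribution of actions) after each period. All array names, dimensions, and placeholder values are hypothetical; the authoritative description of the input format is the comment block at the top of the distributed Fortran code.

   ! A minimal sketch, not the distributed code: nested session/run/period
   ! data and the population feedback computed after each period.
   program data_structure_sketch
     implicit none
     integer, parameter :: nsess = 2, nruns = 3, nper = 10, npop = 8, nact = 2
     integer :: choice(nsess, nruns, nper, npop)  ! action chosen by each subject
     real    :: freq(nact)                        ! population feedback vector
     integer :: s, r, t, i, a

     choice = 1                                   ! placeholder data only
     do s = 1, nsess
        do r = 1, nruns
           do t = 1, nper
              freq = 0.0
              do i = 1, npop
                 a = choice(s, r, t, i)
                 freq(a) = freq(a) + 1.0
              end do
              freq = freq / real(npop)            ! empirical action frequencies
           end do
        end do
     end do
     print *, 'feedback after the last period:', freq
   end program data_structure_sketch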

To further help you in using this code, you may access the control file (rlrn.ctl) and the corresponding data files (choice.data and game_matrix.data) on which the code has been tested. These experimental data are discussed in the linked paper and were used in "A Horse Race Among Action Reinforcement Learning Models", March 1999.

To view the paper on the rule-learning model and estimation algorithm, click here.

To download the rule-learning Fortran code, you need a login and a password. Choose them and let us know by registering your information. Your login and password will be activated shortly thereafter. Once you have a password, you can download the code here.