Closing Remarks

Throughout this tutorial, we have played with a specific yet interesting simulation scenario. We imagined a toroidal planet populated by countries that are either cooperative but tough (i.e., employing the Tit-for-Tat strategy) or just plain mean to their neighbors (i.e., employing the All-D strategy). From time to time, these countries learn from each other and adopt the strategies of successful neighbors. Our conclusions are quite similar to those of "The Evolution of Cooperation", and may be applied, with some modifications, to a variety of real-world scenarios.

Because of the perpetual learning process, the situation is static only when all the countries have adopted the same strategy. Furthermore, a planet of Tit-for-Tats will do a whole lot better than a planet of meanies: each country's payoff will be three times as great. This brings us to the question: how can we tip the balance in favor of the all-cooperation outcome?
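To see where the factor of three comes from, assume the standard Prisoner's Dilemma payoffs used in discussions of this kind: 3 points per round for mutual cooperation and 1 point per round for mutual defection. The tiny Python sketch below (the ten-round encounter length is purely illustrative) spells out the arithmetic for a planet of Tit-for-Tats versus a planet of All-Ds.

    # Assumed per-round payoffs: 3 for mutual cooperation, 1 for mutual defection.
    R, P = 3, 1
    rounds = 10                      # illustrative encounter length

    tft_planet = R * rounds          # every pairing is TFT vs TFT: cooperation throughout
    alld_planet = P * rounds         # every pairing is All-D vs All-D: defection throughout

    print(tft_planet, alld_planet, tft_planet / alld_planet)   # 30 10 3.0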

First of all, it became clear at the end of the second topic that a stronger shadow of the future always makes Tit-for-Tats do relatively better: an All-D country can exploit a Tit-for-Tat neighbor only on their first move, so the longer an encounter is expected to last, the smaller that one-time gain becomes compared to the steady rewards of mutual cooperation. On the other hand, we saw that even for a fixed shadow-of-the-future parameter, varying the degree of geographic constraint can have a strong effect on the final outcome.
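As a rough illustration of the first point, the sketch below treats the shadow of the future simply as the number of rounds two neighbors play before the next learning step, and uses the standard payoff set T=5, R=3, P=1, S=0; both choices are assumptions for illustration, not the tutorial's exact parameters. It shows that All-D's advantage over a Tit-for-Tat opponent is a fixed one-time gain, while the reward for sustained mutual cooperation keeps growing with the length of the encounter.

    # Assumed payoffs: T = 5 (temptation), R = 3 (reward), P = 1 (punishment), S = 0 (sucker).
    T, R, P, S = 5, 3, 1, 0

    def tft_vs_alld(rounds):
        """Total payoffs when Tit-for-Tat meets All-D for a given number of rounds."""
        tft = S + (rounds - 1) * P     # exploited on the first move, mutual defection after
        alld = T + (rounds - 1) * P    # one-time temptation payoff, mutual defection after
        return tft, alld

    for rounds in (1, 5, 20):
        tft, alld = tft_vs_alld(rounds)
        # All-D's lead is always T - S = 5 points, while two Tit-for-Tats
        # would earn R * rounds each over the same encounter.
        print(rounds, tft, alld, R * rounds)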

In particular, stronger geographic constraints always make the all-cooperation outcome more likely. This holds both when an All-D minority is injected into a population of Tit-for-Tats and when the roles are reversed, so that the invading minority employs the Tit-for-Tat strategy instead. The reason is that in order to do well, Tit-for-Tats have to stick together, accumulate large payoffs, and thus serve as strong role models for the neighboring agents.
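Finally, here is a minimal sketch of the overall dynamic, under assumptions of our own choosing: a square grid with wrap-around (toroidal) neighborhoods, the standard payoffs, ten rounds per encounter, a 10% All-D minority, and an imitation rule in which each country copies the strategy of its highest-scoring neighbor. None of these details are taken from the tutorial's own implementation; the point is only to show how clustered Tit-for-Tats accumulate large payoffs and spread.

    import random

    # Illustrative parameters, not taken from the tutorial's implementation.
    T, R, P, S = 5, 3, 1, 0
    SIZE, ROUNDS = 20, 10

    def pair_payoff(a, b, rounds):
        """Total payoff to strategy a when it meets strategy b for `rounds` rounds."""
        if a == 'TFT' and b == 'TFT':
            return R * rounds
        if a == 'ALLD' and b == 'ALLD':
            return P * rounds
        if a == 'TFT':                       # TFT vs All-D: exploited once, then mutual defection
            return S + (rounds - 1) * P
        return T + (rounds - 1) * P          # All-D vs TFT

    def neighbors(i, j):
        """The four wrap-around (toroidal) neighbors of cell (i, j)."""
        return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
                (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

    # Start with a 10% All-D minority injected into a population of Tit-for-Tats.
    grid = [['ALLD' if random.random() < 0.1 else 'TFT' for _ in range(SIZE)]
            for _ in range(SIZE)]

    for _ in range(50):                      # learning cycles
        score = [[sum(pair_payoff(grid[i][j], grid[x][y], ROUNDS)
                      for x, y in neighbors(i, j))
                  for j in range(SIZE)] for i in range(SIZE)]
        # Imitation: each country adopts the strategy of its highest-scoring
        # neighbor, keeping its own strategy if no neighbor does better.
        new_grid = [[grid[i][j] for j in range(SIZE)] for i in range(SIZE)]
        for i in range(SIZE):
            for j in range(SIZE):
                best_i, best_j = i, j
                for x, y in neighbors(i, j):
                    if score[x][y] > score[best_i][best_j]:
                        best_i, best_j = x, y
                new_grid[i][j] = grid[best_i][best_j]
        grid = new_grid

    print(sum(row.count('TFT') for row in grid), "Tit-for-Tat countries remain")

In this kind of sketch, restricting interaction and imitation to a small neighborhood is what lets a cooperative cluster protect its interior members from exploitation while its high-scoring border converts the defectors next door.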
