When what we want as individuals clashes with what is best for the group, we have a social dilemma. How can we overcome these dilemmas and encourage people to cooperate, even when they have reason not to? In a paper released today in Nature, Christian Hilbe and Krishnendu Chatterjee of the Institute of Science and Technology Austria (IST Austria), together with Martin Nowak of Harvard and Stepan Simsa of Charles University, have shown that if the social dilemma individuals face depends on whether or not they work together, cooperation can triumph. The finding rests on a new type of framework they introduce, one that extends the entire theory of repeated games. Moreover, because their work pinpoints the conditions that best foster cooperation, it provides tools for building cooperation systematically.
The tragedy of the commons: if we can (ab)use a public good without seeing negative consequences, we will, without consideration of others or the future. We see examples of this in our daily lives, from climate change and forest depletion down to the stack of dirty dishes in the office kitchen. In game theory, scientists have used repeated games, in which individuals face the same social dilemma over and over, to understand when individuals choose to cooperate, i.e., which strategies they adopt. However, these games have always kept the value of the public resource constant, no matter how players acted in the previous round, which does not reflect the reality of the situation.
In their new framework, Hilbe, Simsa, Chatterjee, and Nowak consider repeated games in which cooperation affects not only the players’ present payoffs, but also which game the players face in the next round. “Repeated games have been studied intensely for over 40 years, and significant new developments are rare — especially such simple ones,” says Martin Nowak. “This addition actually extends the whole theory of repeated games, as a fixed environment is a special case of our new framework.”
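To make the idea concrete, here is a minimal sketch of such a game in Python. Everything in it (a donation-game payoff, the benefit levels b_rich and b_degraded, the cost of cooperating, and the rule that only mutual cooperation keeps or restores the rich state) is an illustrative assumption for this article, not the setup or the parameters used in the paper.

```python
# A minimal sketch of a two-state repeated game in the spirit of the framework:
# the benefit produced by the shared resource depends on the current state, and
# the state in the next round depends on what the players did in this one.
# All parameters below are illustrative assumptions, not values from the paper.

def donation_payoff(benefit, cost, my_move, other_move):
    """One round of a donation game: cooperating costs `cost` and gives `benefit` to the partner."""
    return (benefit if other_move == "C" else 0.0) - (cost if my_move == "C" else 0.0)

def play(strategy1, strategy2, rounds=1000, b_rich=2.0, b_degraded=1.2, cost=1.0):
    """Simulate two strategies in the two-state game; return average payoffs per round."""
    state = "rich"
    last = ("C", "C")  # assume a cooperative opening move
    total1 = total2 = 0.0
    for _ in range(rounds):
        benefit = b_rich if state == "rich" else b_degraded
        m1 = strategy1(last[0], last[1], state)
        m2 = strategy2(last[1], last[0], state)
        total1 += donation_payoff(benefit, cost, m1, m2)
        total2 += donation_payoff(benefit, cost, m2, m1)
        # Environmental feedback: only mutual cooperation keeps (or restores) the rich state.
        state = "rich" if (m1, m2) == ("C", "C") else "degraded"
        last = (m1, m2)
    return total1 / rounds, total2 / rounds

def tit_for_tat(my_last, other_last, state):
    return other_last

def always_defect(my_last, other_last, state):
    return "D"

if __name__ == "__main__":
    print("TFT vs TFT: ", play(tit_for_tat, tit_for_tat))
    print("TFT vs AllD:", play(tit_for_tat, always_defect))
```

Running the script pits tit-for-tat against itself and against an unconditional defector: because any defection pushes the game into the low-benefit state, a defector erodes not only its partner’s future payoffs but its own as well.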
When they explored the new model, the scientists found that making the environment depend on the players’ actions could greatly increase the chance that players cooperate, provided the right conditions were in place. “Our framework shows which kinds of feedback are most likely to lead to cooperation,” says first author Christian Hilbe. These include, for instance, how quickly the resource degrades or how easy it is to return to a more valuable state. “Using this knowledge, you can design systems that maximize cooperation, or create an environment that encourages people to work together,” he adds. A business, for example, could apply these ideas to build a workplace in which cooperation is the natural choice.
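To show how parameters like these could enter such a calculation, here is a rough, hypothetical one-shot-deviation estimate, not taken from the paper: suppose a player defects once against a partner who keeps cooperating, so the only punishment is environmental. The resource then degrades with probability q_degrade and, during later rounds of mutual cooperation, recovers with probability q_recover per round; with discount factor delta, the defection fails to pay whenever the expected future loss exceeds the one-round cost it saves. All symbols and numbers are assumptions chosen for illustration.

```python
# Illustrative one-shot-deviation check (hypothetical parameters, not the paper's
# analysis): does a single defection pay when the only consequence is that the
# shared resource degrades and takes time to recover?

def defection_loss(q_degrade, q_recover, delta=0.9, b_rich=2.0, b_degraded=1.2):
    """Expected discounted future loss from one defection, assuming both players
    cooperate in every later round and the benefit is b_degraded while degraded."""
    # With prob. q_degrade the state degrades; each later cooperative round it
    # recovers with prob. q_recover, so it is still degraded t rounds later with
    # prob. (1 - q_recover)**(t - 1). Summing the resulting geometric series:
    return q_degrade * (b_rich - b_degraded) * delta / (1 - delta * (1 - q_recover))

if __name__ == "__main__":
    cost = 1.0  # the one-round cost of cooperating that a defector saves
    for q_degrade in (0.3, 0.6, 1.0):
        for q_recover in (0.1, 0.5, 1.0):
            loss = defection_loss(q_degrade, q_recover)
            verdict = "cooperation holds" if loss > cost else "defection pays"
            print(f"q_degrade={q_degrade:.1f} q_recover={q_recover:.1f} "
                  f"future loss={loss:.2f} -> {verdict}")
```

In this toy calculation, faster degradation and slower recovery make a lone defection more costly. The full framework also lets players react strategically to one another, which this sketch deliberately leaves out.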
The new research project also demonstrates how collaboration between fields of research can yield valuable results. “Working with computer scientists has been extremely rewarding for me as a biologist,” adds Nowak. “The tools and perspectives they bring with them have had and will have a significant impact on what we can do.”