An Overview of Prisoner's Dilemma Research
For a single PD encounter, the obvious rational strategy is to Defect, even though mutual Defection leaves both Agents worse off than mutual Cooperation would. The DU (Defect Unconditionally) strategy has been shown to be a stable decision in this type of situation.
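To make the dilemma concrete, here is a minimal sketch in Python using the conventional illustrative payoffs T=5, R=3, P=1, S=0 (the exact numbers are an assumption for illustration, not tied to any particular study). It shows that Defect earns more than Cooperate against either possible reply, which is why DU is stable in a one-shot game:

    # One-shot PD with conventional illustrative payoffs T=5, R=3, P=1, S=0.
    # PAYOFF[(my_move, their_move)] -> my score; 'C' = Cooperate, 'D' = Defect.
    PAYOFF = {
        ('C', 'C'): 3,  # R: reward for mutual Cooperation
        ('C', 'D'): 0,  # S: sucker's payoff
        ('D', 'C'): 5,  # T: temptation to Defect
        ('D', 'D'): 1,  # P: punishment for mutual Defection
    }

    for their_move in ('C', 'D'):
        print(f"Opponent plays {their_move}: "
              f"Cooperate earns {PAYOFF[('C', their_move)]}, "
              f"Defect earns {PAYOFF[('D', their_move)]}")
    # Defect pays more against either reply (5 > 3 and 1 > 0), so it strictly
    # dominates, even though (D, D) leaves both players worse off than (C, C).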
The rational choice to Cooperate only becomes available in the Iterated PD (IPD). When Agents know that they will encounter each other again, they perceive the efficacy of mutual Cooperation. The goal of PD research has been to determine the paths by which Cooperative or altruistic behavior could emerge from the selfish motives of individuals. We know that this is possible, for we live in a society based on cooperation, and we benefit from it. How did we make that transition? There is no definitive answer yet.
An Infinite or Indefinite IPD is more likely to promote Cooperative behavior than a Finite IPD. Even after a long series of beneficial, mutually Cooperative encounters, there will still be a strong temptation to Defect in the final encounter. An Agent anticipating this final Defect is him/herself tempted to Defect beforehand. A more colorful way to describe this motivation is the Shadow of the Future. If the Shadow is long and heavy, the incentive is to Cooperate. As the Shadow becomes shorter, the temptation to Defect grows. An Infinite or Indefinite IPD has an infinitely long Shadow.
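The Shadow of the Future is commonly formalized as a continuation probability w: after each encounter the game continues with probability w. The following hedged sketch, reusing the illustrative payoffs above, compares Cooperating forever against Defecting now and facing mutual Defection thereafter (assuming a retaliatory partner):

    # Shadow of the Future as a continuation probability w.
    T, R, P = 5, 3, 1  # illustrative payoffs as above

    def discounted_total(per_round, w):
        """Expected total of a constant per-round payoff: x + wx + w^2 x + ..."""
        return per_round / (1 - w)

    for w in (0.3, 0.6, 0.9):
        cooperate_forever = discounted_total(R, w)    # R every round
        defect_now = T + w * discounted_total(P, w)   # T once, then P forever
        verdict = "Cooperate" if cooperate_forever > defect_now else "Defect"
        print(f"w={w}: cooperate={cooperate_forever:.2f}, "
              f"defect={defect_now:.2f} -> {verdict}")
    # Break-even falls at w = (T - R) / (T - P) = 0.5 with these payoffs:
    # a longer Shadow (larger w) makes Cooperation the better gamble.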
Various PD decision-making strategies have competed in many computer IPD tournaments and simulations, each strategy vying to maximize its gain. One of the consistent winners has been TFT (Tit for Tat). It punishes Defection and rewards Cooperation. It has a salutary effect in that, if your opponent plays TFT, then your best strategy is CU (Cooperate Unconditionally). A DU strategy has no chance of obtaining substantial gain over TFT. As long as TFT's first choice is Cooperate, it plays well with itself, always Cooperating. TFT has become the standard by which all other strategies are evaluated, including its own variants. But it is not perfect. TFT will never allow a large disparity to grow between itself and another strategy, but by the same token it can never surpass another strategy in terms of gain; at best it ties.
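TFT itself is only a few lines. The sketch below pairs it with an illustrative match-runner of my own (not any tournament's actual harness) to show TFT Cooperating with itself and conceding only the first round to DU:

    # TFT: Cooperate on the first move, then echo the opponent's previous move.
    def tit_for_tat(opp_history):
        return 'C' if not opp_history else opp_history[-1]

    def defect_unconditionally(opp_history):
        return 'D'

    def play(strat_a, strat_b, rounds=6):
        """Illustrative runner: each strategy sees only the other's past moves."""
        a_moves, b_moves = [], []
        for _ in range(rounds):
            a = strat_a(b_moves)
            b = strat_b(a_moves)
            a_moves.append(a)
            b_moves.append(b)
        return ''.join(a_moves), ''.join(b_moves)

    print(play(tit_for_tat, tit_for_tat))            # ('CCCCCC', 'CCCCCC')
    print(play(tit_for_tat, defect_unconditionally)) # ('CDDDDD', 'DDDDDD')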
TFT's efficacy depends on the strategies of those Agents against whom it plays. It won the first two rounds of international competition held through computer simulations, but those fields contained a highly disparate population of strategies. TFT does not fare as well in other, more homogeneous, environments. All strategies are subject to this constraint - they will perform differently when confronting different opponents/partners.
Also, just what is "success"? In computer simulations, success is the amount of gain a strategy is able to acquire through multiple encounters. This is the immediate goal for the individual Agents. But the larger purpose of this research is to explain how altruistic, ethical behavior can arise. This is another aspect of success.
An Asynchronous IPD, in which the Agents alternate moves rather than choosing simultaneously, does not remove the dilemma of the rational temptation to Defect. The Agent with the final move will have a powerful motivation to Defect. The other Agent will anticipate this temptation and will therefore be tempted to Defect beforehand. Both Agents, reasoning by backward induction, will face a temptation to Defect at the very first encounter.
Any strategy playing without a "memory", without remembering some aspect of previous encounters, reduces an IPD to a string of independent one-time PDs. A strategy without memory cannot progress beyond the rational temptation to Defect.
The performance of TFT can deteriorate in several ways. If TFT is imperfect in some way (say, either TFT Agent makes a mistaken move 1% of the time), its performance drops dramatically. All it takes is for one Agent to make a mistaken Defect, and then the two imperfect TFTs (ITFTs) will be locked into a long exchange of retaliatory Defections, to the detriment of both. If there is any possibility of error, then a TFT strategy is greatly improved if it is somewhat forgiving. If TFT can just overlook one mistaken Defection, as Tit for Two Tats (TF2T) does, then the vicious cycle can be broken. But such generosity is only effective when the error rate is small.
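A sketch of that fragility follows; the 1% error model and payoffs are assumptions for illustration. Strict TFT echoes a mistaken Defection back and forth, while TF2T, which Defects only after two consecutive opposing Defections, absorbs a single slip:

    import random

    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tft(opp):
        return 'C' if not opp else opp[-1]

    def tf2t(opp):
        # Defect only after two consecutive opposing Defections.
        return 'D' if opp[-2:] == ['D', 'D'] else 'C'

    def noisy_self_play(strategy, rounds=500, error=0.01, seed=2):
        """Average per-round score when a strategy plays itself with noisy moves."""
        rng = random.Random(seed)
        flip = lambda m: ('D' if m == 'C' else 'C') if rng.random() < error else m
        h1, h2, score = [], [], 0
        for _ in range(rounds):
            m1, m2 = flip(strategy(h2)), flip(strategy(h1))
            score += PAYOFF[(m1, m2)]
            h1.append(m1)
            h2.append(m2)
        return score / rounds

    print("TFT  vs itself, 1% noise:", noisy_self_play(tft))
    print("TF2T vs itself, 1% noise:", noisy_self_play(tf2t))
    # TFT's average typically sags well below the mutual-Cooperation value of 3
    # once an error starts an echo of retaliation; TF2T stays close to 3.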
Evolution affects the performance of strategies. Many simulations have made it their goal to find the single optimal strategy that succeeds over all others. This may not be possible. The final equilibrium may not be a single strategy, but a mix of strategies, each in different proportions. Or there may not even be a final equilibrium: there may be repeating cycles in which some strategies temporarily dominate, only to be supplanted by others, which are in turn overthrown by yet others. Or there may never be phases of even relative equilibrium - some simulations show wildly fluctuating, chaotic mixes of strategies. A single, unchanging final state may be unachievable. And the goal of a static equilibrium is even more elusive when a few mutant strategies are periodically inserted into the population.
Many simulations showed a cycle like this:
1) DU flourishes, taking advantage of forgiving strategies like CU. CU strategies nearly go extinct, but TFT strategies maintain themselves against DU.
2) The enclave of TFT Agents begins to flourish. In encounters with DU, both sides suffer the losses of mutual Defection. But encounters among TFT Agents themselves are mutually Cooperative, so the TFT Agents gain strength.
3) TFT finally dominates the mix of strategies, and this mix has some measure of stability. However, the nearly extinct CU strategies can now be productive. When CU strategies encounter TFT strategies, CU prospers as much as TFT does. Among themselves, the CU strategies actually gain a slight benefit, because they never fall into the destructive pattern of mutual Defection that ITFTs are subject to.
4) CU now prospers above all other strategies, as CU Agents reinforce each other more effectively than any other strategy. TFT begins to wane. But a CU environment is the perfect setting for a rogue DU strategy, which can now take advantage of nearly every Agent it encounters.
5) Preying upon the many CU's, like wolves upon sheep, the DU's once again dominate the mix of strategies, returning to step #1.
Sometimes this progression follows a different path at step #3. If the TFT strategies are forgiving, then they are no longer liable to the series of mutual Defections to which unforgiving ITFTs are prone. These generous TFTs (GTFTs) dominate the mix of strategies, and this forms a more stable equilibrium.
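The machinery behind such runs is typically some form of replicator dynamics with a trickle of mutants. The sketch below is illustrative scaffolding only, with an assumed payoff table standing in for long, slightly noisy IPD matches (TFT-vs-TFT is shaved just below 3, the wedge that lets CU creep ahead at step #3); it is not a reproduction of any published simulation:

    # Replicator dynamics with mutation over {DU, TFT, CU}.
    # PAY[row][col] = assumed average per-round payoff to the row strategy
    # against the column strategy in a long, slightly noisy IPD. Illustrative.
    PAY = {
        'DU':  {'DU': 1.00, 'TFT': 1.02, 'CU': 5.0},
        'TFT': {'DU': 0.98, 'TFT': 2.90, 'CU': 3.0},  # noise shaves TFT-vs-TFT
        'CU':  {'DU': 0.00, 'TFT': 3.00, 'CU': 3.0},
    }
    MUTATION = 0.01  # keeps every strategy from going fully extinct

    def generation(shares):
        """One replicator step: grow shares by relative fitness, mix in mutants."""
        fitness = {s: sum(PAY[s][t] * shares[t] for t in shares) for s in shares}
        mean = sum(shares[s] * fitness[s] for s in shares)
        grown = {s: shares[s] * fitness[s] / mean for s in shares}
        return {s: (1 - MUTATION) * grown[s] + MUTATION / len(shares)
                for s in shares}

    shares = {'DU': 0.05, 'TFT': 0.05, 'CU': 0.90}  # start in a CU-rich world
    for gen in range(301):
        if gen % 50 == 0:
            print(gen, {s: round(x, 2) for s, x in shares.items()})
        shares = generation(shares)
    # Whether the DU -> TFT -> CU rotation appears, and how fast, is delicately
    # sensitive to the payoffs, the noise, and the mutation rate.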
Even the slightest imperfection in strategy can produce drastically different results. One simulation started with the 16 possible strategy vectors described by the class of level 1 Pavlovian strategies, introducing a minute error at each round. Neither TFT nor DU was able to dominate this environment. Instead, a rather odd strategy (P1) dominated when error was present.
The decision criterion for P1 is:
If the previous encounter was either a mutual Cooperation or a mutual Defection, then Cooperate in this encounter. Otherwise, Defect.
This strategy tends to eliminate all alternative strategies, yet plays well with itself. P1 against DU leads to a series of alternating mutual Defections and betrayals; DU does not thrive when half of its encounters end in mutual Defection. P1 and TFT can mutually thrive only if they start off with mutual Cooperation. Any other initial state leads to a cycle of two betrayals followed by a mutual Defection. P1 thrives against itself, rapidly converging on mutual Cooperation.
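These behaviors follow directly from the rule, which matches what the literature calls Pavlov or win-stay-lose-shift. A sketch of P1 and the alternation it produces against DU (the match-runner and its seeded histories are illustrative):

    # P1: Cooperate after a mutual outcome (CC or DD), otherwise Defect.
    def p1(my_hist, opp_hist):
        if not my_hist:
            return 'C'
        return 'C' if my_hist[-1] == opp_hist[-1] else 'D'

    def all_d(my_hist, opp_hist):
        return 'D'

    def run(move_a, move_b, rounds=8, start=('', '')):
        """Illustrative runner; 'start' seeds the histories to test other openings."""
        a_hist, b_hist = list(start[0]), list(start[1])
        for _ in range(rounds):
            a = move_a(a_hist, b_hist)
            b = move_b(b_hist, a_hist)
            a_hist.append(a)
            b_hist.append(b)
        return ''.join(a_hist), ''.join(b_hist)

    print(run(p1, all_d))                 # ('CDCDCDCD', 'DDDDDDDD'):
                                          # betrayals alternate with mutual D
    print(run(p1, p1))                    # mutual Cooperation from the start
    print(run(p1, p1, start=('D', 'C')))  # one mutual Defection, then all-C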
PD analysis will probably never give us definitive answers to "what is the best strategy for self-gain?" or "what strategy best enhances ethical behavior?". There are too many factors, especially in real life, to permit a single solution.