Rationally Speaking


N. 8, March 2001: Game Theory, Rational Egoism and the Evolution of Fairness


This column can be posted for free on any appropriate web site and reprinted in hard copy by permission. If you are interested in receiving the html code or the text, please send an email

Massimo, based upon some interesting research, shows that selfishness can be bred out of social animals.


Is it rational to be ethical? Many philosophers have wrestled with this most fundamental of questions, attempting to clarify whether humans are well served by ethical rules or whether those rules merely weigh us down. Would we really be better off if we all gave in to the desire to look out only for our own interests and to take the greatest possible advantage for ourselves whenever we could? Ayn Rand, for one, thought that the only rational behavior is egoism, and books aiming at increasing personal wealth (presumably at the expense of someone else’s) regularly make the bestseller lists.

Plato, Kant, and John Stuart Mill, to mention a few, have tried to show that there is more to life than selfishness. In the Republic, Plato has Socrates defending his philosophy against the claim that justice and fairness are only whatever the rich and powerful decide they are. But the arguments of his opponents — that we can see plenty of examples of unjust people who lead great lives and of just ones who suffer just as greatly — seem more convincing than the high-mindedness of the father of philosophy.

Kant attempted to reject what he saw as the self-serving attitude of Christian ethics, where you are good now because you will get an infinite payoff later, and to establish independent rational foundations for morality. He therefore suggested that in order to decide whether something is ethical one has to ask what would happen if everybody adopted the same behavior. However, Kant never explained why his version of rational ethics is indeed rational. Rand would object that establishing double standards, one for yourself and one for the rest of the universe, makes perfect sense.

Mill also tried to establish ethics on firm rational foundations, in his case improving on Jeremy Bentham’s idea of utilitarianism. In chapter two of his book Utilitarianism, Mill writes: “Actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness.” Leaving aside the thorny question of what happiness is and the difficulty of actually making such calculations, one still has to answer the fundamental question of why one should care about increasing the average degree of happiness instead of just one’s own.

Things got worse with the advent of modern evolutionary biology. It seemed for a long time that Darwin’s theory would provide the naturalistic basis for the ultimate selfish universe: nature red in tooth and claw evokes images of “every man for himself,” in pure Randian style. In fact, Herbert Spencer popularized the infamous doctrine of “Social Darwinism” (which Darwin never espoused) well before Ayn Rand wrote Atlas Shrugged.

Recently, however, several scientists and philosophers have been taking a second look at evolutionary theory and its relationship with ethics, and are finding new ways of realizing the project of Plato, Kant, and Mill of deriving a fundamentally rational way of being ethical. Elliott Sober and David Sloan Wilson, in their Unto Others: The Evolution and Psychology of Unselfish Behavior, as well as Peter Singer in A Darwinian Left: Politics, Evolution, and Cooperation, argue that human beings evolved as social animals, not as lone, self-reliant brutes. In a society, cooperative behavior (or at least a balance between cooperation and selfishness) will be favored by selection, while looking out exclusively for number one will be selected against, because it reduces the fitness of most individuals and of the group as a whole.

All of this sounds good, but does it actually work? A recent study published in Science by Martin Nowak, Karen Page and Karl Sigmund provides a splendid example of how mathematical evolutionary theory can be applied to ethics, and of how social evolution in fact favors fair and cooperative behavior. Nowak and coworkers tackled the problem posed by the so-called “ultimatum game.” In it, two players are offered the possibility of winning a pot of money, but they have to agree on how to divide it. One of the players, the proposer, makes an offer of a split ($90 for me, $10 for you, for example); the other player, the responder, can only accept or reject it. If she accepts, the money is divided as proposed; if she rejects, the game is over and neither of them gets any money.
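
To make the setup concrete, here is a minimal sketch of a single round in Python; the $100 pot, the play_round name, and its two arguments are illustrative assumptions, not anything taken from the study.

    # Minimal sketch of one ultimatum-game round (illustrative only; the
    # $100 pot and the play_round signature are assumed, not from the paper).

    POT = 100  # total amount to be divided

    def play_round(offer, responder_accepts):
        """Return (proposer_payoff, responder_payoff) for a single round.

        offer              -- amount the proposer gives to the responder (0..POT)
        responder_accepts  -- True if the responder takes the proposed split
        """
        if responder_accepts:
            return POT - offer, offer   # the split is carried out as proposed
        return 0, 0                     # rejection: neither player gets anything

    # The lopsided split from the example above, accepted and then rejected:
    print(play_round(10, True))    # -> (90, 10)
    print(play_round(10, False))   # -> (0, 0)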

It is easy to demonstrate that the rational strategy is for the proposer to behave egoistically and suggest a highly uneven split in which she takes most of the money, and for the responder to accept; the alternative is that neither of them gets anything. However, when real human beings from a variety of cultures play the game, for a panoply of different rewards, the outcome is almost invariably close to a fair split of the prize. This would seem to be prima facie evidence that the human sense of fair play overwhelms mere rationality and thwarts the rationalistic prediction. On the other hand, it would also provide Ayn Rand with an argument that most humans are simply stupid, because they don’t appreciate the math behind the game.
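
A few lines of arithmetic make the point. This is only a worked illustration with an assumed $100 pot; it is not part of the published analysis.

    POT = 100   # assumed pot size for the worked example

    # For a purely self-interested responder, accepting any positive offer
    # beats rejecting it, since a rejection pays zero.
    for offer in (1, 10, 50):
        payoff_if_accept = offer
        payoff_if_reject = 0
        print(f"offer {offer}: accept pays {payoff_if_accept}, reject pays {payoff_if_reject}")

    # Expecting acceptance of anything, the payoff-maximising proposer offers
    # the smallest positive amount (1) and keeps POT - 1 = 99: exactly the kind
    # of lopsided split that real players routinely refuse.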

Nowak and colleagues, however, simulated the evolution of the game in a situation in which several players get to interact repeatedly. That is, they considered a social situation rather than isolated encounters. If the players have memory of previous encounters (i.e., each player builds a “reputation” in the group), then the winning strategy is to be fair, because players are willing to punish stingy proposers: rejecting an unfair offer costs the punisher that round’s winnings, but it builds her reputation for insisting on fair splits and damages the proposer’s reputation for the next round, so that she can expect better offers in the future. This means that — given the social environment — it is rational to be less selfish toward your neighbors.
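
The paper’s actual model is an evolutionary simulation over whole populations of strategies; the sketch below is only a drastically simplified toy showing the direction of the reputation effect, using an invented probing proposer and two invented responder types.

    # Toy illustration of the reputation effect (NOT the Nowak-Page-Sigmund
    # model): a self-interested proposer remembers the highest offer this
    # responder has ever rejected and, next time, concedes just one dollar
    # more. Each round splits a $100 pot; the round count is an arbitrary
    # assumption.

    ROUNDS = 200

    def lifetime_earnings(acceptance_threshold):
        highest_rejected = 0   # the responder's "reputation", visible to the proposer
        earned = 0
        for _ in range(ROUNDS):
            offer = highest_rejected + 1        # proposer concedes as little as possible
            if offer >= acceptance_threshold:   # responder accepts the split
                earned += offer
            else:                               # responder punishes the low offer:
                highest_rejected = offer        # she forgoes the money this round, but
                                                # all later offers must come in higher
        return earned

    # The "rational" responder who accepts anything ends up far poorer than the
    # one who insists on a fair split, because a soft reputation invites exploitation.
    print("accepts anything :", lifetime_earnings(1))    # 200 dollars over 200 rounds
    print("insists on 50/50 :", lifetime_earnings(50))   # 7550 dollars over 200 rounds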

While we are certainly far from a satisfying mathematical and evolutionary theory of morality, it seems that science does, after all, have something to say about optimal ethical rules. And the emerging picture is one of fairness — not egoism — as the smart choice to make.






Massimo’s other ramblings can be found at his Skeptic Web.

Massimo’s books:

Denying Evolution: Creationism, Scientism, and the Nature of Science
Tales of the Rational: Skeptical Essays About Nature and Science

