The name Snowdrift.coop refers to the Snowdrift Dilemma, a metaphor from game theory (the study of strategic decision making).1
In a small neighborhood after a winter storm, a big snowdrift blocks the road. It’s far too much for one person to easily clear, and everyone has lots of other things to do; yet everyone needs the road cleared sooner or later.
If you go out to shovel right away, you can’t assume others will be so eager, and you’ll likely end up doing all or most of the work. So, you might wait to see if someone else will get started — perhaps then you’ll come help. Of course, you have other things to do. You’d be happy if other folks went and cleared the road without you.
If you know what others will do, then your choice to work or not is easier. So, a rational strategy is to wait and see what others choose before you decide. But if everyone does that, we get the worst-case scenario: everyone waits for everyone else, and we see no progress.2
Often, someone who can’t wait any longer ends up shoveling alone until they can get through. Thus, the snowdrift gets partly cleared, but this one person took on an unfair share of the burden at great personal cost. If only we could all trust each other to cooperate right away, the work would get done sooner and in a more efficient and fair way.
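The structure of the dilemma can be sketched as a small payoff function. The specific numbers below are hypothetical; only their ordering matters: a cleared road is worth more than the cost of shoveling, and sharing the work is cheaper than doing it alone.

```python
# Illustrative snowdrift payoffs (numbers are hypothetical assumptions).
# Benefit of a cleared road: b = 4; total cost of clearing it: c = 2.
b, c = 4, 2

def payoff(you_shovel, other_shovels):
    """Return your payoff given each player's choice."""
    if you_shovel and other_shovels:
        return b - c / 2        # share the work
    if you_shovel and not other_shovels:
        return b - c            # do it all yourself
    if not you_shovel and other_shovels:
        return b                # free ride on the other's effort
    return 0                    # nobody shovels; the road stays blocked

# Your best move depends on the other player: shovel if they won't,
# but free-ride if they will.
assert payoff(True, False) > payoff(False, False)
assert payoff(False, True) > payoff(True, True)
```

This dependence on the other player's choice is exactly why "wait and see" is tempting here, as described above.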
Iteration changes default strategies
These dilemmas may lead to greater cooperation when played as ongoing, recurring situations. Basically, when there is a next time, then players start to consider how actions now will affect the future. In iterated games, players may recognize the value of demonstrating their good will by volunteering immediately because that can help build trust that leads to future cooperation (even though players risk betrayal on any one occasion).
In studies of such iterated games, the most typically successful strategy is tit-for-tat: volunteer by default, but if the other player doesn’t cooperate, then the next time, don’t volunteer first. Hopefully, they will get the message; if they return to cooperating, so do you. Even with this strategy, players can fall into cycles of non-cooperation.
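Tit-for-tat is simple enough to state as code. The sketch below is a minimal illustration of the strategy itself (the function names and round count are our own choices, not from any particular study):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else True

def always_defect(my_history, their_history):
    """A non-cooperator, for contrast."""
    return False

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each player's moves."""
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a = strategy_a(a_hist, b_hist)
        b = strategy_b(b_hist, a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

# Two tit-for-tat players cooperate on every round...
a_hist, b_hist = play(tit_for_tat, tit_for_tat)
assert all(a_hist) and all(b_hist)

# ...while against a constant defector, tit-for-tat is exploited only once.
a_hist, b_hist = play(tit_for_tat, always_defect)
assert a_hist == [True] + [False] * 9
```

Note how the second run also shows the risk mentioned above: a single defection by one side locks both players into mutual non-cooperation for the rest of the game.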
Game theory applied to real life
Many real-world circumstances resemble these game-theory dilemmas, such as American and Soviet nuclear proliferation strategies during the Cold War.3 Economists use similar metaphors when discussing problems like the Tragedy of the Commons, which concerns dilemmas in the production, maintenance, and consumption of public goods.
Voluntary contributions dilemma
When soliciting support for a FLO project, the same sort of collective action questions arise.
In the worst case, consider a project that runs a significant risk of failure due to insufficient resources. Without any guarantee of adequate help from others, any one person risks totally wasting their contributions (whether as a developer donating time and effort or as a patron donating funds). Given FLO terms, everyone will get the results of success whether or not they took any of the risk. So, a self-interested player will avoid getting involved in such projects. Maybe a project actually has adequate potential support, but because everyone hesitates to accept the risk, the project fails.
In a case where one person’s contribution can guarantee some success, things may work better. Perhaps one dedicated developer can keep a project going. Perhaps a large grant from a wealthy donor or corporate sponsor guarantees at least minimal resources. Such cases mean less risk of total loss for other donors and volunteers, but supporters may still hesitate because their input makes only a minor impact at significant personal cost. After all, everyone still gets the results of FLO projects whether or not they chip in. So, only a small fraction of those who appreciate FLO projects actively support them, and even successful projects typically struggle to get by with far fewer resources than ideal.
Crowdfunding campaigns partly address these dilemmas
Standard crowdfunding platforms like Kickstarter address these dilemmas by having donors pledge their support on the condition that everyone together reaches a preset fundraising goal. This threshold assurance is a factor in the successful crowdfunding boom. However, there are several problems with threshold campaigns.
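The assurance mechanism amounts to a simple all-or-nothing rule, sketched below (the function, names, and amounts are hypothetical illustrations, not any platform's actual API):

```python
def run_threshold_campaign(pledges, goal):
    """Charge pledges only if the total meets the goal; otherwise
    nobody pays. This is the assurance that makes pledging low-risk."""
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges)  # campaign succeeds; all pledges collected
    return {}                 # campaign fails; no one is charged

# Hypothetical example:
pledges = {"alice": 50, "bob": 30, "carol": 25}
assert run_threshold_campaign(pledges, goal=100) == pledges  # 105 >= 100
assert run_threshold_campaign(pledges, goal=200) == {}       # falls short
```

Because a pledge costs nothing unless the goal is reached, no individual risks wasting their contribution on an underfunded project.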
Snowdrift.coop goes further
As noted above, iterated, ongoing situations encourage cooperation. Unlike one-time crowdfunding campaigns, the Snowdrift.coop community-wide matching pledge is an ongoing, iterated system that also provides mutual assurance.
Addressing social psychology
Of course, real people are more complex than the rational self-interest model of classical economic game theories. In the real world, we have complex social motivations like honor, altruism, guilt, and revenge. In addition to solving the basic snowdrift dilemma, Snowdrift.coop is designed to address social psychological factors as well.
With any social contract (in our case, the agreement that I will do my part if you do yours), it helps to keep a public record and honor those who follow through. Furthermore, we need a culture that values altruism and focuses on a larger purpose. Snowdrift.coop considers both of these honor-based factors alongside our formal pledge.
As a branch of classical economics, game theory may assume all participants to be self-interested rational actors. We know you aren’t like that, but considering the model is still insightful despite its limitations.↩
The Snowdrift Dilemma is usually discussed in contrast to the Prisoner’s Dilemma, the best-known problem in game theory. By comparison, the Prisoner’s Dilemma is worse and more intractable: whereas the Snowdrift Dilemma leads to a strategy of waiting to see what others decide, the Prisoner’s Dilemma makes defecting (i.e. not cooperating) always the self-interested rational choice (in one-time, non-iterated games).
For those unfamiliar with the original Prisoner’s Dilemma:
Two prisoners have been charged with a crime. They are separated and each asked to confess. If both confess, they will both be convicted. If they both claim innocence, they will face a lighter charge. If one confesses and the other refuses, the one who confessed will go free; and their testimony will be used to convict the other prisoner — who will then get an extra harsh sentence for refusing to talk.
So, as a player in this game:
- If the other prisoner stays silent, you can confess and go free.
- If the other prisoner confesses, you had better confess as well — otherwise you’ll get an extra harsh sentence.
It doesn’t matter what the other player chooses! You should confess regardless. So, two rational prisoners will both confess in this game. Yet they would both be better off if they had both stayed silent! In other words, everyone prospers more with cooperation than without it, but the game is designed so nobody will cooperate. The Snowdrift Dilemma is more nuanced and more likely to lead to cooperation, and, thankfully, many real-life situations are more like the snowdrift dilemma than the prisoner’s dilemma, but the details vary from case to case. An article at phys.org summarizes a relevant study titled “Human cooperation in social dilemmas: comparing the Snowdrift game with the Prisoner’s Dilemma”.↩
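The dominance argument above can be checked mechanically. The sentence lengths below are hypothetical (the story gives only their ordering); lower is better since these are years in prison:

```python
# Illustrative Prisoner's Dilemma sentences (years in prison; the exact
# numbers are assumptions, only their ordering comes from the story).
def sentence(you_confess, other_confesses):
    if you_confess and other_confesses:
        return 5    # both convicted
    if you_confess and not other_confesses:
        return 0    # you go free
    if not you_confess and other_confesses:
        return 10   # extra harsh sentence for refusing to talk
    return 1        # both silent: lighter charge

# Whatever the other prisoner does, confessing leaves you better off:
for other in (True, False):
    assert sentence(True, other) < sentence(False, other)

# Yet mutual silence (1 year each) beats mutual confession (5 years each).
assert sentence(False, False) < sentence(True, True)
```

This is what makes the Prisoner's Dilemma intractable: unlike in the snowdrift game, confessing strictly dominates, so no amount of "wait and see" helps.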