The Snowdrift Dilemma

The name Snowdrift.coop refers to the Snowdrift Dilemma, a metaphor from game theory (the study of strategic decision making).1

Snowdrift blocks the road

In a small neighborhood after a winter storm, a big snowdrift blocks the road. It’s too much for one person to easily clear, and everyone has lots of other things to do; yet everyone needs the road cleared sooner or later.

Will you help clear it?

If you go out to shovel right away, you can’t assume others will be so eager, and you’ll likely end up doing all or most of the work. So, you might wait to see if someone else will get started — perhaps then you’ll come help. Of course, you have other things to do. You’d be happy if other folks went and cleared the road without you.

Who gets started first?

If you know what others will do, then your choice to work or not is easier. So, a rational strategy is to wait and see what others choose before you decide. But if everyone does that, we get the worst case scenario: everyone waits for everyone else, and we see no progress.2
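
To make that “wait and see” logic concrete, here is a minimal sketch of the two-player snowdrift game in Python; the benefit and cost numbers are illustrative assumptions, chosen only so that sharing the work beats doing it all alone.

    # Illustrative two-player snowdrift game (the numbers are assumptions).
    # A cleared road is worth BENEFIT to each neighbor; shoveling costs COST
    # in total effort, split evenly if both shovel.
    BENEFIT = 6
    COST = 4

    def payoff(you_shovel, other_shovels):
        """Your payoff given both players' choices."""
        if you_shovel and other_shovels:
            return BENEFIT - COST / 2   # share the work
        if you_shovel:
            return BENEFIT - COST       # do all of it yourself
        if other_shovels:
            return BENEFIT              # free ride on their effort
        return 0                        # road stays blocked

    for other in (True, False):
        best = max((True, False), key=lambda you: payoff(you, other))
        print(f"If the other person shovels={other}, your best response is shovel={best}")
    # If they shovel, stay home; if they don't, shovel.
    # Your best move depends on theirs, which is exactly why waiting to see is tempting.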

Often, someone who can’t wait any longer ends up shoveling alone until they can get through. Thus, the snowdrift gets partly cleared, but that one person takes on an unfair share of the burden at great personal cost. If only we could all trust each other to cooperate right away, the work would get done sooner and in a more efficient and fair way.

Iteration changes our strategies

These dilemmas can lead to greater cooperation when they play out as ongoing, recurring situations. Basically, when there is a next time, players start to consider how their actions now will affect the future. In iterated games, players may recognize the value of demonstrating their good will by volunteering immediately. We want to build trust that leads to future cooperation (even though we risk betrayal on any one occasion).

In studies of such iterated games, the most consistently successful strategy is tit-for-tat: volunteer by default, but if the other player doesn’t cooperate, then don’t volunteer first the next time. Hopefully, they will get the message: if they cooperate, then you will cooperate. Even with this strategy, players can fall into cycles of non-cooperation.
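
As a rough illustration of how tit-for-tat behaves, here is a minimal sketch in Python; the opponent’s moves are made up for the example, and no particular payoff scheme or library is assumed.

    def tit_for_tat(their_history):
        """Cooperate on the first round; afterwards, copy the opponent's last move."""
        if not their_history:
            return "cooperate"
        return their_history[-1]

    def play(opponent_moves):
        """Run tit-for-tat against a fixed, known sequence of opponent moves."""
        mine, theirs = [], []
        for their_move in opponent_moves:
            mine.append(tit_for_tat(theirs))
            theirs.append(their_move)
        return mine

    # A hypothetical opponent who defects twice, then cooperates again.
    opponent = ["cooperate", "defect", "defect", "cooperate", "cooperate"]
    print(play(opponent))
    # ['cooperate', 'cooperate', 'defect', 'defect', 'cooperate']
    # Tit-for-tat answers each defection one round later, then resumes cooperating.

The cycles of non-cooperation mentioned above arise when two tit-for-tat players echo a single defection back and forth indefinitely.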

Game theory applied to real life

Many real-world circumstances resemble these game theory dilemmas, such as the American and Soviet nuclear proliferation strategies during the Cold War.3 Economists use similar metaphors when discussing problems like the Tragedy of the Commons, a concept about the dilemmas involved in producing, maintaining, and consuming public goods.

Voluntary contributions dilemma

When a FLO project solicits support, the same sorts of collective action questions arise.

In the worst case, consider a project that runs a risk of failure due to insufficient resources. Without any guarantee of adequate help from others, any one person risks totally wasting their contributions (whether as a developer donating time and effort or as a patron donating funds).

Given FLO terms, everyone will get the results of success whether or not they took any of the risk. So, a self-interested player will avoid getting involved in projects that may fail. Thus, even when a project actually has adequate potential support, it can fail simply because everyone hesitates to accept the personal risk of contributing.

In a case where one person’s contribution can guarantee some success, things may work better. Perhaps one dedicated developer can keep a project going. Perhaps a large grant from a wealthy donor or corporate sponsor is enough to provide the minimum necessary resources. Such cases mean less risk for other donors and volunteers, but supporters may still hesitate because their input makes only a minor impact at a significant personal cost.

Regardless of the risk, everyone still gets the results of FLO projects whether or not they chip in. So, only a small fraction of those who appreciate FLO projects actively support them. Even successful projects typically struggle to get by with far fewer resources than would be ideal.

Crowdfunding campaigns partly address these dilemmas

Standard crowdfunding platforms like Kickstarter address these dilemmas by having donors pledge their support on the condition that everyone together reaches a preset fundraising goal. This threshold assurance is a major factor in the successful crowdfunding boom. However, there are several problems with threshold campaigns.
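
As a minimal sketch of that threshold-assurance logic (the function and the numbers below are hypothetical illustrations, not any platform’s actual rules or API): pledges are only collected if the campaign total reaches the preset goal; otherwise nobody pays anything.

    def settle_campaign(pledges, goal):
        """Charge pledges only if the preset goal is reached; otherwise charge nothing.

        This mirrors the assurance behind threshold crowdfunding: no single
        supporter risks paying toward a project that never gets funded.
        """
        if sum(pledges.values()) >= goal:
            return pledges                      # goal met: everyone is charged
        return {name: 0 for name in pledges}    # goal missed: nobody pays

    # Hypothetical example numbers:
    pledges = {"alice": 50, "bob": 30, "carol": 40}
    print(settle_campaign(pledges, goal=100))   # total 120 >= 100: all charged
    print(settle_campaign(pledges, goal=200))   # total 120 < 200: nobody charged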

Snowdrift.coop goes further

As explained above, ongoing situations encourage cooperation. Unlike one-time crowdfunding campaigns, the Snowdrift.coop crowdmatching pledge provides both mutual assurance and sustained, ongoing support.
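
The pledge mechanics are described elsewhere on the site. As a loose, illustrative sketch only, assuming each patron’s monthly donation scales with the number of fellow patrons (the per-patron rate below is a made-up placeholder, not the actual pledge terms):

    def monthly_donation(patron_count, rate_per_patron=0.001):
        """Crowdmatching sketch: each patron gives a small amount per fellow patron.

        The rate (0.1 cent per patron per month) is a hypothetical value for
        illustration; the real pledge terms are defined by the site itself.
        """
        return patron_count * rate_per_patron

    for patrons in (100, 1_000, 10_000):
        each = monthly_donation(patrons)
        print(f"{patrons:>6} patrons -> ${each:.2f} each, ${patrons * each:,.2f}/month total")
    # As the crowd grows, every patron gives a bit more, so the project's sustained
    # monthly support grows much faster -- mutual assurance plus ongoing support.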

Addressing social psychology

Real people are more complex than the rational self-interest model of classical economic game theories. We have complex social motivations like honor, altruism, guilt, and revenge. Besides solving the basic snowdrift dilemma, Snowdrift.coop is designed to address social psychological factors as well.

With any social contract (in our case, the agreement that I will do my part if you do yours), it helps to keep a public record and honor those who follow through. We also need a culture that values altruism and focuses on a larger purpose. Snowdrift.coop considers both of these honor-based factors alongside our formal pledge.


  1. As a branch of classical economics, game theory often assumes all participants to be self-interested rational actors. We know you aren’t like that, but the model remains insightful despite its limitations.

  2. The Snowdrift Dilemma is usually discussed in contrast to the Prisoner’s Dilemma, one of the best-known problems in game theory. By comparison, the Prisoner’s Dilemma is worse and more intractable. Whereas the Snowdrift Dilemma leads to a strategy of wanting to know what others decide, the Prisoner’s Dilemma makes defecting (i.e. not cooperating) always the self-interested rational choice (for one-time, non-iterated games).

    For those unfamiliar with the original Prisoner’s Dilemma:

    Two prisoners have been charged with a crime. They are separated and each asked to confess. If both confess, they will both be convicted. If they both claim innocence, they will face a lighter charge. If one confesses and the other refuses, the one who confessed will go free; and their testimony will be used to convict the other prisoner — who will then get an extra harsh sentence for refusing to talk.

    So, as a player in this game:

    • If the other prisoner stays silent, you can confess and go free.
    • If the other prisoner confesses, you had better confess as well — otherwise you’ll get an extra harsh sentence.

    It doesn’t matter what the other player chooses! You should confess regardless. So, two rational prisoners will both confess in this game. Yet they would both be better off if they had both stayed silent! In other words, everyone prospers more with cooperation than without it, but the game is designed so that nobody will cooperate. The Snowdrift Dilemma is more nuanced and more likely to lead to cooperation. Thankfully, many real-life situations are more like the Snowdrift Dilemma than the Prisoner’s Dilemma, though the details vary from case to case. An article at phys.org summarizes a relevant study titled “Human cooperation in social dilemmas: comparing the Snowdrift game with the Prisoner’s Dilemma”.
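
    For readers who want to check the “confess regardless” reasoning, here is a minimal sketch in Python with illustrative prison-sentence lengths; the specific numbers are assumptions, and only their ordering matters.

        # Illustrative Prisoner's Dilemma payoffs, as years in prison (lower is better).
        YEARS = {
            ("silent",  "silent"):  (1, 1),    # both face the lighter charge
            ("silent",  "confess"): (10, 0),   # you get the extra harsh sentence
            ("confess", "silent"):  (0, 10),   # you go free, the other is convicted
            ("confess", "confess"): (5, 5),    # both convicted
        }

        for other in ("silent", "confess"):
            best = min(("silent", "confess"), key=lambda me: YEARS[(me, other)][0])
            print(f"If the other prisoner's choice is '{other}': your best choice is '{best}'")
        # In both cases the answer is "confess", yet (confess, confess) leaves both
        # prisoners worse off than if both had stayed silent.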

  3. http://www.strategicstudiesinstitute.army.mil/Pubs/display.cfm?pubid=585