After years of diligence and perseverance, a multinational team has invented a movie-recommendation algorithm that improves on Netflix’s current one by over 10%, beating the many other teams vying for the same achievement. But this is all part of a contest: the team that reaches the goal (a 10% improvement in accuracy) and holds its lead for a month wins one million dollars. The rest have just wasted their time.

Crowdsourcing is like an all-pay auction: one person benefits, and everyone else loses. In this case, even the team that “wins” isn’t necessarily winning. Think about it: Netflix is a $2-billion-a-year business. If you started a competing company whose recommendations were over 10% better than those of the leading competitor (Netflix), don’t you think you could capture at least 0.01% of that $2 billion? That’s $200,000, every year. If you don’t think you can win even 0.01% of the business, then your algorithm isn’t good enough.
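The arithmetic behind that figure, as a quick sanity check (both numbers are the post’s own assumptions, not data from Netflix):

```python
netflix_revenue = 2_000_000_000  # ~$2 billion/year, the figure used above
market_share = 0.0001            # 0.01% of that business

# 0.01% of $2B is $200k/year -- the claimed floor for a "good enough" algorithm.
print(market_share * netflix_revenue)  # 200000.0
```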

Speaking of algorithms, I am curious how they measure that 10% improvement, and how efficient the algorithm has to be. If it takes twice as much computing power to produce a recommendation that’s 10% better, it may not make business sense to double the cluster size. If it’s an NP-hard algorithm that works only on small samples, then it’s obviously not going to scale to all of Netflix. The semantics of the competition matter, so what are they?
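For what it’s worth, the contest reportedly scores submissions by root-mean-square error (RMSE) on a held-out set of ratings, with the 10% meaning a 10% reduction in RMSE relative to Netflix’s own system. A minimal sketch of that scoring, with made-up ratings (all the numbers below are invented for illustration):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and true star ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical 1-5 star ratings on a held-out test set.
actual     = [4,   3,   5,   2,   4,   1]
incumbent  = [3.5, 3.2, 4.1, 2.8, 3.6, 2.0]  # the baseline recommender's predictions
challenger = [3.9, 3.1, 4.7, 2.3, 3.9, 1.4]  # a competing algorithm's predictions

# Relative reduction in RMSE; the contest threshold would be >= 10%.
improvement = (rmse(incumbent, actual) - rmse(challenger, actual)) / rmse(incumbent, actual) * 100
print(f"improvement: {improvement:.1f}%")
```

Note that a metric like this says nothing about efficiency: two algorithms with identical RMSE can differ by orders of magnitude in the compute they need per recommendation, which is exactly the business question raised above.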

I do wonder, though: do those who participate in crowdsourcing projects expect to win? If so, how do they react to losing? If not, are they participating just “for fun” or for the experience? Really, what’s the incentive?