Competitive Collaboration or Collaborative Competition

Howard Fisher in the Transmission Project’s newly released report “Back to the Source: How Collaboration Can Transform Online Engagement”:

In 2010, The John S. and James L. Knight Foundation pioneered the use of open commenting on its Knight News Challenge. The Challenge aims to spur innovation around providing information to communities using digital, open-source technology. Anyone who has access to Knight’s website can apply, and last year applicants had the option of applying openly – that is, making their application available to be read, rated, and commented on by visitors to the website. By doing so, applicants might benefit from the public’s feedback. Between the time applicants posted their proposals and the December 1 deadline, they could change their proposals to reflect improvements based on others’ suggestions. Because Knight’s News Challenge drew experience from the crowd in the form of feedback, allowed users to rate projects, and comprised a funding contest, it exemplifies a hybrid model that includes elements of the crowdsourcing approaches Kanter terms wisdom, voting, and funding.

Recognizing the gap between foundations and the work of organizations, Knight director of Digital Media Grants John Bracken says that in crowdsourcing feedback, the foundation saw the opportunity to bring expertise from the field to its technology initiative. In an online Q&A, Bracken likewise says Knight was looking “to be surprised and see things we haven’t seen before.” He acknowledges, however, that he doesn’t think Knight executed its crowdsourcing effort well.

In Knight’s case, increased inclusiveness came into conflict with the structure of a funding contest. The Challenge’s website conveyed mixed messages about how the public’s feedback influences the review and judging process. On one hand, the site stated as of June 6, 2011 (before winners were announced), “applicants who choose the public option will not receive preferential consideration. Likewise, those who choose the closed option will not be penalized.” On the other hand, the site elaborated, “the public rating and commenting is by no means the only parameter we use to choose the best projects. We give more weight to our panel of experts,” who gauge a project’s potential impact based on how well it fits into one of the predetermined categories “mobile,” “authenticity,” “sustainability,” or “community.” The website implies that review panelists do give some weight to public comments even as it denies showing preference for open submissions. Lack of transparency and intentionality regarding the rating and commenting system limited the benefits Knight could reap from its use of crowdsourcing.

To its credit, Knight reflects on the complications of crowdsourcing feedback in the context of a funding competition. In particular, its FAQ section pointed to the bias of users: “We hope everyone is acting in good faith, but we understand that applicants can subjectively rate other entrants’ projects.” This implies users could leave negative feedback on their competitors’ projects in an attempt at subterfuge. Likewise, Knight addresses concerns about viewers stealing participants’ ideas:

It’s the trade-off for having the opportunity to use the wisdom of the crowd to improve your entry… Submitting an ‘open’ application means you are either confident enough in your own abilities and track record that you’ll be chosen to do the work even if others have similar ideas, or that you don’t really care who does the work as long as it gets done.

Here Knight anticipates the central tension that crowdsourcing introduces to a contest. By sponsoring a funding challenge, Knight hopes to drive social innovation by encouraging healthy competition among innovators. However, opening up innovators’ ideas to public collaboration would seem to undermine the spirit of competition; even as Knight highlights how crowd wisdom can improve a project and make it a more competitive candidate, it instructs applicants to only include the crowd if they “don’t really care who does the work as long as it gets done.” In the end, Knight resolves this tension by subordinating the role of the crowd: getting public support is made optional rather than essential to the success of a project. True to its name, Knight’s initiative is a challenge first and foremost.

The challenges of resolving these tensions not only indicate that practical applications of crowdsourcing are still experimental, but also reiterate what experts like Geoff Livingston have said: “While the crowd craves freedom, it desperately needs structure. People need to be told how to participate and the rules of engagement. These rules have to be clear, empowering of the crowd, and directive in their end result.” Effective use of crowdsourcing requires a great deal of intentionality and structure. Knight provides plenty of guidance for applicants but not for commentators. Moreover, the crowd needs to be the hero. Its contributions would have to equal if not supersede in importance the ideas of innovators as the focus of the funding process. Indeed, Knight’s Challenge merely emphasizes what is already true about applying for funding – that it is a competitive undertaking.

What really bothers me about these app contests, on a rhetorical level, is the conflict between “collaboration” and “competition.” They use the evolutionary language of competitive innovation yet often rely on collegial cooperation and altruism (not to mention existing capacity), as the effort required to participate may significantly outweigh the probability of prizes or benefits. The idea of “online” has become intrinsically tied to democratic participation (and other feel-goody stuff like collaboration) when, except where specifically designed for with much effort and forethought, it’s nothing of the sort.

On the other hand, while public collaboration may be difficult to organize, private collaboration between participating developers can be relatively easy yet awesome. Whether private collaboration is the result of physical closeness (such as at a weekend hackathon) or electronic proximity (such as a listserv set up for participants and organizers), creating a space where participants can ask and answer questions, share ideas, gently boast about their progress, and implicitly network can provide more fulfilling benefits for participation than just the prize at the end. From my participation in the Boston Hackday Challenge (physical collaboration) and DonorsChoose Hacking Education (listserv-based collaboration), winning a prize becomes more of the icing than the cake. Of course, these contests I’ve been involved with have also kept their bloviations about creating the “future of whatever” to a minimum, so maybe that has something to do with it too.

I could also go into how these contests may (briefly) fulfill many of the human and social needs that current technology careers lack, but that’s another blog post.

Update: My boss Belinda suggested I note that the Knight Foundation has previously funded DonorsChoose (she received a DonorsChoose GivingCard at a Knight conference).
