By harry2180

Case Study in Humanitarian Crowd-Solving: IRC in 2023

Updated: May 6

Summary:


This blog post analyzes the recently completed humanitarian challenge run by the International Rescue Committee and compares it to humanitarian challenges previously run by Habitat for Humanity and World Vision. An outline of the main points is below.



1. INTRODUCTION

Is two years a long time or a short time? It has been that long since the final World Vision (“WV”) open-innovation challenge stopped accepting submissions and our public involvement in humanitarian crowd-solving with Habitat for Humanity (“HFH”) and World Vision ended. To me, it doesn’t feel like a long time, because there has been active follow-up on a number of the eight challenges we ran together; that follow-up is the subject of a future blog post.


In the world of humanitarian crowd-solving, a lot can change in two years! In this blog I want to tell you what is new in the process and what has stayed the same. The International Rescue Committee (“IRC”) just completed its first humanitarian crowd-solving challenge and will announce the results in the near future. The IRC was looking for ideas on how to make female toilets in refugee camps safer and more desirable. SeaFreight Labs assisted the effort as Project Advisor and participated in the entire successful project. We used Wazoku (formerly InnoCentive, www.wazoku.com) as our open-innovation platform for this challenge, as we did for all prior challenges.


First, a definition. “Humanitarian Crowd-Solving” is an effort by a large global humanitarian organization to leverage the global crowd to help it solve high-impact internal problems in a cost-effective way. Often, the organization has been working on an issue for a long time and is not satisfied with its internal progress. Soliciting new ideas and assistance from outside the organization can lead to breakthroughs that open the way for promising further development.


Other definitions. The “seeker” is the humanitarian organization. The “solver” is someone – usually a total stranger to the organization – who submits a potential solution to the seeker’s published challenge. If a submission is awarded prize money, the solver is then called a “winner”.


The general process is straightforward. The organization solicits its internal staff for ‘crowd-suitable’ problems (click HERE for my blog on this topic). Once it finds such a problem, it defines what a good solution would look like, creates a ‘challenge statement’ and determines the cash prize it will award if a solver completely delivers on the stated requirements. The challenge is published to the global crowd and aggressively marketed to attract as many potential solvers as possible during the ‘open period’ of 45-90 days. The humanitarian challenges we have been involved in have attracted 22-80 ‘quality’ submissions, which the seeker then evaluates to determine whether the cash prize should be awarded. Seven of the nine humanitarian challenges we have participated in (78%) have awarded prize money to one or more solvers.
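The lifecycle just described can be modeled in a few lines of code. This is purely an illustrative sketch: the class, field names, and sample values are mine, and only the 7-of-9 award rate comes from this post.

```python
# Illustrative model of the crowd-solving lifecycle described above.
# The Challenge fields mirror the post's process: an open period of
# 45-90 days, 22-80 'quality' submissions, and a possible prize award.
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    open_days: int            # length of the 'open period'
    quality_submissions: int  # submissions judged worth evaluating
    awarded: bool             # did the seeker pay out the prize?

def award_rate(challenges: list[Challenge]) -> float:
    """Fraction of challenges that awarded prize money to a solver."""
    return sum(c.awarded for c in challenges) / len(challenges)

# The post reports that 7 of the 9 humanitarian challenges awarded prizes
# (every other value below is a hypothetical placeholder):
history = [Challenge(f"challenge-{i}", 90, 30, awarded=(i < 7)) for i in range(9)]
print(f"{award_rate(history):.0%}")  # prints "78%"
```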


So, the process works. Let me tell you how it is getting even better as we get more experience with this process of focused innovation.


2. WHAT’S NEW?


a. Changes at Wazoku


Over the past few years, following the acquisition of InnoCentive, Wazoku has worked to create more of a community in its crowd. This strategic shift drives Wazoku to interact more often, in more detail, and more openly with its crowd of hundreds of thousands of solvers. It has catalyzed a number of important changes in how humanitarian crowd-solving works, and I think it led to the improved quantity and quality of results that we saw in the most recent IRC challenge. Let me tell you about the most important of these changes.


i. Wazoku now encourages shorter, more frequent challenges.

Two years ago, each challenge was thought of as a ‘fixed piece of work’: a problem was brought to the crowd, an answer was obtained, and the seeker could use that answer for its own purposes. There was little to no thought of going back to the crowd to follow up on something that the challenge had uncovered. More recently, Wazoku has changed its thinking about the crowd and how a seeker can best utilize it. Now, it encourages the seeker to think of the crowd as a resource that is available for repeated consulting. This philosophy leads the seeker to shorten the solicitation period for a challenge so that answers are available faster and the next challenge can be run sooner. The most recent IRC challenge was open for only 48 days, the shortest open duration of any humanitarian crowd-solving challenge we have been involved in. Even with the shortened duration, however, the challenge still generated active engagement with solvers, as is visible in Figure 1.

Figure 1.

ii. Potential Solvers can read the whole challenge statement without registering.

In the HFH and WV challenges, a potential solver was forced to register for a challenge and accept the challenge-specific IP agreement BEFORE they could read the complete details of the challenge. This requirement probably drove a higher count of challenge registrations, because curious people had to register just to see whether they might be able to provide a solution. But I don’t think it had any positive effect on the quality or quantity of the eventual submissions.

In the most recent IRC challenge, the entire challenge definition was viewable by the public without registration. Everyone could read the IRC objectives; people only had to register if they wanted to know the IRC’s IP expectations and have the opportunity to submit a solution. Although the registrant count for the IRC challenge ended up at a similar level to the prior HFH and WV challenges, the meaning of the action being counted has changed in the two years since those challenges ran.

The impact of this difference is dramatically evident in the submission yield of the IRC challenge compared to the HFH and WV challenges. Submission yield is the number of submissions received for a challenge divided by the number of registered solvers; quality yield is the number of ‘quality’ submissions divided by the number of registered solvers. On its first challenge, the IRC’s yields were over twice the average of the eight HFH and WV challenges, as shown in Figure 2.

Figure 2.
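The two yield metrics are simple ratios, and a minimal sketch makes the definitions concrete. The counts below are made up for illustration; the post does not publish the raw figures behind Figure 2.

```python
# Yield metrics as defined in the text above.

def submission_yield(submissions: int, registrants: int) -> float:
    """Submissions received divided by registered solvers."""
    return submissions / registrants

def quality_yield(quality_submissions: int, registrants: int) -> float:
    """'Quality' submissions divided by registered solvers."""
    return quality_submissions / registrants

# Hypothetical challenge: 400 registrants, 60 submissions, 40 'quality'
print(submission_yield(60, 400))  # prints 0.15
print(quality_yield(40, 400))     # prints 0.1
```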


iii. More aggressive challenge promotion by Wazoku.


During the HFH and WV challenges a couple of years ago, Wazoku promoted each challenge to its crowd solely via its weekly newsletter, highlighting each challenge when it first launched and again when it was about to close. That promotion generated about 50% of the registrations and submissions that each challenge received[i].


In the most recent IRC challenge, Wazoku continued promotion in its weekly newsletter, and this generated a similar response to what we obtained in prior challenges. Wazoku supplemented it with two new programs that allowed potential solvers to engage directly with the seekers to better understand the challenge objectives. These were:


  • Focused webinar. Wazoku hosted a 36-minute webinar where potential solvers could ask questions of the seekers directly. Watch the full webinar HERE. Attendance peaked at 27 solvers.

  • Follow-up public Q&A. Wazoku encouraged ongoing Q&A from potential solvers so they could fully understand the IRC objectives. In the 2nd IRC challenge, Wazoku appended the questions and answers to the end of the challenge definition so that everyone could benefit from the clarifications provided by the seekers. Scroll to the end HERE for an example of this.

iv. Elimination of the “Wall of Anonymity” for Solvers.


The InnoCentive model of crowd-solving created a barrier between seekers and solvers that was not pierced until an award was made to one or more winners of a challenge. The intention of this anonymity was to ensure total objectivity by eliminating all opportunities for bias during judging. This strategy was used during all of the HFH and WV challenges.


In this most recent IRC challenge, Wazoku relaxed the enforcement of this strategy. Wazoku remained as the ‘middle-man’ between seekers and solvers but facilitated controlled interactions between the two groups while the challenge was open. Both the webinar and Q&A described in the prior section were possible because of this strategy change by Wazoku.


I think this change contributed to the significantly higher ‘quality yield’ seen in Figure 2, when compared to the prior HFH and WV challenges.


b. Changes in the IRC Process vs HFH/WV Process: Acceleration!


The IRC benefited from all of the hard-earned experience of the HFH and WV challenges. Because we knew potential roadblocks in advance of each process phase, the IRC project leadership was able to mitigate potential issues and keep the project on a tight schedule. Figure 3 shows the dramatic impact of this expertise and leadership.

Figure 3.


The total duration of the challenge, from the start of challenge design to the final internal decisions on winners, was only 195 days (about 6.5 months) for the IRC challenge seeking new thinking on lighting, locking and alerting. This compares very favorably to the seven relevant prior HFH/WV challenges, which took an average of 378 days (about 12.5 months) for the same set of processes. This is a HUGE improvement!


There are a few key project-management actions that delivered most of this acceleration.


  • Shorter open period for the challenge. The IRC challenge was open to the public for 48 days, while all of the HFH and WV challenges were open for 91-94 days. This saving of roughly 44 days knocked about 1.5 months off the project duration.

  • Faster judging in each round. The IRC used just two people to do the initial screening in round 1 of judging. The seeker leadership team created evaluation criteria BEFORE the closing date of the challenge, which enabled the team to begin judging immediately after the challenge closed. The team also made advance plans for how to involve the broader organization in the final round of evaluations, so that those evaluations could start as soon as round 2 was completed and the list of finalists was in hand.

These two project-management changes reduced the average number of days required for judging from 180 (6 months) to 64 (about 2 months). This is a real tribute to the leadership at the IRC and their ability to absorb the lessons learned from the HFH and WV experiences.
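The acceleration figures can be sanity-checked directly from the durations quoted in this section. Only the day counts come from the post; the days-per-month conversion constant is my assumption.

```python
# Sanity-checking the acceleration numbers quoted above, using only
# durations stated in the post.
DAYS_PER_MONTH = 30.4  # average month length; my conversion assumption

irc_total_days = 195    # challenge design through final winner decisions
hfh_wv_avg_days = 378   # average of the seven prior HFH/WV challenges

print(round(irc_total_days / DAYS_PER_MONTH))   # prints 6
print(round(hfh_wv_avg_days / DAYS_PER_MONTH))  # prints 12

# Open-period saving: HFH/WV ran 91-94 days vs 48 for the IRC challenge
open_saving = (91 + 94) / 2 - 48
print(open_saving)  # prints 44.5, i.e. roughly the ~44 days cited

# Judging shrank from an average of 180 days to 64
print(180 - 64)  # prints 116
```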


3. WHAT’S STAYED THE SAME?


a. The classic “Innovation Funnel” still works


The IRC challenge reproduced the diversity and reach of prior challenges run by HFH and WV (see Figure 4). I continue to be amazed that a well-publicized challenge run on a professional platform like Wazoku’s can attract attention from potential solvers from all around the world. Even better, these solvers submit thoughtful and valuable ideas with great intellectual diversity. I will write about this diversity in a future blog.


This section will discuss a few of the project strategies that led to this fantastic outcome.


Figure 4.


b. Seeker marketing is vital


During the 48 days of open solicitation, the IRC challenge page for the Female Latrines challenge counted 5,204 registered sessions from 2,277 identified users. This is a result of the marketing efforts of Wazoku, the IRC and various partners that offered marketing assistance to the IRC, often on a pro-bono basis. It is amazing that so many people could unite around a pressing humanitarian problem like this one.


The IRC was also aggressive in promoting the challenge via many of its own channels.



c. External marketing is also vital


This challenge was supported by a variety of like-minded organizations that wanted to help the IRC attract potential solvers to the challenge. These partners included:


  • HeroX (www.herox.com). HeroX hosted the challenge on their platform and gave solvers a direct link from which they could submit. Their posting is accessible HERE.

  • Engineering for Change (www.engineeringforchange.org). E4C publicized the challenge through a newsletter to their entire membership.

  • MIT SOLVE (solve.mit.edu). MIT SOLVE publicized the challenge through a newsletter to their entire membership.

  • Make.com. An ad was run in the monthly Maker newsletter to publicize the challenge.

The activity of the IRC and our marketing partners led to active engagement from outside the Wazoku crowd (see Figure 5). About 50% of solvers at every funnel stage were new to Wazoku because of the IRC challenge. This led to an impressive 43% of the winning solvers being from outside of the Wazoku crowd.


Figure 5.


d. The crowd really is global


It may seem like it should be easy to solicit ideas from all around the world. However, before the invention of crowd-solving by InnoCentive/Wazoku over 20 years ago, it was not possible. Now, with the passage of time and the dedicated focus of organizations like Wazoku, an organization like the IRC can simply publish a challenge, wait 48 days, and read scores of focused responses.


This challenge had registered solvers from 60 countries! They came from every continent. The submissions came from 44 countries. And the winners came from 7 different countries in Europe, North America, Asia and Oceania. Truly amazing!


e. Solvers still submit close to the deadline


Even with all of the changes that occurred between the HFH and WV challenges and the current IRC challenge, solver behavior with regard to the submission deadline did not change (see Figure 6). About 40% of the submissions came in during the last 3 days of the challenge, and 25-30% came in during the last day.


Figure 6.


[i] SeaFreight Labs supplemented these efforts with marketing partnerships (click HERE to read more about these).



