harry2180

Does CrowdSolving Work? – Round 1 of Evaluation

Updated: Jan 13, 2023

The first round of evaluation of the Habitat for Humanity Earthquake and Typhoon Resilience Challenge[1] and the World Vision Improved-Sanitation Challenge[2] is now almost complete (see Figure 1 for an overview of the challenge-evaluation process). What an exciting process it has been!

Figure 1. Typical process for Crowd-Solving Challenge Evaluation.


The Process


For each challenge, InnoCentive screened out incomplete submissions so that the initial judging team would read only complete, high-quality submissions. Across both challenges, the average screening loss was about 45% of the total number of submissions. The Habitat challenge, for example, received 82 initial submissions, of which 44 (54%) were forwarded to the judges.
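As a quick sanity check on those figures, here is a minimal Python sketch that recomputes the forwarding rates and the average screening loss from the counts quoted in this post and in the footnotes. The variable names are mine, purely for illustration.

```python
# Recompute the screening figures quoted in this post (illustrative only).
challenges = {
    "Habitat for Humanity": {"received": 82, "forwarded": 44},
    "World Vision": {"received": 126, "forwarded": 72},
}

losses = []
for name, counts in challenges.items():
    forwarded_rate = counts["forwarded"] / counts["received"]
    losses.append(1 - forwarded_rate)
    print(f"{name}: {counts['forwarded']}/{counts['received']} forwarded "
          f"({forwarded_rate:.0%}), screening loss {1 - forwarded_rate:.0%}")

# Averages to roughly 45%, matching the figure above.
print(f"Average screening loss: {sum(losses) / len(losses):.0%}")
```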


The quality of the remaining submissions was universally high. It was clear that each Solver had spent considerable time thinking through the challenge’s problem and crafting a thoughtful solution. For an RTP (Reduce-to-Practice) challenge like Habitat’s, some submissions included CAD drawings; one even included the results of a shake-table test that simulated an earthquake. The longest was 62 pages! For an Ideation challenge like World Vision’s, many submissions were more general, but a number included complete engineering drawings and instructions for the proposed solution. I was truly amazed by the effort so many people around the world put into contributing their expertise to solving these humanitarian challenges.


At both Habitat for Humanity and World Vision, the challenge owner was asked to define three initial-review criteria for filtering the submissions down to the ones with the “most valuable” ideas. This is the vital step in the process where the Seeker works out how to review the submissions efficiently and drive the project to success. In the cases of Habitat and World Vision, their teams distilled their objectives into big-picture questions such as: “How much does it cost?”, “How scalable is the solution?”, and “What expertise is required for implementation?”


Each challenge had 4-5 initial screeners who went through all of the high-quality submissions and scored each one against the initial-review criteria. Each judge read each submission carefully and scored it from his or her own perspective. In my experience, it took an average of 11-13 minutes per submission to read it, look at the attached drawings and exhibits, and truly understand what the Solver was proposing. Reading the submissions was a genuinely eye-opening process of discovery; I was frequently amazed by the creativity of so many unique Solvers.
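For a rough sense of the reading effort involved, here is a hypothetical back-of-the-envelope estimate in Python. The assumption that every screener read every forwarded submission is mine, not a stated fact about either challenge.

```python
# Back-of-the-envelope estimate of initial-screening effort (assumptions are mine).
forwarded = {"Habitat for Humanity": 44, "World Vision": 72}  # submissions sent to judges
screeners = 4          # low end of the 4-5 screeners per challenge
minutes_per_read = 12  # midpoint of the 11-13 minutes per submission

for name, count in forwarded.items():
    judge_hours = count * screeners * minutes_per_read / 60
    print(f"{name}: roughly {judge_hours:.0f} judge-hours of reading")
```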


The InnoCentive system averages the individual judges’ scores and outputs a sorted ranking of the submissions from this step in the process. In each challenge, 40-50% of the submissions scored very highly against the initial-screening criteria (above 4.0 on a 5-point scale; see Figure 2 for an example of the score distribution). This step therefore reduces the submissions to a more manageable set that can progress to the next stage of professional review. Narrowing the set to the submissions that largely meet the initial-screening criteria makes the best use of the professional judges’ time and drives toward the best solutions.

Figure 2. Example distribution of initial-screening scores.
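To make the aggregation step concrete, here is a minimal sketch of how the averaging, ranking, and 4.0 cut-off could work. This is not InnoCentive’s actual code; the submission IDs and scores are invented for illustration.

```python
# Hypothetical sketch of the score-aggregation step; data is invented.
from statistics import mean

# Each submission maps to the scores (on a 5-point scale) it received from the judges.
judge_scores = {
    "SUB-001": [4.5, 4.0, 5.0, 4.0],
    "SUB-002": [3.0, 3.5, 2.5, 3.0],
    "SUB-003": [4.0, 4.5, 4.0, 4.5],
}

# Average each submission's scores and sort from highest to lowest.
ranking = sorted(
    ((sub_id, mean(scores)) for sub_id, scores in judge_scores.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

# Only submissions averaging above 4.0 advance to professional review.
for sub_id, avg in ranking:
    status = "advance" if avg > 4.0 else "hold"
    print(f"{sub_id}: average {avg:.2f} -> {status}")
```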


Analysis


From one perspective, a count of 44 (Habitat) or 72 (World Vision) high-quality submissions might sound like too few; why can’t the global crowd produce a higher number? But after reading all of the submissions in each challenge, I can say that focusing on this number is the WRONG emphasis. Since our objective in each challenge is to solve the stated problem, the RIGHT focus is the creativity and diversity of the ideas.


In both challenges, the global crowd hit it out of the park on these criteria. The Habitat challenge drew high-quality ideas that fall into six substantially different categories. I am eager to hear industry experts’ feedback on which ones have already been tried and which are truly new. The World Vision challenge likewise received a broad diversity of ideas.


The end goal of each challenge is to receive a previously untried idea that can be implemented in the field to improve the lives of tens or hundreds of thousands of people. This is an audacious goal, and one should expect it to be difficult to reach. However, nothing about the results so far suggests that it will be impossible.


Stay tuned for my next blog update!


Footnotes

[1] The Habitat for Humanity (www.habitat.org) RTP challenge was launched on 7 October 2020 and closed on 5 January 2021. It had 267 registered Solvers and received 82 submissions.

[2] The World Vision (www.worldvision.org) Ideation challenge was launched on 14 October 2020 and closed on 12 January 2021. It had 525 registered Solvers and received 126 submissions.
