
The Art & Science of Judging a Humanitarian Challenge

Updated: Jan 13, 2023


My blog has been quiet over the past couple of months because each of the closed Habitat for Humanity and World Vision challenges has required multiple rounds of judging to determine the winning submissions. The volume of quality submissions in each of the four challenges (two per organization) has required iterative review and discussion to winnow the list down to the eventual winners.


The five-stage evaluation process is summarized below.


1. Initial pre-screening.

a. Objective: Remove all incomplete and spurious submissions so only “quality submissions” are reviewed by the project team.

b. Process: This step is performed by InnoCentive immediately after the challenge closes to the public. The InnoCentive Design Consultant reviews all submissions and eliminates any that are not professional or complete.


2. First-round evaluation.

a. Objective: Reduce the set of submissions to include only those that have true merit.

b. Process: The project leadership team defines a small set of initial evaluation criteria and weights them by their relative importance. For example, one challenge used: a) Solution meets technical requirements (40%); b) Materials are locally available (30%); and c) Cost-effectiveness (30%). Another challenge used: a) Creativity (25%); b) General feasibility (25%); c) Completeness of proposal (25%); and d) Technical viability. A third challenge used: a) Has the Solver provided a detailed description of the proposed technology? (40%); b) Has the Solver provided rationale to support the proposed solution? (30%); and c) Does the solution include any proof-of-concept data? (30%).


In each case, a small group of project-team members (3-5 individuals) reviews all of the quality submissions and scores each one on a 1-5 scale (5 being the highest score) against each of the initial evaluation criteria. The weighted scores produce a ranked list of submissions with the most interesting at the top of the list.
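To make the weighting concrete, here is a minimal sketch (in Python) of how weighted 1-5 scores can be rolled up into a ranked list. The criteria and weights follow the first example above; the submissions and scores are hypothetical, and the project teams may have combined reviewers' scores differently.

```python
# Illustrative sketch only: the submissions and scores below are hypothetical.
# It shows how weighted 1-5 scores can be combined into a ranked list.

# Criteria weights from the first example challenge (fractions summing to 1.0).
WEIGHTS = {
    "meets_technical_requirements": 0.40,
    "materials_locally_available": 0.30,
    "cost_effectiveness": 0.30,
}

# Hypothetical 1-5 scores from a single reviewer for three submissions.
scores = {
    "submission_A": {"meets_technical_requirements": 4, "materials_locally_available": 5, "cost_effectiveness": 3},
    "submission_B": {"meets_technical_requirements": 5, "materials_locally_available": 2, "cost_effectiveness": 4},
    "submission_C": {"meets_technical_requirements": 3, "materials_locally_available": 4, "cost_effectiveness": 5},
}

def weighted_score(criterion_scores):
    """Combine per-criterion 1-5 scores into a single weighted score."""
    return sum(WEIGHTS[criterion] * score for criterion, score in criterion_scores.items())

# Rank submissions from highest to lowest weighted score.
ranked = sorted(scores.items(), key=lambda item: weighted_score(item[1]), reverse=True)

for name, criterion_scores in ranked:
    print(f"{name}: {weighted_score(criterion_scores):.2f}")
```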


A complete reading averaged 10-15 minutes per quality submission. With 22 to 71 quality submissions on these four challenges, the reading time ranged from 5 to 18 hours per reviewer, per challenge. The intensity of the work meant it had to be done in chunks, with breaks after an hour or two, so the process required a number of days of calendar time per challenge.


3. Second-round evaluation.

a. Objective: Stringently evaluate each submission to select a small group of ‘finalists’. The challenge winner(s) will be selected from the finalist group.

b. Process: The processes for the different challenges diverged at this step, depending on the number and variety of submissions. One challenge went directly to the next step.


The challenges that needed this step all looked to expand the number of judges and to diversify the judges' skills and perspectives. In one case, the challenge team recruited five judges with specific engineering skills from partner organizations. In another case, the team recruited six national-level experts in the challenge domain. A third challenge recruited five judges from inside the global organization who were not part of the project team.


When judges came from outside the organization, the external experts participated in determining the evaluation criteria for the second round. When judges came from inside the organization, the project team provided the evaluation criteria AND advice for scoring so that scoring scales would be standardized across judges. This guidance was delivered in a matrix format, as illustrated below.
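As an illustration of what such a scoring-guidance matrix can look like, here is a hypothetical sketch: each criterion maps selected points on the 1-5 scale to an anchor description, so that every judge means the same thing by a 1, 3, or 5. The criteria and wording below are invented for illustration and are not the actual guidance used by the project teams.

```python
# Hypothetical scoring-guidance matrix: rows are criteria, entries are anchor
# descriptions for selected points on the 1-5 scale. Wording is illustrative only.
SCORING_GUIDANCE = {
    "Technical viability": {
        1: "Concept conflicts with basic engineering constraints.",
        3: "Plausible, but key technical risks are not addressed.",
        5: "Sound approach supported by calculations or data.",
    },
    "Local availability of materials": {
        1: "Relies almost entirely on imported materials.",
        3: "Mix of locally available and imported materials.",
        5: "All materials are commonly available locally.",
    },
}

# A judge consults the anchors, then records one 1-5 score per criterion.
print(SCORING_GUIDANCE["Technical viability"][3])
```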


Involving additional people in the evaluation process added calendar time to the judging effort; the added time ranged from 1-2 months.


4. Engagement with finalists.

a. Objective: Gather more information about the proposal of each finalist.

b. Process: Two of the four challenges asked finalists to provide additional information about their submissions. One of these challenges asked the finalists to build a proof-of-concept prototype; the other asked finalists to provide a demonstration video of their concept.


Not all finalists were willing or able to respond to these requests. Those who could respond typically needed 6-8 weeks to do so, and COVID-19 extended the timelines Solvers needed.

5. Third-round evaluation.

a. Objective: Select one (or more) winners of the challenge.

b. Process: At the time of writing, only one of the challenges has selected its winners. In that case, the winning Solvers are currently being vetted by InnoCentive before a public announcement is made. The other challenges are still in the final evaluation process but expect to select winners and make public announcements in the coming weeks.


The process of eliminating finalists is a difficult one. Each finalist provided a strong submission, but prize money can only be awarded to one (or a couple of) Solver(s), so the project team needs a way to rank the finalists. In one case, after much discussion, the team used the question “Which finalist would I be able to effectively field-test?” to focus the evaluation on the true end-goal: impacting communities.


This step took about 1 week in the one case where it has been completed.


All told, the end-to-end evaluation process for each of these four challenges will take 4-6 months. That is, the elapsed calendar time from the closing date for challenge submissions until a possible public announcement of winners is about the same as the calendar time spent designing the challenge and soliciting submissions.


Analysis


First, it is important to note that the evaluation process took place during a period of high COVID-19 infection rates and deaths. The evaluation in India had to stop for a while because of the impact of COVID-19 on the families of project leaders and participants. Without COVID-19, a number of the project delays would not have occurred.


It is highly likely that a commercial organization would be able to dramatically accelerate this timeline. If a company engages in crowd-solving because it needs an answer to a problem, it will not tolerate the delays that affected these humanitarian challenges. Instead, it will actively work through each phase, select a winner, and engage with that winner to put the solution to use immediately. My guess is that a commercial organization would need only 2-4 weeks of elapsed time to review and rank submissions and select a winner.


It is also quite likely that a humanitarian organization could accelerate the timeline by assigning dedicated resources to the process. In the case of these four challenges, the crowd-solving initiative was a side-project and did not get top priority. Changing that positioning could significantly accelerate the evaluation process.
