Humanitarian Crowd-Solving Case Study: IRC 2023-2025
- harry2180
- Sep 23
- 9 min read
Summary: This document is a case study of the humanitarian crowd-solving project implemented by the International Rescue Committee (“IRC”) from October 2022 through June 2025. SeaFreight Labs served as Project Advisor for the project and is the author of this case study.
An earlier SeaFreight Labs case study on humanitarian crowd-solving, covering our first eight open-innovation challenges (from 2022), is HERE.
Contents
Project Overview
Strategy Highlights
Organization & Management Strategy
Problem Selection Strategy
Prize Strategy
Marketing Strategy
Judging Strategy
Metrics and Process
Duration
Registration Activity, Submission Timing and Yield
Management Time Required
Follow-on Activity
1. Project Overview
The International Rescue Committee published eight open-innovation challenges on the Innocentive platform between June 2023 and February 2025. All eight challenges awarded prize money to one or more solvers – a 100% success rate. A summary of the challenges is in Figure 1.

The first group of challenges (#1-4) sought discrete solutions to long-standing IRC areas of focus. Project #1 was related to safety in refugee settlements. Project #2 dealt with improving the efficiency of humanitarian logistics. Project #3 came from a farming context but could apply to other domains as well. And Project #4 pertained to medical clinics.
The second group contains the idea-seeking challenges. These offered guaranteed prize money to solicit a wide range of ideas on a specific IRC topic.
The challenge summary is not strictly chronological; it combines logical and chronological groupings. For example, “#1a. Female Latrines” was the first challenge launched by the IRC, in June 2023. This was an idea-seeking challenge that attracted great interest. The IRC wanted to dig deeper into one specific set of submissions, so it ran a follow-on challenge in February 2024 to solicit phosphorescent-lighting prototypes. This follow-on is called “#1b” in this report.
A similar thing happened with “#3a. Fossil-Fuel-Free Irrigation”. In this case, the IRC needed help validating an idea contained in one of its submissions, so it ran a ‘mini-challenge’ to solicit vendors for a specific technology. This is “#3b. Stirling Engine Vendor Search”.
Visit the IRC showcase (Figure 2) to read each IRC press release celebrating the winning solvers for each challenge. From each press release, you can hyperlink to the original challenge statement.

2. Strategy Highlights
2a. Organization & Management Strategy
The IRC had a 2-person team assigned to lead this effort. The senior person had HQ and field experience and a wide range of contacts throughout the organization. The junior person was relatively new to the IRC.
2b. Problem Selection Strategy
The IRC had no experience with crowd-solving prior to this project[1]. Therefore, the organization needed initial training on how it works and coaching on best practices to achieve success. The original contract was signed in late October 2022. An in-person ‘meet-and-greet’ for the initial team members was held in mid-November 2022. An ‘intro-to-crowd-solving’ workshop was held in two 2-hour sessions in late November 2022. Then, the search for the initial problems began.
It is vital for overall project success that the first challenge launches smoothly and eventually awards a prize. This objective drove the IRC team to reach out to colleagues all over the world in many different departments and functions. The team considered many possible problems until, in March 2023, it finally found a couple with the right characteristics for effective crowd-solving[2]. Detailed challenge design took a few additional months, but the team was on its way to success.
I think the key action for the IRC project team was finding ways to engage field colleagues in project-leadership and technical-leadership roles and ask them about their day-to-day problems. Hearing those colleagues describe their problems, and listening with an open mind, is what uncovered the issues that eventually became the challenge topics.
2c. Prize Strategy
The default prize amount for a generic humanitarian challenge is US$25,000. As shown in Figure 1, the IRC varied from this based on the complexity of the challenge, the value of a potential solution for the challenge, and the amount of work that would be required by a solver to submit a valuable solution. The main group of challenges (#1-4) ranged from US$25,000 to US$45,000 while the smaller and/or shorter challenges ranged from US$5,000 to US$15,000.
The Phosphorescent challenge (#1b) had a two-part prize strategy. It offered US$10,000 as a pool for finalists who were invited to submit working prototypes. It then had an additional prize pool of US$35,000 for the winners.
The Fossil-Fuel-Free Irrigation challenge (#3a) did not award its full prize pool because no submission fully met all the published requirements. The winner of this challenge won US$20,000.
2d. Marketing Strategy
See Figure 3 for some key information about the marketing strategy of the IRC.
The marketing of a challenge is critical to its ultimate success. The more people with different backgrounds and experiences who hear of the challenge, the greater the likelihood that someone will have the unique skills and knowledge to submit a valuable solution.
The goal of every challenge was to:
1. Achieve deep penetration of the Innocentive crowd; and,
2. Engage with other crowds of relevant membership and meaningful size.
Regarding the first goal, Innocentive used its weekly newsletter and other social-media channels to highlight each challenge during the entire solicitation period.
Regarding the second goal, SeaFreight Labs helped to recruit marketing partners, including Engineering for Change (“E4C”), HeroX, MIT Solve and Make.com, to greatly expand our reach. Read about a number of these partners at my blog post HERE and read about the diversity of Solvers at my blog post HERE.

A good measure of the effectiveness of outside marketing is the percentage of Solvers who were not members of the Innocentive crowd when the challenge began. Figure 4 shows this information for each IRC challenge. The most obvious way to see the impact of external marketing is to look at challenges 3b and 6. Their non-Innocentive-crowd participation was very low, largely because their short duration prevented IRC engagement with any external marketing partners.
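To make this metric concrete, here is a minimal sketch in Python of the computation. The function name and the example counts are hypothetical, not actual IRC data:

```python
# Share of solvers who were not already Innocentive members when the
# challenge launched -- a proxy for the reach of external marketing.
# The example counts below are hypothetical, not actual IRC data.

def external_solver_share(total_solvers: int, existing_members: int) -> float:
    """Percentage of solvers who joined from outside the Innocentive crowd."""
    if total_solvers == 0:
        return 0.0
    return 100.0 * (total_solvers - existing_members) / total_solvers

# e.g. 150 solvers, 90 of whom were already Innocentive members:
print(f"{external_solver_share(150, 90):.1f}% came from outside the crowd")  # 40.0%
```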

2e. Judging Strategy
The judging process is the Seeker’s activity to winnow down all the submissions for a particular challenge and determine whether any are good enough to receive prize money (read blogs on humanitarian judging HERE and HERE). Figure 5 shows how many submissions remained under consideration at key milestones of the process.

Figure 3 contains some metrics about the IRC judging process. The process consisted of the following steps:
Round 1 – The IRC junior team member and the SeaFreight Labs Project Advisor read every submission and judged each one against a set of evaluation criteria provided by the IRC challenge sponsor and based on the requirements specified in the challenge definition. Reading each submission and its attachments took an average of 15 minutes per submission. We recommend reading the latest submissions first, as these are likely to be the most interesting and of the highest quality. After both people had read each submission, there was a meeting to decide which submissions to promote to the next round and which to reject.
Round 2 – This round typically included 2-5 IRC staff members with domain knowledge about the problem. Each judge used a new set of evaluation criteria with some of the same topics from Round 1, but often with a different emphasis and different weighting. A meeting was held at the end of this round to determine which submissions to move to the next round.
Round 3 – This round contains a smaller number of submissions, so it is easier to involve people outside the core team. It may involve people in more senior roles in the organization and it may also include people from outside the organization. In the IRC’s case, both paths were followed on some challenges. In some cases, no additional winnowing was necessary and the judging stopped at the end of this round.
Round 4 – This round, if it is necessary, can involve testing of prototypes or detailed analysis of a submission. It might also involve the original core team coming together again to discuss the input from prior judging rounds and to review all the information collected from additional interaction with remaining solvers. This round resulted in final decisions on winners and award amounts.
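To make the early winnowing concrete, here is a minimal sketch in Python of a Round 1-style pass: each submission is scored against weighted evaluation criteria, and only those above a cut-off are promoted. The criteria, weights, scores, and cut-off are invented for illustration; the IRC’s actual rubrics are not published:

```python
# Hypothetical Round 1 winnowing: score submissions against weighted
# criteria and promote those above a cut-off. All names and numbers
# here are invented for illustration; they are not the IRC's rubric.

WEIGHTS = {"feasibility": 0.4, "cost": 0.3, "novelty": 0.3}  # assumed weights

submissions = {  # judge scores on a 1-5 scale (hypothetical data)
    "sub-01": {"feasibility": 4, "cost": 3, "novelty": 5},
    "sub-02": {"feasibility": 2, "cost": 5, "novelty": 2},
    "sub-03": {"feasibility": 5, "cost": 4, "novelty": 4},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Weighted average of a submission's per-criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

CUTOFF = 3.5  # only submissions at or above this move to Round 2
promoted = [sid for sid, s in submissions.items() if weighted_score(s) >= CUTOFF]
print(promoted)  # ['sub-01', 'sub-03']
```

Round 2 would repeat the same mechanics with a different set of judges, criteria, and weights, as described above.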
3. Metrics and Process
3a. Duration
A crowd-solving challenge consists of four major phases: design, solicitation, judging, and publicity of a winner (if one is awarded). These phases can vary widely in length depending on the skills and experience of the Seeker, the difficulty of the problem, the complexity of judging and mundane issues like personnel turnover or organizational turmoil.
In the case of the IRC, it took a while for the organization to find a ‘good’ problem. This explains the long lag between the contractual start of the project (22 October 2022) and the launch of the first two challenges in June and July 2023. See Figure 6.

It is also a good practice to run one’s first challenges from beginning to end before running additional challenges. Going through the entire process can educate the team on all the steps necessary for success and give the organization confidence in the entire process. The IRC followed this practice with their first two challenges.
Later challenges (3-6) launched in 2024 had shorter design phases because of the expertise and confidence earned on the first two challenges. See Figure 7.

The length of the judging phase depended on the complexity of the judging and whether outside judges were required to obtain needed expertise. For example, challenge 3b had a very short judging period because it was easy to differentiate between the best and worst submissions. Challenge 1b required extensive field testing of the prototypes so it required nine months to reach a final decision.
3b. Registration Activity, Submission Timing and Yield
Registration Activity
One of the ways to track the interest generated in an Innocentive challenge is to monitor how many people formally register interest in the challenge. Figure 8 shows the registration activity for the 8 IRC challenges plotted against time. The first IRC challenge run, Female Latrines, generated the highest number of registrants, at 208. It achieved this in only 49 days.

There is currently no specific benefit to a Solver for registering for a challenge. In the past, Innocentive held back some challenge information until a Solver had registered but this constraint was not in place during the time period that the IRC ran their challenges.
Submission Timing
No matter how effective your marketing, you are almost certain to receive nearly half of your submissions in the last 3 days of the challenge. See Figure 9. It was our experience on all of the IRC challenges that the best submissions came in during the last 2-3 days. Often, the very best came in the last few hours.

Yield
We define ‘Yield’ in relation to the number of registrants for a challenge. ‘Submission Yield’ is the number of submissions to a challenge divided by the number of registrants. ‘Quality Yield’ is the number of quality submissions divided by the number of registrants. A ‘quality’ submission is one that is complete and not obviously generated by AI. The data for the IRC challenges is below.
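As a minimal sketch, the two yield metrics can be computed as below. The counts in the example are invented for illustration, not data from the IRC challenges:

```python
# 'Submission Yield' and 'Quality Yield' as defined above, expressed as
# percentages of registrants. The counts are hypothetical, not IRC data.

def submission_yield(submissions: int, registrants: int) -> float:
    """Submissions received per registrant, as a percentage."""
    return 100.0 * submissions / registrants if registrants else 0.0

def quality_yield(quality_submissions: int, registrants: int) -> float:
    """Complete, non-AI-generated submissions per registrant, as a percentage."""
    return 100.0 * quality_submissions / registrants if registrants else 0.0

registrants, subs, quality_subs = 200, 40, 25  # hypothetical challenge
print(f"Submission Yield: {submission_yield(subs, registrants):.1f}%")  # 20.0%
print(f"Quality Yield: {quality_yield(quality_subs, registrants):.1f}%")  # 12.5%
```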
3c. Management Time Required
The IRC project team of two people was involved with the project from beginning to end over a period of 2+ years. The work was not full-time for either person.
The Project Advisor from SeaFreight Labs spent about 400 hours supporting the IRC project. Figures 10 and 11 show the allocation of time by project and by task type for the Project Advisor role.
The IRC did not track the time of its people during the project.


4. Follow-on Activity
At the time of writing, a number of the completed challenges have led to additional development efforts to move the winning ideas forward. The specifics of the work are confidential to the IRC. However, I can report that field testing and/or further design and development is underway for:
· #1 – Lighting for female latrines
· #2 – Last mile packaging
· #3 – Fossil-fuel-free irrigation
Other follow-on work is in the planning stages.