This community is in archive. Visit community.xprize.org for the current XPRIZE Community.

What should the cohort size be?

Roey Posts: 160 XPRIZE
edited March 2020 in Prize Design
We're thinking of challenging teams competing in a Rapid-Response Workforce XPRIZE to develop and deploy a scalable solution to train 100 low-skilled individuals in 100 days for no more than $100 each.

Is 100 the right cohort size?

Comments

  • NickOttens Posts: 899 admin
    @mescobar, @shurder, @dshap54, what is your opinion?

    Do you think successfully upskilling 100 workers would prove that the solution is scalable? Should it be more, or can it be fewer?
  • dshap54 Posts: 4
    To me, the question of scale is across at least two dimensions. If you're trying to prove you can do something at high volume in one industry and one community, that's scale and it may/may not transfer. The other is that you've got something you think works across communities, industries, demographics of employees. Would have to be designed differently for those two kinds of solutions. Both are great and just different.
  • NickOttens Posts: 899 admin
    In the latest version of our prize design, we are considering challenging teams to recruit and retrain at least 1,000 workers in 100 days.

    Teams competing in the prize competition could choose the occupation they're reskilling workers for.

    Do you think the 1,000 trainees goal is audacious enough, too audacious, or just right?
  • NickOttens Posts: 899 admin
    @TiffanyEm, @skalloch, @Ashleykae, @dblakels, do you think these numbers strike the right balance between audacious and achievable?

    We want an XPRIZE to set an ambitious goal that is hard to achieve in order to move the needle, but we also want teams to have a reasonable shot at winning the competition. We'd value your opinion on this!
  • SDhillon Posts: 7 ✭✭
    @NickOttens Hi, Nick. 1,000 may be too audacious if you are talking about doing this during the pandemic, when only essential gatherings are allowed.

    As an addition to the Pandemic Alliance, I can see this as a great add-on to recruit teams for mid- to long-term pandemic projects.

    My opinion is that allowing screening on educational attainment and income at present will let teams pigeonhole their participants into the easiest categories to work with. For the program to work, you need candidates who cross the boundaries of income and education. By not allowing that screening, you could administer a test to gauge each participant's ability to learn and, at the same time, cast a much wider net that covers more of the target population groups.
  • Ed_Larson Posts: 1 ✭✭
    Hey Nick, I don't think that 1,000 trainees is too audacious in the second phase of the competition, since we are most likely talking about an online training program. I would suggest, though, that a raw number does not necessarily indicate the level of success of the training. The scoring should probably include metrics such as the completion rate, the percentage of trainees who found a job in the targeted discipline, and, of course, their length of time on the job.

    I like the way that the adult literacy XPRIZE was done. Applicants signed up and then were randomly assigned to an app. That way all the teams are pulling from the same pool of applicants and there's a better chance of an equal opportunity for success (or failure).
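The shared-pool randomization described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Adult Literacy XPRIZE assignment protocol; the function and app names are made up:

```python
import random

def assign_applicants(applicants, teams, seed=0):
    """Shuffle one shared applicant pool, then deal it out round-robin,
    so every team draws from the same underlying pool."""
    rng = random.Random(seed)          # fixed seed keeps the draw auditable
    pool = list(applicants)
    rng.shuffle(pool)
    groups = {team: [] for team in teams}
    for i, applicant in enumerate(pool):
        groups[teams[i % len(teams)]].append(applicant)
    return groups

groups = assign_applicants(range(1000), ["app_a", "app_b", "app_c", "app_d"])
print({team: len(g) for team, g in groups.items()})
# each app gets 250 applicants, drawn at random from the same pool
```

Because every team's cohort is a random slice of one pool, differences in outcomes can more plausibly be attributed to the training itself rather than to who each team managed to recruit.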
  • NickOttens Posts: 899 admin
    Thank you both for your comments, @SDhillon and @Ed_Larson! We're always especially glad to hear from alumni of previous XPRIZEs.

    I don't think randomly assigning trainees to competing teams would work here, since we're thinking of allowing teams to retrain and upskill workers in an occupation of their choice. But maybe that does get us thinking about what else we can do to ensure fairness.

    (I switched your comments to this discussion, by the way, to have everything related to the cohort size in the same place.)
  • NickOttens Posts: 899 admin
    @Matthew_Poland, @Diane_Tavenner, do you have any thoughts on whether retraining 1,000 workers in 100 days is too many, too few, or just about right?
  • Matthew_Poland Posts: 4
    Hi @NickOttens - I agree with the comments above that the raw number who complete a training may be less important than metrics that indicate success for trainees afterward, like finding a job in the industry. Another success metric could be related to an increase in wages, as was done with JFF Labs' $1 Billion Wage Gain Challenge.

    It may make sense to focus on developing remote work for these individuals too given that "in-person" jobs are plummeting right now and aren't likely to come back anytime soon.
  • Diane_Tavenner Posts: 2
    Perhaps the way to get to the right number is by asking:

    For whom
    Under what circumstances
    To what end

    As others have said, the measures of success, which will define the "to what end," will matter. If you can train people to get a better-paying job, or a job with better prospects, in 100 days, I would say doing so for 100 is just as meaningful as doing so for 1,000.

    Under what circumstances matters to me on this question because there are at least two considerations -- what does it take to recruit 1,000 vs 100 people, and what does it take to train the different group sizes?

    Given that the "who" is low-skilled workers, which generally correlates with folks who struggle to build new skills, the different group sizes will drive the solutions. It is possible to create a concierge-style learning model for 100, but not for 1,000. So if you are looking for proof of scalability, I would say you would want 1,000 to drive the solution set.
  • HeatherSutton Posts: 77 XPRIZE

    Given that the "who" is low-skilled workers, which generally correlates with folks who struggle to build new skills, the different group sizes will drive the solutions. It is possible to create a concierge-style learning model for 100, but not for 1,000. So if you are looking for proof of scalability, I would say you would want 1,000 to drive the solution set.

    @Diane_Tavenner - You nailed it! True that it would not be a challenge to create a concierge-style model for such a small cohort and that if we want to think scale, we should think beyond such a model. Thank you so much!

    And big thank you to the rest of the commenters too!
  • feskafi Posts: 8
    I definitely think 1,000 is an audacious goal. But it's something we should strive for. This is a particularly great time to do so.

    I do believe it should be done in phases, though, and not all at once (i.e., 100, then 500, then 1,000). Frankly, maybe in the final phase we can go to 10,000 instead of 1,000.

    I suggest a process similar (but not identical) to a clinical trial. Even if you believe your drug will cure cancer, you don't give it to all patients at once. You control the process; you test, iterate, and learn. In our case, we are asking a certain number of trainees to trust and have faith in us that they will get a better job in 100 days or so. There is a good chance that the experiment might fail. It's better to fail on 100 people instead of 1,000.
  • NickOttens Posts: 899 admin
    @erdavis910, @mannyluong, I'd like to ask your insight on this question as well.

    Is challenging teams competing in a future of work prize to retrain and up-skill 1,000 workers both fair and audacious? Would it be enough for proof of concept?
  • NickOttens Posts: 899 admin
    Thank you all for your feedback here!

    Based on your comments, as well as our team's research on current market offerings (like online coding bootcamps), we're planning to reduce the cohort size to 500.
  • boblf029 Posts: 35 ✭✭
    Assuming I have all the facts about the program, I think it needs to be rethought from the ground up. The classic work on this kind of project is Donald Campbell's "Reforms as Experiments," written about fifty or sixty years ago. The basic idea is that you design the full-scale project based on a pilot project large enough to debug it and get outcome data that is statistically valid. That means you need to know something about your outcome variables. If you expect the outcomes to be dramatically different for the experimental treatment group, then you need only a small N of cases. Thirty people may be enough, especially in a simple before-after design where the "before" serves as the control. But if the variables are not expected to change much under the treatment, you need a much larger N. Five hundred may be a good number in that case.

    But then you confront another problem: what is the external validity of your study? Let us suppose the trainees are people who have been making toys for children. The toys they have made are beloved by kids, but the manufacturer has decided to make them in a low-cost country. You now have people who have some skills, but those skills are not transferable to another industry. They are only good for toy making, and it is no longer the intent of the company to make toys in America.

    My answer is that we can go two ways. One is to say this is not right: we need to make toys in America, just as the president has decided we need to make steel in America. I actually like this solution. The other is to figure out a product we can make in the United States that utilizes the skills of the people being laid off. Maybe the toys being made are a type of doll. Perhaps the answer is to stop making this type of doll for a mass audience and instead make a doll for the affluent, a kind of collectible. Then you can talk about retraining, because you are not really doing something too extremely new. I doubt, for example, that we could train school social workers to be dental hygienists. But going from making a toy for a large market of low- or middle-income consumers to a toy for the affluent, more collectible than a plaything, is doable.

    On the other hand, if you are thinking you will want to use the insights gained from this experiment for a situation where you are training people to make electronic toys, such as little robots, instead of dolls, then maybe you are in a real bind. An electronic toy, whether a robot Barbie or a machine designed to help handicapped people clean their homes, requires a very different set of skills than a toy like a Barbie doll. And I do not think the external validity of retraining from Barbie dolls to collectible bisque dolls is of much value for training a workforce of robot makers; you need too much background in electronics, automation, and so on. Maybe people who were trained on slide rules can be retrained to use computers, but that is not the same thing. Good luck in any case!
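The sample-size point above (a dramatic expected effect needs only a small pilot; a modest one needs hundreds of cases) can be made concrete with a standard power calculation. This is a minimal sketch assuming a two-group comparison of proportions at 5% significance and 80% power; the placement rates are illustrative, not taken from the prize design:

```python
import math

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96,    # two-sided 5% significance
                z_beta: float = 0.8416):  # 80% power
    """Required sample size per group for a two-proportion z-test
    (normal approximation)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# A dramatic effect (say, a job-placement rate jumping from 20% to 60%)
# needs only a small pilot:
print(n_per_group(0.20, 0.60))  # → 23
# A modest effect (20% to 30%) needs hundreds of trainees per group:
print(n_per_group(0.20, 0.30))  # → 294
```

The two results line up with the comment's intuition: roughly thirty cases suffice when the treatment effect is expected to be dramatic, while something on the order of five hundred is needed when it is not.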
  • vhamilton Posts: 2
    Hi. Yes, 500 or 1,000 is an audacious goal, which I applaud, but it severely limits where the respondents can be. Although remote training is fine, health care workers, for example, need to live someplace where they can get to work. So the high threshold cuts out all rural areas of the country, unless the expectation is that whoever applies is a national organization and the 500 are spread across the country. But then you run into the issue that the most effective job placement comes out of a deep set of relationships with businesses in a region. Maybe this would be an appropriate challenge for a governor somewhere, who could muster a set of businesses willing to hire from the start?