FFCWS recently announced an exciting new project: the Fragile Families Challenge. The Challenge is a mass collaboration combining predictive modeling, qualitative interviews, and causal inference to improve the lives of disadvantaged children in the US. We will give Challenge participants data collected on our focal children from birth through age 9, as well as a training set of child outcomes measured at age 15. The challenge is to use the training data to predict specific outcomes at age 15.
Ian Lundberg, who is working on the Challenge and is a graduate student in the Princeton University Office of Population Research, answered a few questions for us to explain more about the project.
Q. What types of questions does the FF Challenge hope to answer?
The Fragile Families Challenge is our attempt to create a new way of doing social research, one that is much more open to the talents and efforts of everyone. In concrete terms, the central goal is to combine machine learning methods, mass collaboration, and qualitative interviews to improve the lives of disadvantaged children. We expect that by combining ideas from social science and data science, we can, together, help address important scientific and social problems. And we expect that through a mass collaboration we will accomplish things that none of us could accomplish individually. This project will demonstrate a new way of doing scientific research.
In addition, we have three concrete goals:
Discover unmeasured factors: Despite coming from disadvantaged backgrounds, some kids manage to “beat the odds” and achieve unexpectedly positive outcomes. What unmeasured predictors are associated with “beating the odds”? By comparing predictions of how well children will be doing at age 15, based on all measured variables, with how they are actually doing, we can identify children doing much better or much worse than expected. By interviewing these children, we hope to discover new factors associated with child success that no social scientist has yet thought of. You can read more about this goal here.
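The "beating the odds" idea can be sketched in code. The example below uses simulated data, not the FFCWS data, and ordinary least squares stands in for whatever model a participant might submit: predict the outcome from measured variables, then flag the children with the largest residuals as interview candidates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1000 children, 5 measured background variables,
# and a continuous age-15 outcome (e.g., a GPA-like score).
n, p = 1000, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Predict the outcome from measured variables (ordinary least squares here;
# Challenge submissions could use any predictive model).
X1 = np.column_stack([np.ones(n), X])           # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ beta

# Children with the largest positive residuals are doing much better than
# their measured background predicts; those with the largest negative
# residuals are doing much worse. Both groups are interview candidates.
beating_the_odds = np.argsort(residuals)[-10:]
struggling = np.argsort(residuals)[:10]
```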
Prepare for causal inference: Social scientists and policymakers often wish to use empirical data to infer the causal effect of a binary treatment on an outcome. For instance, we would like to know whether being evicted from one’s home in adolescence harms outcomes in early adulthood. In the absence of a randomized experiment, the validity of causal claims depends on strong and untestable identification assumptions as well as researcher modeling decisions for estimation. By collaborating to produce a single community model for the probability of eviction, we will remove modeling decisions from any individual researcher. Further, we will interview targeted children to evaluate the credibility of the identification assumptions. In the end, the Fragile Families Challenge will produce a set of community-generated propensity scores for three binary “treatments” at age 15. After evaluating the credibility of the identification strategy in interviews, we will use these propensity scores to assess the effect of these treatments on outcomes to be measured several years from now, when children are approximately 22 years old. You can read more about this goal here.
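As a rough sketch of what a propensity score is, the example below fits a logistic regression of a binary "treatment" on covariates via gradient descent. The data, sample sizes, and settings are all invented for illustration; the actual community model could use any method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: background covariates X and a binary treatment
# indicator t (e.g., whether the family was evicted).
n, p = 2000, 4
X = rng.normal(size=(n, p))
true_coefs = np.array([0.8, -0.5, 0.3, 0.0])
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ true_coefs)))).astype(float)

# Fit a logistic regression by gradient descent; the fitted probabilities
# are the propensity scores: P(treatment = 1 | covariates).
X1 = np.column_stack([np.ones(n), X])
w = np.zeros(p + 1)
for _ in range(2000):
    probs = 1 / (1 + np.exp(-(X1 @ w)))
    grad = X1.T @ (probs - t) / n
    w -= 0.5 * grad

propensity = 1 / (1 + np.exp(-(X1 @ w)))
```

Downstream, these scores can be used for matching or weighting when estimating treatment effects, provided the identification assumptions hold.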
Compare modeling approaches: During the Fragile Families Challenge, researchers will use a variety of modeling approaches, ranging from traditional social-science models to machine learning methods. We plan to explicitly compare these strategies in terms of their interpretability and predictive performance, in order to assess the trade-offs between these two styles of modeling in a specific empirical context. It is our hope that this comparison will lead to insights about which ideas from machine learning can be fruitfully applied to social science problems where there are thousands, rather than millions, of observations.
Q. Why is the FF data ideal for the Challenge?
The existence of data collected at age 15 but not yet made available to the public is what makes this project possible. It is easy to fit a complex model that performs well on the sample used to fit the model. The bigger challenge is to produce a model that predicts the outcome well in an entirely new set of observations. By keeping half of the observations at age 15 hidden from participants, we can directly compare submissions based on their performance in this “test set.” This would not be possible in other datasets where the outcomes have already been made public. We see the Fragile Families Challenge as a test case for an approach that could be used in many panel datasets in the future.
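The holdout logic can be illustrated with a toy example (simulated data, not the FFCWS data): a model is fit only on the released half of the observations, and its quality is judged only on the hidden half, so memorizing the training sample does not help.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: one noisy predictor x and an outcome y.
n = 200
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)

# Release half the outcomes for training; hold the rest out for scoring.
train = np.arange(n) < n // 2
test = ~train

# Fit a simple slope-only model on the training half...
slope = np.sum(x[train] * y[train]) / np.sum(x[train] ** 2)

# ...and score it on observations the model has never seen.
pred_test = slope * x[test]
test_mse = np.mean((y[test] - pred_test) ** 2)
```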
The FF data are also ideal because they include a remarkably rich set of variables capturing life experiences in a variety of domains throughout the entire course of childhood. Policymakers and researchers may be especially interested in understanding how some children “beat the odds” net of such a wide array of measured background factors.
Finally, improving the lives of urban youth is a broadly shared policy goal. The FF data are particularly well suited to this goal because the sample design selected births in large U.S. cities with an oversample of non-marital births. We believe that conclusions from this sample will generate new knowledge that can lead to important policy improvements to better serve this population.
Q. How do you “win” the Challenge?
No individual “wins” the Fragile Families Challenge; we all win by collaborating in a new way to expand scientific knowledge and improve the lives of disadvantaged children. The goal of the Challenge is to create one community model that combines the best of all the individual submissions. However, to encourage high-quality submissions, we will evaluate submissions based on mean squared prediction error in the held-out test set. In other words, your score will be better if your submission does a good job of predicting outcomes for cases where you have seen the background variables but have not yet seen the outcome. The leaderboard on our submission site will give you instant feedback on your performance on this metric. You can learn more here.
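Mean squared prediction error itself is simple to compute. The sketch below uses made-up numbers, not the actual scoring code, and shows the metric alongside a natural baseline: predicting the mean outcome for every child.

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average squared difference between actual and predicted outcomes;
    lower is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# A submission that predicts the same mean value for every child is a
# natural baseline; better submissions achieve a lower score than this.
y_test = np.array([3.1, 2.4, 3.8, 2.9])        # hypothetical outcomes
baseline = mean_squared_error(y_test, np.full(4, y_test.mean()))
```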
Q. Who can participate?
Anyone can apply to participate! We think some of the best ideas will come from unexpected places. Tell us a bit about your background on our application form, and we’ll get back to you with more information.
Q. What will happen with the models and resulting information that are produced after the Challenge ends?
When the Challenge ends, all models and resulting information will be open sourced so that anyone can use them in scientific research. Anyone who does not want their submission to be open sourced can email us at any time to delete their submission. However, we expect submissions to lead to novel ideas that could be studied in many scientific papers to come. In fact, the entire project is open sourced, to make it easier for others to create similar collaborations in the future.
In addition, we expect that the community predictions will be useful to Fragile Families researchers in general. For instance, the community-generated propensity scores could be used to study the effect of our three binary treatments (eviction, layoff of a caregiver, and job training for a caregiver) on many outcomes to be measured when children are 22 years old. This will become a high-quality resource available to the entire community.
Q. Is there anything else you would like us to know about the Challenge?
The success of the Fragile Families Challenge depends on the participation of a wide variety of researchers who each bring something unique to bear on the question. This project will involve hundreds of collaborators, and we need you to join our team. Apply to participate now!
If you teach a social science, data science, or statistics class, we’d love for you to assign this in your class. If you lead a department, research group, or other group of people who might be interested, we’d also be glad to host a data jam with you. In either case, or if you have any other questions about the project, please email us at email@example.com. We’d love to hear from you!
We are grateful to the Russell Sage Foundation for funding this project. Our work would not have been possible without several open source software packages, notably Codalab and R. Finally, we are grateful to our Board of Advisers for useful feedback.