
Do Crowdsourced Predictions Show The Wisdom Of Humans?

In a March 7, 1907, article published in Nature, Sir Francis Galton described a competition in England in which roughly 800 entrants guessed the weight of a dead ox. While many of the guesses were far off, Galton found that the median guess of 1,207 pounds was only nine pounds, or about 0.8%, off the true weight of 1,198 pounds. Galton concluded that “this result is, I think, more creditable to the trustworthiness of a democratic judgment than might have been expected.”
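
The arithmetic behind that 0.8% figure is simple to verify. Here is a minimal sketch using Galton’s reported aggregate numbers, plus a made-up list of individual guesses (Galton published only the aggregates) to illustrate how a crowd’s median can land near the truth even when single guesses scatter widely:

```python
from statistics import median

# Galton's reported figures: a crowd median of 1,207 lb vs. a true weight of 1,198 lb.
true_weight = 1198
crowd_median = 1207

error_lb = abs(crowd_median - true_weight)   # 9 pounds
error_pct = 100 * error_lb / true_weight     # ~0.75%, which rounds to the 0.8% Galton cited
print(f"Median off by {error_lb} lb ({error_pct:.2f}%)")

# Hypothetical individual guesses (made up for illustration): they scatter widely,
# yet their median stays close to the true weight.
guesses = [950, 1100, 1150, 1190, 1205, 1210, 1230, 1300, 1450]
print(f"Median of hypothetical guesses: {median(guesses)} lb")
```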

Thus, since at least 1907, it has been known that there is an almost paradoxical, hidden truth in the judgment of the crowd. This remarkable insight has been confirmed over and over. In 2011, the Intelligence Advanced Research Projects Activity (IARPA) launched its own competition to identify cutting-edge methods of forecasting geopolitical events. After four years, the Good Judgment Project won the competition, reportedly predicting certain events more accurately than intelligence analysts with access to classified information. The Good Judgment Project made its predictions by aggregating the guesses of non-experts.

Recently, Metaculus, whose co-founders include physicists from Yale and the University of California, Santa Cruz, has been predicting COVID-19-related events and milestones with surprising accuracy. According to Gaia Dempsey, who leads product and partnerships for Metaculus, the platform likewise aggregates the predictions of thousands of site users, who range from forecasting experts to ordinary citizens. One of the keys, Dempsey claims, is that the aggregation algorithms take into account the past accuracy of each user’s predictions. Some users are very good at predicting; others are still learning. Forecasts made by predictors with a track record of prescience are weighted much more heavily. Remarkably, and as is well known in the forecasting community, predictors improve with practice and feedback, suggesting that prediction is a discrete skill that can be learned.
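
Metaculus has not published the exact formula described here, but the core idea of weighting forecasters by their track record can be sketched in a few lines. The example below is an illustrative assumption rather than Metaculus’s actual algorithm: it scores each forecaster’s past predictions with a Brier score (mean squared error against what actually happened) and weights their current forecast by the inverse of that score, so historically accurate predictors count for more.

```python
from typing import List

def brier_score(past_forecasts: List[float], outcomes: List[int]) -> float:
    """Mean squared error between probabilistic forecasts and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(past_forecasts, outcomes)) / len(outcomes)

def weighted_aggregate(current_forecasts: List[float], track_records: List[float]) -> float:
    """Pool current probability forecasts, weighting each forecaster by the
    inverse of their historical Brier score."""
    weights = [1.0 / (score + 1e-6) for score in track_records]
    return sum(w * p for w, p in zip(weights, current_forecasts)) / sum(weights)

# Hypothetical example: three forecasters with different track records
# predict the probability of the same event.
history = [
    brier_score([0.9, 0.2, 0.8], [1, 0, 1]),  # strong track record
    brier_score([0.6, 0.5, 0.4], [1, 0, 1]),  # middling
    brier_score([0.1, 0.9, 0.3], [1, 0, 1]),  # poor
]
print(weighted_aggregate([0.80, 0.60, 0.20], history))  # pooled estimate is pulled toward the best forecaster
```

Real aggregation systems typically add further steps, such as recalibrating the pooled probability, but the weighting step alone captures the intuition Dempsey describes.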

The results tell the rest of the story. A hospital in El Paso recently partnered with the Metaculus Pandemic Project, hoping to answer critical questions about when COVID-19 infections would peak so it could plan its resource allocation. Dr. Richard Lange, the president of Texas Tech University Health Sciences Center El Paso, remarked that the results were “magic.” According to Dr. Lange, the platform predicted that the first peak in infections in El Paso would fall on April 26, a forecast that proved off by only a day, while most models created by experts put the peak of Texas infections in May. The platform also predicted there would be 38 patients in the ICU that day; the hospital recorded 35. The El Paso hospital used this information, together with other data, when deciding whether to rent additional temporary space to house more patients and when forecasting how many other resources, such as ventilators and PPE, would be required. These predictions helped the hospital save significant resources.

Given the surprising accuracy of these models, the remaining questions involve how much legal and institutional resistance they will face. This is familiar territory for anyone who recalls the tortuous path to acceptance that earlier paradigm shifts in data analysis, such as AI, have had to travel. On the question of liability for reliance on the predictions, public health officials should use these models as one source of information alongside other available data when allocating resources, rather than relying blindly on the precision of the numbers. This is similar to the strict legal hurdles that govern an artificially intelligent system’s ability to virtually diagnose a patient: to pass regulatory scrutiny, a licensed doctor must be able to independently check and understand the basis for the AI program’s recommendation and make the final decision. In what appears to be progress on this front, Metaculus’ predictions are being sent to the CDC on a weekly basis through its partnership with the Reich Lab at the University of Massachusetts Amherst.

At a time of such great uncertainty, we are bombarded daily with countless predictions and prophecies about the pandemic curve and about the virus’s disappearance or permanent entrenchment. Given the power of platforms like Metaculus, and the wide divergence among the ever-shifting COVID-19 forecasts of individual experts, perhaps we should put more stock in the wisdom of human intelligence when it is properly aggregated across the masses. Galton began his Nature article with the eerily pertinent observation that “[i]n these democratic days, any investigation into the trustworthiness and peculiarities of popular judgments is of interest.” Everyone has a theory or belief about the course of the virus and the damage it will leave in its wake, but maybe the truth lies somewhere in the middle. Of course, only time will tell. Or, as Sir Francis Galton mused, perhaps these predictions will slightly restore our faith in democratic judgment.
