The Atlantic recently posted a fascinating article looking at the problem of reproducibility in the field of psychology: "Online Bettors Can Sniff Out Weak Psychology Studies." So why can't the journals that publish them? Ed Yong breaks down
the new results from the Social Sciences Replication Project, in which 24 researchers attempted to replicate social-science studies published between 2010 and 2015 in Nature and Science—the world’s top two scientific journals. The replicators ran much bigger versions of the original studies, recruiting around five times as many volunteers as before. They did all their work in the open, and ran their plans past the teams behind the original experiments. And ultimately, they could only reproduce the results of 13 out of 21 studies—62 percent.
I've seen plenty of dismal numbers like that in other fields, but what I found fascinating was that alongside their attempts to reproduce these studies, the SSRP team also ran "a 'prediction market', a stock exchange in which volunteers could buy or sell 'shares' in the 21 studies, based on how reproducible they seemed." And it turns out these 206 volunteers collectively predicted that the 21 studies would replicate 63% of the time!
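Yong's article doesn't spell out how the market was run, but research prediction markets like this are often operated with an automated market maker using Hanson's logarithmic market scoring rule (LMSR), where the quoted share price can be read directly as the market's probability estimate. Here's a minimal sketch of that idea for a single "will this study replicate?" question; the class, parameters, and trades are all hypothetical, not taken from the SSRP:

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker for a
    single yes/no question, e.g. "will this study replicate?".
    b is the liquidity parameter: larger b means prices move more slowly."""

    def __init__(self, b=10.0):
        self.b = b
        self.q_yes = 0.0  # net "yes" shares sold so far
        self.q_no = 0.0   # net "no" shares sold so far

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * log(exp(q_yes/b) + exp(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current price of a 'yes' share, interpretable as the market's
        probability that the study will replicate."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Buy `shares` of 'yes' or 'no'; returns the cost charged,
        which is the change in the cost function."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket(b=10.0)
print(market.price_yes())   # starts at 0.5: no information yet
market.buy("yes", 5)        # a bettor confident the study replicates
market.buy("no", 2)         # a skeptic pushes the price back down
print(round(market.price_yes(), 3))  # final price above 0.5
```

Averaging these final prices across all 21 studies is what gives an overall figure like the 63% the SSRP volunteers produced.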
That's just freaky-close! I want to be in on one of these prediction markets, and I want to see how they would play out in other academic disciplines. Off to learn more!