> If you are evaluating a fusion program, a good question to ask is “does this concept use a steam turbine?”
>
> — Eli Dourado (@elidourado) December 12, 2022
Why? It comes down to the cost of steam.
Does scientific peer review work? The answer seems to be no - or at least not well enough to justify the roughly 15,000 reviewer-years spent each year, when only a fraction of errors are caught and rejected papers can simply be shopped around to other journals.
His analysis of the number of errors caught felt a little weak to me. Although only 25-30% of major errors were detected, the obviously flawed papers were recommended for rejection most of the time (about 60-70%, according to the linked studies). I see a parallel to code review: I might request changes on a pull request early if I notice glaring structural issues.
I liked his analysis of “weak-link” vs “strong-link” domains and why peer review is fundamentally flawed. In a “weak-link” domain, progress depends on the worst work, and peer review makes sense - we should try to weed out bad science. But I’m inclined to agree with him that science is a “strong-link” domain, where bad ideas will eventually flame out while good ideas move us forward.