Optimizing scientific reviewer assignments

Peer review sits at the center of how science is formalized and propagated: it shapes knowledge generation through its impact on dissemination, credit assignment, promotion, and funding. Yet scientists rarely treat the review process as something to be engineered. The match between reviewers and manuscripts has seldom been analyzed and has never been optimized to minimize bias and variance. Kording and his team will analyze a dataset of manuscripts, reviewers, and editor ratings for more than 8,000 neuroscience papers submitted to a major journal (PLOS ONE). Their goals are to quantify bias and variance, propose ways to minimize them, and optimize the review process with respect to other criteria, such as research novelty and target audience. Ultimately, improving reviewer assignment could make knowledge generation more efficient.
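One simplified way to picture the "bias and variance" the project aims to quantify: treat a reviewer's bias as their mean signed deviation from the consensus score on each manuscript they rated, and their variance as the spread of those deviations. The sketch below uses hypothetical ratings and a deliberately simple consensus (the mean across reviewers); it is an illustration of the idea, not the team's actual method.

```python
from statistics import mean, pvariance

# Hypothetical data: reviews[reviewer][manuscript] = score on a 1-10 scale.
reviews = {
    "r1": {"m1": 7, "m2": 4, "m3": 8},
    "r2": {"m1": 5, "m2": 2, "m3": 6},
    "r3": {"m1": 6, "m2": 3, "m3": 7},
}

# Consensus score per manuscript: mean over every reviewer who rated it.
manuscripts = {m for scores in reviews.values() for m in scores}
consensus = {
    m: mean(r[m] for r in reviews.values() if m in r) for m in manuscripts
}

def reviewer_bias(reviewer):
    """Mean signed deviation from consensus: positive = lenient, negative = harsh."""
    devs = [score - consensus[m] for m, score in reviews[reviewer].items()]
    return mean(devs)

def reviewer_variance(reviewer):
    """Spread of a reviewer's deviations around their own bias (inconsistency)."""
    devs = [score - consensus[m] for m, score in reviews[reviewer].items()]
    return pvariance(devs)
```

In this toy data, r1 scores every manuscript one point above consensus (bias +1, variance 0) and r2 one point below (bias -1, variance 0), so correcting for bias would reconcile their ratings; a high-variance reviewer, by contrast, is noisy in a way no constant offset can fix.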