| Original language | English |
|---|---|
| Pages (from-to) | 468-469 |
| Number of pages | 2 |
| Journal | Nature Genetics |
| Volume | 45 |
| Issue number | 5 |
| DOIs | https://doi.org/10.1038/ng.2623 |
| State | Published - May 2013 |
| Externally published | Yes |
Research output: Contribution to journal › Letter › peer-review
TY - JOUR
T1 - Crowdsourcing genetic prediction of clinical utility in the Rheumatoid Arthritis Responder Challenge
AU - Plenge, Robert M.
AU - Greenberg, Jeffrey D.
AU - Mangravite, Lara M.
AU - Derry, Jonathan M.
AU - Stahl, Eli A.
AU - Coenen, Marieke J.
AU - Barton, Anne
AU - Padyukov, Leonid
AU - Klareskog, Lars
AU - Gregersen, Peter K.
AU - Mariette, Xavier
AU - Moreland, Larry W.
AU - Bridges, S. Louis
AU - De Vries, Niek
AU - Huizinga, Tom W.
AU - Guchelaar, Henk Jan
AU - Friend, Stephen H.
AU - Stolovitzky, Gustavo
N1 - Funding Information: a public-private partnership between the Consortium of Rheumatology Researchers of North America, Inc. (CORRONA) and the Pharmacogenomics Research Network (PGRN) sponsored by the National Institute of General Medical Sciences (NIGMS) and the US National Institutes of Health (NIH). A unique component of our Rheumatoid Arthritis Responder Challenge is the diversity of participation across a number of groups from academic institutions, private foundations and for-profit companies. In addition to support from CORRONA and PGRN, we received funding from pharmaceutical companies (see complete list on our website; link below) and a private foundation (the Arthritis Foundation) to support the public commons. We also received support from the Arthritis Internet Registry (AIR) and the Broad Institute to generate new genomic data sets, as well as in-kind support from a large number of academic collaborators from across the world to make GWAS data available in the discovery phase. We anticipate that a winning classifier could enable a follow-on prospective clinical trial within the group of appropriately consented patients in AIR. Through Synapse, analysts who are inclined to establish collaborations will have the opportunity to see in real time the models that others are using so that each team can learn from the others (Fig. 1). A leaderboard will show the relative performance ranking of the different teams on the basis of a cross-validation strategy designed to minimize overfitting. During the discovery phase, teams that choose to collaborate with each other will have the opportunity to check each other's algorithms for readability, speed and reproducibility. Then, during the validation phase, each team will submit computer code, which the Sage-DREAM team (http://www.sagebase.org/) will test in Synapse to establish whether it runs as expected to predict whether a subject is an anti-TNF therapy responder or nonresponder on the basis of the GWAS data.
Predefined performance metrics will be used to objectively determine the accuracy of the predictions, their statistical significance and the final performance ranking of the participating teams. The team that develops the most highly predictive model will be deemed the 'winner', with precise attribution of contributor roles going to all members of teams that contributed to building the final consensus model. The best-performing models, therefore, will have passed a test of performance that is outside the realm of, and complements, traditional peer review. Indeed, this stringent test of method performance can be used as an enhanced way of publication vetting, what we call 'challenge-assisted peer review'. Traditional peer review is essential for ensuring the clarity, originality, contextualization and logical thread of a discrete set of work that is ready to be used by researchers in the form of a published article. However, the complexity of working with omics data, entailing multiple analytical decisions, computational simulations and statistical calculations, means that referees are challenged to follow and check the components of even a traditional research paper. In our Rheumatoid Arthritis Responder Challenge, we will explore the feasibility of enhancing the reliability and transparency of conventional peer review in partnership with Nature Genetics. This can be achieved if the referees and authors of the paper reporting on the best-performing methods in the challenge are willing to leave their comments openly (yet anonymously) on the Synapse platform (Fig. 1). We anticipate that the challenge-based assessment of accuracy will provide an objective metric of performance and a comparison with state-of-the-art analytical methodologies that will greatly enhance the task of refereeing a body of work with more quality control than is currently provided by conventional peer review.
In conclusion, we believe that the Rheumatoid Arthritis Responder Challenge is an apt use of crowdsourcing in human genetics to gain insight into clinical prediction and disease biology. Details of the challenge, including the rules by which the models will be judged, can be found at https://synapse.prod.sagebase.org/#!Synapse:syn1734172.
PY - 2013/5
Y1 - 2013/5
UR - https://www.scopus.com/pages/publications/84878568206
U2 - 10.1038/ng.2623
DO - 10.1038/ng.2623
M3 - Letter
C2 - 23619782
AN - SCOPUS:84878568206
SN - 1061-4036
VL - 45
SP - 468
EP - 469
JO - Nature Genetics
JF - Nature Genetics
IS - 5
ER -