Re-posted from: National Cancer Institute Blog, Wednesday, May 27, 2015, by Richard P. Moser, PhD, and Kisha I. Coa, PhD
“To historians looking back a hundred years from now, there will be two eras of science: pre-network science, and networked science…Over the next couple of years we have an astonishing opportunity to change and improve the way science is done.” — Michael Nielsen, Reinventing Discovery
The internet has become a ubiquitous way of learning, connecting, and sharing information. Scientists, likewise, have taken advantage of collaborative web technology to accelerate discovery in a new online participatory environment, a phenomenon referred to as Science 2.0. The idea builds on Web 2.0, defined by technologies such as wikis, blogs, and other means of sharing information and collaborating with other users; Science 2.0 refers to the application of these technologies to the scientific arena. Internet-based collaborative research spaces have transformed how research is accomplished and have directly facilitated team science. As Michael Nielsen (2012) so astutely claimed, we are now in the era of networked science, the participative web, with all the opportunities it holds for advancing team science.
Within this participative web, crowdsourcing, a method for obtaining input by soliciting contributions from a large group of diverse stakeholders, often conducted online, has become one important tool for accomplishing tasks and resolving difficult problems. Although crowdsourcing is not a panacea, it has been shown to be effective under the right circumstances: independent opinions from a diverse group of contributors, the ability to aggregate results, and decentralized decision making (Surowiecki, 2005). Within science, there are recent examples of crowdsourcing successfully applied to challenging problems, including predicting how proteins fold and classifying galaxies to understand how they form over time (www.galaxyzoo.org).
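To make the aggregation condition concrete, here is a minimal Python sketch, a toy simulation unrelated to any of the projects named above, in which many contributors give independent, noisy estimates of a hypothetical quantity. Averaging the crowd's answers yields a smaller error than a typical individual's, which is the core "wisdom of crowds" effect.

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 100.0  # hypothetical quantity the crowd is estimating

# Each contributor gives an independent, noisy estimate; diversity is
# modeled (crudely) as individual error drawn from a wide distribution.
estimates = [TRUE_VALUE + random.gauss(0, 15) for _ in range(500)]

crowd_estimate = statistics.mean(estimates)  # simple aggregation
typical_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)

print(f"crowd estimate: {crowd_estimate:.1f} "
      f"(error {abs(crowd_estimate - TRUE_VALUE):.1f})")
print(f"typical individual error: {typical_error:.1f}")
```

If the estimates were not independent (e.g., contributors copying one another), their errors would correlate and the averaging benefit would largely disappear, which is why independence is listed as a condition above.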
With this model in mind, the National Cancer Institute's Behavioral Research Program developed Grid-Enabled Measures (GEM; https://www.gem-measures.org), an online tool that runs on a wiki platform and uses crowdsourcing to address particularly vexing issues in health research, especially within the behavioral research arena (Moser et al., 2011). These issues include the lack of an agreed-upon ontology, the under-utilization of common measures in prospective research studies, and a resulting dearth of harmonized data, that is, data with a common structure, including comparable measures, that can be shared, integrated, and analyzed as a whole. This inability to integrate data is one major reason scientific discovery has been impeded: no cumulative knowledge base is being created (Curran & Hussong, 2009).
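As a concrete illustration of what "common structure, including comparable measures" means in practice, the sketch below pools records from two hypothetical studies that measured depression with different instruments. The field names and the crude linear rescaling used to make scores comparable are illustrative assumptions only, not GEM's actual harmonization method; real harmonization requires validated psychometric crosswalks.

```python
# A minimal sketch of data harmonization: two hypothetical studies measured
# depression with different instruments and scales, so their records must be
# mapped to a common structure before they can be pooled and analyzed.

def rescale(value, old_min, old_max, new_min=0.0, new_max=1.0):
    """Linearly map a score onto a common range (illustrative only)."""
    return new_min + (value - old_min) * (new_max - new_min) / (old_max - old_min)

# Hypothetical source records with study-specific field names and scales.
study_a = [{"pid": 1, "phq9_total": 12}]           # PHQ-9 total: 0-27
study_b = [{"subject": "B-07", "cesd_score": 30}]  # CES-D total: 0-60

# Harmonized schema: one participant ID field, one comparable 0-1 score,
# plus provenance so analysts know which instrument produced each value.
pooled = (
    [{"participant_id": r["pid"], "source_measure": "PHQ-9",
      "depression_score": rescale(r["phq9_total"], 0, 27)} for r in study_a]
    + [{"participant_id": r["subject"], "source_measure": "CES-D",
        "depression_score": rescale(r["cesd_score"], 0, 60)} for r in study_b]
)

for row in pooled:
    print(row)
```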
In response to user needs, GEM has developed workspaces, virtual places to collaborate with others to gain consensus on the 'best' measures of specific constructs (e.g., depression, anxiety, quality of life) to use in prospective data collection. Publicly available for anyone to use, these workspaces provide collaborative tools and a forum to discuss issues related to data harmonization efforts, and colleagues can be invited to participate, creating a virtual community of researchers.
GEM workspaces are available for anyone to access and use, and the tool has been used successfully in several investigator-initiated collaborative projects, including measurement consensus efforts on psychosocial measures for electronic health records (Estabrooks et al., 2012), survivorship care planning (Parry et al., 2015), and dissemination and implementation research (Rabin et al., 2012). Though GEM focuses primarily on self-report measures of behavioral and social science outcomes, such as physical activity and smoking, it also includes anthropometric measures (e.g., waist circumference) and measures from diverse content areas such as diabetes, sexual behavior, substance abuse, and cardiovascular disease.
GEM users can provide feedback in the form of ratings and comments and participate in online discussions. These data can be compiled into reports for users that include both quantitative data (e.g., mean ratings of measures and their standard deviations) and qualitative data (i.e., comments and discussion threads) to help drive consensus on the best measures. The crowdsourcing aspect enables input from a wide range of stakeholders, typically researchers and clinicians, though anyone with a stake in the measures can contribute.
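A minimal sketch of what compiling such a report might look like, assuming hypothetical ratings and comments; GEM's actual data model and report format are not shown here.

```python
from statistics import mean, stdev

# Hypothetical user feedback on two candidate measures of one construct.
ratings = {
    "Measure A": [5, 4, 4, 5, 3],
    "Measure B": [3, 2, 4, 3, 3],
}
comments = {
    "Measure A": ["Brief and well validated.", "Works in clinical samples."],
    "Measure B": ["Too long for survey use."],
}

# Quantitative summary: mean rating and standard deviation per measure,
# sorted so the report surfaces the current front-runner first.
print("Measure        n   mean    sd")
for name, scores in sorted(ratings.items(),
                           key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{name:<12} {len(scores):>3} {mean(scores):>6.2f} {stdev(scores):>5.2f}")

# Qualitative summary: the discussion content behind the numbers.
for name, notes in comments.items():
    print(f"\n{name}:")
    for note in notes:
        print(f"  - {note}")
```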
This process can be seen as a 'bottom-up' approach to gaining consensus on measures, in contrast to efforts that take a more 'top-down', expert-driven approach. The wiki aspect of the tool encourages collaboration, as users enter (and potentially edit) information about existing constructs and measures, including definitions and other metadata. Over time, as users contribute to GEM, there will be inevitable tension as scientists grapple with different ways to define important constructs and provide feedback on related measures. But science will ultimately advance through these efforts toward understanding and resolving differences.
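For illustration, a simplified data model for wiki-style construct and measure entries might look like the following. The field names and edit-history mechanism are assumptions for the sketch, not GEM's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    items: int
    metadata: dict = field(default_factory=dict)  # e.g., scale range, language

@dataclass
class Construct:
    name: str
    definition: str
    measures: list = field(default_factory=list)
    history: list = field(default_factory=list)   # who changed the entry, and what it said

    def edit_definition(self, user: str, new_definition: str) -> None:
        """Record the previous definition before replacing it, wiki-style."""
        self.history.append((user, self.definition))
        self.definition = new_definition

# Hypothetical usage: a construct entry is created, annotated, then revised.
depression = Construct("Depression", "Persistent low mood and loss of interest.")
depression.measures.append(Measure("PHQ-9", items=9, metadata={"range": "0-27"}))
depression.edit_definition("user42", "A mood disorder marked by persistent sadness.")
print(depression.definition, "| prior versions:", len(depression.history))
```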
We encourage anyone who wants to learn about measures related to their research interests, or who wants to contribute feedback about constructs and measures, to use GEM. We also welcome inquiries from any individual or group that would like to leverage GEM workspaces as a tool to gain consensus on the best measures to use in prospective data collection. Please contact us for more information, and visit GEM at: https://www.gem-measures.org.
References
- Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: The simultaneous analysis of multiple data sets. Psychological Methods, 14, 81-100.
- Estabrooks, P. A., Boyle, M., Emmons, K. M., Glasgow, R. E., Hesse, B. W., Kaplan, R. M., Krist, A. H., Moser, R. P., & Taylor, M. V. (2012). Harmonized patient-reported data elements in the electronic health record: Supporting meaningful use by primary care action on health behaviors and key psychosocial factors. Journal of the American Medical Informatics Association, 19, 575-582.
- Hesse, B. W. (2008). Of mice and mentors: Developing cyber-infrastructure to support transdisciplinary scientific collaboration. American Journal of Preventive Medicine, 35(2, Suppl.). PMID: 18619404.
- Moser, R. P., Hesse, B. W., Shaikh, A. R., Courtney, P., Morgan, G., Augustson, E., Kobrin, S., Levin, K. Y., Helba, C., Garner, D., Dunn, M., & Coa, K. (2011). Grid-Enabled Measures: Using Science 2.0 to standardize measures and share data. American Journal of Preventive Medicine, 40(5, Suppl. 2), S134-S143.
- Nielsen, M. (2012). Reinventing discovery: the new era of networked science. Princeton: Princeton University Press.
- Parry, C., Beckjord, E., Moser, R. P., Vieux, S., Padgett, L. S., & Hesse, B. W. (2015). It takes a (virtual) village: Crowdsourcing measurement consensus to advance survivorship care planning. Translational Behavioral Medicine, 5, 53-59.
- Rabin, B. A., Purcell, P., Naveed, S., Moser, R. P., Henton, M. D., Proctor, E. K., Brownson, R. C., & Glasgow, R. E. (2012). Advancing the application, quality and harmonization of implementation science measures. Implementation Science, 7, 119.
- Surowiecki, J. (2005). The wisdom of crowds. New York: Random House.