Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes

Boutron I, Dutton S, Ravaud P, Altman DG. JAMA. 2010;303:2058-2064

 

Summary

Information can be distorted to give readers the impression that something is noteworthy; "spin" is the term used to describe such distortion. The authors' objective was to identify the nature and frequency of spin in published reports of randomized controlled trials (RCTs) with statistically nonsignificant results for primary outcomes. Of 616 reports of RCTs published in December 2006, 72 were included in this analysis. The title was reported with spin in 13 articles (18.0%). Spin was identified in the Results and Conclusions sections of the abstracts of 27 (37.5%) and 42 (58.3%) reports, respectively, with the conclusions of 17 (23.6%) focusing only on treatment effectiveness. Spin was identified in the main-text Results, Discussion, and Conclusions sections of 21 (29.2%), 31 (43.1%), and 36 (50.0%) reports, respectively. More than 40% of the reports had spin in at least 2 of these sections of the main text. The authors concluded that in this representative sample of RCTs published in 2006 with statistically nonsignificant primary outcomes, the reporting and interpretation of findings were frequently inconsistent with the results.

 

Viewpoint

Boutron and colleagues have quantified the spin found in research reports whose primary outcomes were statistically nonsignificant. Although the authors acknowledged that a trial's results could affect how quickly they are published, and in what type of journal, another issue that deserves attention is the perception among many authors that "negative results" are not worth publishing. Aside from deliberate spin to gain a financial and/or competitive advantage, there is the pragmatic motive of trying to "sell" the abstract and paper to journal editors and their reviewers. This desire for a "good story" may be observed among inexperienced or naive authors, editors, and reviewers, and in some cases may be a product of local or national culture. It would be of interest to examine the types of journals in which spin is more frequently encountered. For example, is spin more common in low- vs high-impact-factor journals, in publications with national vs international readerships, or in general vs specialty journals?

A bibliometric analysis of how often spun reports are cited compared with other types of articles would also be of interest. Journal supplements should also be examined (see Citrome L. Citability of original research and reviews in journals and their sponsored supplements. PLoS One. 2010;5:e98). Spin may be an effective strategy for garnering additional citations, be they favorable or critical. This in turn can increase a journal's impact factor and thereby encourage editors and publishers to allow the practice to continue.