Another Voice

Translational Research May Be Most Successful When It Fails

by John P. A. Ioannidis

In this issue of the Hastings Center Report, Jonathan Kimmelman and Alex London argue that in assessing the success of clinical translation, it is narrow-minded to focus only on how many new drugs get licensed and how quickly they achieve licensure.[1] I fully agree that this simplified view of clinical translation tends to increase the temptation to cut corners, lower a bar that is already low, and encourage the adoption of new treatments without sufficiently reliable data on efficacy; we almost never have sufficiently reliable data on safety at the time when drugs get licensed anyhow. It is disappointing that leaders at both regulatory and public research funding organizations are under such pressure to portray biomedical research as a success-story-producing machine.

Tenuous success stories are indispensable for companies to make money from and for the media to sensationalize, but not for science. Kimmelman and London show that clinical translation should be judged on its ability to generate as comprehensive an intervention ensemble as possible for the tested interventions. This may include many "negative" studies and other aspects of trial-and-error in the tortuous, nonlinear process of trying to understand what works and what does not. "Negative" results are very informative. They can correct mechanistic and other "basic" science misconceptions and help define the optima for new interventions in terms of dose, setting, population, and other parameters that shape best use at the clinical and population level.

I would like to extend Kimmelman and London's position in two ways. First, I would argue that in the current environment, failures should be seen not just as acceptable but probably as the most useful outcomes that translational research efforts can offer. Failures are probably more important than successes. Among failures I include studies with "negative" results that show that large lines of preclinical and early clinical investigation are not fruitful and should be abandoned or at least radically modified. I also have in mind later-stage clinical trials with "negative" results that modulate our understanding about which among several already-known interventions may not have as much merit as we thought, such as when we discover that interventions that are already licensed and widely adopted should actually be used in a more limited fashion or should be totally discarded.

Hype is rampant nowadays both in the "basic" biomedical sciences and in clinical research. Several investigative fields are fueled with resources mostly because of inertia, expressed by self-promoting study sections whose members do not want to admit that they would do better to quit their uninformative minutiae. Well-done research showing that such large sectors of investigation are probably fruitless should be promoted and the "negative" results celebrated, since they free up science, scientists, and resources for more interesting and potentially more useful pursuits. Such enhanced accountability would also help make a stronger case to policy-makers and the general public for strengthening the budget for science, since it proves a strong commitment to impartiality.

John P. A. Ioannidis, "Translational Research May Be Most Successful When It Fails," Hastings Center Report 45, no. 2 (2015): 39-40. DOI: 10.1002/hast.429.

Moreover, at the clinical research end of the spectrum, given that adoption of many interventions in medicine in the past has been done with mostly weak requirements for good-quality evidence, it should not be surprising that many of the adopted interventions and practices are not actually effective and that some may be more harmful than helpful. Whenever tested in well-designed trials, about half of already-adopted medical practices are proven to be useless or harmful.[2] These medical reversals should also be cause for celebration. In the current environment, claiming yet another major discovery has become quite boring[3] (millions of papers claim discoveries), and licensing yet another drug is almost certainly going to bring little or no benefit. Translational research efforts may be more useful to patients and public health when they fail than when they succeed.

Second, an intervention ensemble probably cannot be generated with information only about the drug or drugs produced by a single company. For most conditions and diseases, there are already a large number of other interventions whose use is supported or contradicted by various levels of evidence. Many different commercial sponsors may be manufacturing or developing drugs for the same or overlapping indications. The current paradigm evaluates these drugs in isolation. Each one of them has to convince regulators that it is good enough to be licensed. But the essential question is not whether a drug is good enough in an abstract, absolute way, but how it performs relative to other competing options for the same indication. This means that information on comparative effectiveness and comparative harms is of paramount importance for reaching conclusions about whether an intervention should be used or not, how often, and under what circumstances.

Empirical studies show that companies run clinical research agendas that focus entirely on their own products, even though this isolation has nothing to do with reality.[4] Head-to-head comparisons are relatively uncommon in clinical trials. Even worse, head-to-head noninferiority trials (which aim to show that a novel intervention is not clinically worse than a proven one) almost always show favorable results for the experimental intervention, but these results probably do not simply reflect the merits of the experimental interventions. They most likely reflect manipulations in the choice of the study design and analysis in such a way that favorable results (or at least favorable interpretations, allowing for some spin) can be secured.[5] On top of this, heavy infiltration of the evidence-based machinery by the industry (through sponsoring and authoring of meta-analyses, guidelines, and cost-effectiveness analyses) creates a bubble literature that suggests almost all treatments have rosy outcomes and pose little threat to safety.[6] Again, the real needs under these circumstances are trials and integrated views of evidence that show that some treatments are failures and some are more prominent failures than others.

In order to achieve this transformation, we need a very different mindset than the one currently pervasive among regulatory agencies and public funders, which try to justify their existence by claiming they are licensing and supporting more and faster discoveries. Incentives should be in place for celebrating "negative" results; studies that definitively burst "basic" science bubbles; and independent clinical research by stakeholders who are not sponsored by the industry, do not have conflicts of interest, and do not have to apologize for getting "negative" results, if the results are indeed "negative." Regulators, funders, journals, and the general public may all help generate a framework in which both successes and failures are celebrated.

For example, instead of getting money from the industry to maintain their operations, regulatory agencies may request funds from the industry to support the design and conduct of informative trials by nonconflicted researchers.[7] The industry would save money, since companies would no longer need to run all these trials themselves. Public research funders may promote "basic" research that identifies which of the "basic" research avenues have become self-serving and uninformative, and they may channel more of the funding to other avenues. Journals could prioritize the publication of high-quality studies regardless of results and, other things being equal, even prioritize "negative" results over "positive" ones, as I have long argued. We should all demand that whatever has failed be acknowledged as a failure.

1. J. Kimmelman and A. London, "The Structure of Clinical Translation: Efficiency, Information, and Ethics," Hastings Center Report 45, no. 2 (2015): 27-39.

2. V. Prasad et al., "A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices," Mayo Clinic Proceedings 88, no. 8 (2013): 790-98; V. Prasad, A. Cifu, and J. P. Ioannidis, "Reversals of Established Medical Practices: Evidence to Abandon Ship," Journal of the American Medical Association 307, no. 1 (2012): 37-38.

3. J. P. Ioannidis, "Discovery Can Be a Nuisance, Replication Is Science, Implementation Matters," Frontiers in Genetics 4 (2013): 33.

4. D. Lathyris et al., "Industry Sponsorship and Selection of Comparators in Clinical Trials," European Journal of Clinical Investigation 40, no. 2 (2010): 172-82.

5. I. Boutron et al., "Reporting and Interpretation of Randomized Controlled Trials with Statistically Nonsignificant Results for Primary Outcomes," Journal of the American Medical Association 303, no. 20 (2010): 2058-64.

6. B. Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (London: Fourth Estate, 2012).

7. J. P. Ioannidis, "Mega-Trials for Blockbusters," Journal of the American Medical Association 309, no. 3 (2013): 239-40.

Copyright of Hastings Center Report is the property of Wiley-Blackwell, and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.