The New England Journal of Medicine

Special Report

Update on Trial Registration 11 Years after the ICMJE Policy
Was Established
Deborah A. Zarin, M.D., Tony Tse, Ph.D., Rebecca J. Williams, Pharm.D., M.P.H.,
and Thiyagu Rajakannan, Ph.D.
Laws and policies to establish a global trial reporting system have greatly increased the transparency and accountability of the clinical research
enterprise. The three components of the trial
reporting system are trial registration, reporting of aggregate results, and sharing of individual participant data.1 Trial registration is foundational to our understanding and interpretation
of trial results, because it requires that information be provided about all relevant clinical trials
(to put results in a broad context) and their prespecified protocol details (to ensure adherence
to the scientific plan).
In this article, we describe the current trial
registration landscape and summarize evidence of
its effect on the clinical research enterprise to date.
We then present the results of analyses that were
performed with the use of ClinicalTrials.gov
data to provide additional evidence regarding the
degree to which current practices are fulfilling
certain key goals initially envisioned for trial
registration. Finally, we identify challenges and
suggest potential responses for the next decade.

Key Goals of Trial Registration in the Trial Reporting System

Trial registration involves the submission of descriptive information about a clinical trial to a
publicly accessible, Web-based registry. Two key
goals underlie the registration requirements.
The first goal is to establish a publicly accessible
and searchable database for disseminating a
minimum set of structured information about
all ongoing and completed trials. Trial registries
are designed to publicly document all biomedical or health-related experiments involving humans, facilitate the identification of trials for
potential participants, and permit the incorpora-

tion of clinical research findings into the medical evidence base. The second goal is to provide
access to date-stamped protocol amendments
that occur during the trial. Access to structured
archival information allows the public to track
the progress of individual studies and assess
whether reported results are consistent with the
prespecified protocol or statistical analysis plan.

Evolution of the Global Trial Reporting System

After the announcement of the International
Committee of Medical Journal Editors (ICMJE)
trial registration policy2 in September 2004, a
series of related laws and policies were implemented in the United States3 and internationally 4
that increased the scope and content of mandatory prospective trial registration. The World
Health Organization International Clinical Trials
Registry Platform established the Trial Registration Data Set standard,5 which is the minimum
set of data to be provided during trial registration, and continues to coordinate a global network of trial registries (Table 1). To address
biases in results disclosure, which are well documented in the published literature,6-8 governing
bodies and organizations subsequently enacted
laws and policies requiring the systematic reporting of aggregate results in publicly accessible
results databases. In the United States, the Food
and Drug Administration Amendments Act of
2007 (FDAAA) established a legal mandate requiring those responsible for initiating certain
clinical trials of drugs, biologics, and devices to
register the trials and report summary results.9
In response, the National Institutes of Health
(NIH) launched the ClinicalTrials.gov results database in September 2008.10 In September 2016,

n engl j med 376;4 nejm.org  January 26, 2017


The New England Journal of Medicine
Downloaded from nejm.org at ASSISTANCE PUBLIQUE HOPITAUX PARIS on January 31, 2017. For personal use only. No other uses without permission.
Copyright © 2017 Massachusetts Medical Society. All rights reserved.


the Department of Health and Human Services promulgated regulations that implemented, clarified, and expanded the legal requirements for trial registration and results submission under the FDAAA.11,12 The NIH simultaneously issued a policy requiring trial registration and results reporting for all clinical trials funded by the NIH, regardless of whether those actions were legally required under the FDAAA.13

Table 1. International Trial Registration Landscape.

Trial Registry | Total No. of Trials Registered (% overlap with ClinicalTrials.gov)* | Year Launched
Australian New Zealand Clinical Trials Registry (ANZCTR) | 11,703 (1.9) | 2005
Brazilian Clinical Trials Registry (ReBec) | 746 (2.7) | 2010
Chinese Clinical Trials Registry (ChiCTR) | 7,927 (0.3) | 2007
Clinical Research Information Service, Republic of Korea (CRiS) | 1,771 (11.6) | 2010
ClinicalTrials.gov† | 208,822 (100) | 2000
Clinical Trials Registry–India (CTRI) | 6,562 (14.0) | 2007
Cuban Public Registry of Clinical Trials (RPCEC) | 207 (0) | 2007
European Union Clinical Trials Register (EU-CTR)† | 27,380 (33.2) | 2004
German Clinical Trials Register (DRKS) | 4,293 (29.1) | 2008
Iranian Registry of Clinical Trials (IRCT) | 9,770 (0.5) | 2008
ISRCTN Registry‡ | 14,364 (6.3) | 2000
Japan Primary Registries Network (JPRN) | 22,652 (4.1) | 2008
Thai Clinical Trials Registry (TCTR) | 598 (1.0) | 2009
The Netherlands National Trial Register (NTR) | 5,422 (1.3) | 2004
Pan African Clinical Trial Registry (PACTR) | 614 (3.4) | 2009
Sri Lanka Clinical Trials Registry (SLCTR) | 187 (0.5) | 2006
WHO ICTRP registries§ | 323,018 (64.4) | 2007

* Data through March 7, 2016, were collected. Overlap of ClinicalTrials.gov data with data from other registries was identified by means of the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) search portal with the use of matched secondary identifying numbers listed on trial records.
† The registry includes a results database.
‡ The ISRCTN registry was formerly known as the International Standard Randomised Controlled Trial Number registry.
§ The WHO ICTRP registries are the registries listed in the table except for ClinicalTrials.gov.

As of October 2016, ClinicalTrials.gov contained more than 227,000 records, and nearly
23,000 of those records had posted results entries; we estimate that results are published in
the literature for only half those trials.10 ClinicalTrials.gov receives approximately 600 new trial
registrations and 100 new results submissions
per week, and it has approximately 170 million
page views per month. In the remainder of this
article, we present the results of analyses of data
from ClinicalTrials.gov, which contains two thirds
of total global trial registrations.
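The overlap estimation described in the Table 1 footnote (matching secondary identifying numbers across registries) can be sketched as follows. The record layout, field names, and identifiers here are hypothetical; this is a minimal illustration, not the WHO ICTRP's actual matching logic.

```python
# Minimal sketch: collapse registry records that share any identifier
# (e.g., a ClinicalTrials.gov NCT number listed as a secondary ID)
# into unique trials. Record layout is hypothetical.
def unique_trials(records):
    parent = {}

    def find(x):  # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for rec in records:
        for sec in rec.get("secondary_ids", []):
            parent[find(sec)] = find(rec["id"])  # link matched identifiers

    return len({find(rec["id"]) for rec in records})
```

For example, a ChiCTR record that lists an NCT number as a secondary identifier collapses into the same unique trial as the corresponding ClinicalTrials.gov record.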

Assessment of ClinicalTrials.gov and the Evolving Trial Reporting System

Specific evaluation criteria for each of the two
key goals of trial registration are shown in Table 2. For example, the degree to which minimum trial information is publicly accessible can
be assessed with the use of several criteria, including the scope and coverage of registries and
registration policies, the completeness of registry data and the timeliness of submission, the accuracy of submitted information, and the usefulness of available data to the broader community.
For several criteria, published evidence is available to determine the degree to which the criteria
are being met, but for other criteria, more evidence is needed. To provide additional evidence,
we collected and analyzed recent ClinicalTrials.gov data. Our efforts focused on three areas:
the timing of trial registration relative to trial initiation, the specificity and consistency of registered primary outcome measures relative to the
measures described in the protocol and published articles, and the use of registry data in
published research examining various aspects of
the clinical research enterprise.
Timing of Trial Registration

Public trial registration at trial initiation ensures
timely access to information about ongoing trials and precludes selective reporting (the first
key goal); it also provides documentation of information about the initial protocol, such as
prespecified outcome measures (the second key
goal). Comprehensive prospective trial registration is necessary to ensure that registered trials
(and ultimately, published trial results) are not
substantially biased by selection of favorable
outcomes and selective nonreporting of unfavorable outcomes. Although there is no direct
mechanism for the systematic identification of
unregistered trials, late registrations indicate that stakeholders allow trials to proceed without prospective registration.21,26
Our goal was to identify trials that were registered late. On March 18, 2015, we downloaded
49,856 ClinicalTrials.gov records for interventional studies (clinical trials) that had been registered during the 3-year period of 2012 through
2014. After excluding 105 records with missing
trial start dates, we sorted the 49,751 remaining
records into two categories: records received
before or within 3 months after the trial start
date (“on time”), and records received more than
3 months after the trial start date (“late”). We
also subcategorized records according to type of
funder, year received, and number of months
late (among those that were received late). We chose a conservative definition of on-time registration. The ICMJE policy
requires trial registration to occur before enrollment of the first participant (i.e., before the trial
start date), and the FDAAA requires trial registration to occur within 21 days after enrollment
of the first participant. ClinicalTrials.gov collects information on trial start date in a “month–
year” format.
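The classification described above can be sketched as follows. Function and field names are our own; because registered start dates carry only month and year, the comparison works in whole months against the conservative 3-month window.

```python
from datetime import date

# Sketch of the on-time/late classification (assumed function names).
def months_between(start, received):
    """Whole months from trial start to registration receipt."""
    return (received.year - start.year) * 12 + (received.month - start.month)

def classify_registration(start, received, window_months=3):
    """'on time' if received before or within `window_months` after the start."""
    return "on time" if months_between(start, received) <= window_months else "late"
```

Under this rule, a trial that started in January 2013 and was registered in June 2014 is classified as late (17 months after the start date).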
Of the 49,751 trials included in our analysis,
16,342 (32.8%) were registered late. The rate of
late trial registration did not vary considerably
according to the year received, but some variation occurred according to type of funder: the

rate was 23.5% (3819 of 16,264 trials) among
trials with industry funding, 24.9% (775 of 3111)
among trials with NIH funding, and 38.7%
(11,748 of 30,376) among trials with funding
from academic, nonprofit, or other government
organizations. Of the trials that were registered
late, 57.0% (9321 of 16,342 trials) were registered on ClinicalTrials.gov more than 12 months
after the trial start date, with similar rates according to year and type of funder (see Table S1
in the Supplementary Appendix, available with
the full text of this article at NEJM.org).
Specificity and Consistency of Primary Outcome Measures across Sources

The ICMJE policy2 requires the registration of
prespecified primary and secondary outcome
measures (the second key goal). To determine
whether current practices for registering outcome measures result in sufficient specificity to
permit evaluation of the fidelity of published
reports to the protocol, we assessed the specificity of registered primary outcome measures using a framework we described previously.10 We
also used the same data set to assess the consistency of the primary outcome measures across
corresponding protocols, registry records, and
published articles.
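Under that framework, a fully specified primary outcome measure provides five elements: a domain, a specific measurement, a specific metric, a method of aggregation, and a time frame. A minimal sketch of scoring these elements (the dictionary keys are illustrative, not actual ClinicalTrials.gov data elements):

```python
# Five specificity elements from the framework (illustrative field names).
ELEMENTS = ("domain", "measurement", "metric", "aggregation", "time_frame")

def specificity_profile(outcome):
    """Report which of the five elements a registered outcome measure provides."""
    return {e: bool(outcome.get(e)) for e in ELEMENTS}

def element_rates(outcomes):
    """Percentage of outcome measures providing each element, to one decimal."""
    n = len(outcomes)
    return {e: round(100 * sum(bool(o.get(e)) for o in outcomes) / n, 1)
            for e in ELEMENTS}
```

For example, an outcome registered only as "anxiety" provides the domain but none of the other four elements, leaving the prespecified analysis ambiguous.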
We identified 40 articles published in the New
England Journal of Medicine (extracted on September
16, 2015) and 40 articles published in the Journal
of the American Medical Association (extracted on
August 5, 2016) that reported the results of non–
phase 1 clinical trials for which full protocols
were available online and at least one ClinicalTrials.gov number was cited in the abstract.
Descriptions of the primary outcome measures
were extracted from the final version of the full
protocol, the version of the ClinicalTrials.gov
record that was displayed at the time of journal
publication, and the Methods section of the published article. We note that such information
could have been modified after the initial submission of the ClinicalTrials.gov record (e.g.,
on the basis of protocol changes or analytic decisions that occurred during or after trial completion) or the manuscript (e.g., on the basis of
feedback received during the peer-review process).
Table 2. Key Goals of Trial Registration, Evaluation Criteria, and Related Evidence.

First key goal: Establish a publicly accessible and searchable database for disseminating a minimum set of structured information about all ongoing and completed trials

Description and importance
- Create a public record of all initiated trials
- Enable search and retrieval of registered trials of interest by different users (e.g., potential participants or researchers)
- Allow for tracking and assessment of bias in results reporting by elucidating the denominator (i.e., the full set of relevant trials regardless of whether the results of any particular trial are publicly accessible)
- Inform the need for new trials, thereby avoiding unnecessary and unintentional duplication

Evaluation criteria and related evidence

Scope and coverage of registered trials
- Approximately 600 new trials per week are registered at ClinicalTrials.gov, which contains nearly 227,000 trial records (as of Oct. 2016)
- Total no. of initiated, ongoing, and completed trials worldwide is unknown
- The WHO ICTRP search portal listed 323,018 trial records from 16 trial registries (as of March 7, 2016); 15,808 records were identified as duplicates (i.e., included in two or more registries)
- Of the 307,210 unique trial records identified on the WHO ICTRP search portal, 208,665 (68%) were registered on ClinicalTrials.gov
- Recent estimates suggest additional duplicates have not been detected on the WHO ICTRP search portal14
- Unidentified duplicates create residual ambiguity in the attempt to ascertain a definitive list of all trials on a given topic

Completeness of registered data and timeliness of submission
- Of nearly 600 new trials per week, over half are registered before the listed trial start date
- Many registry entries are incomplete, out-of-date, or have not been updated recently15
- The recruitment status of nearly 21,000 ClinicalTrials.gov records is unknown (i.e., listed as “Recruiting,” “Not yet recruiting,” or “Active, not recruiting” but not verified for ≥2 yr) (as of Oct. 2016)
- Some journals require trial registration, reject manuscripts associated with unregistered trials,16,17 and link the registry number in an article to PubMed
- Nearly 42,000 published articles are indexed in Medline with a unique ClinicalTrials.gov number (as of Oct. 2016)
- Some journals do not require trial registration18,19 or do not publish the trial registry number, which interferes with the ability to link between registry records and publications
- Many trials have been registered retrospectively (after the trial start date)20,21
- Our data show that approximately one third of trials initially submitted to ClinicalTrials.gov during a 3-year period were registered more than 3 mo after the listed trial start date, and a large proportion of these trials were registered more than 12 mo after the start date

Usefulness of registered information
- Potential participants can either identify relevant trials directly or use a site that downloads and makes ClinicalTrials.gov data available for select audiences (e.g., www.breastcancertrials.org)
- Many funders and sponsors (e.g., National Institutes of Health, Centers for Medicare and Medicaid Services, Department of Veterans Affairs, and Patient-Centered Outcomes Research Institute) have promulgated trial registration requirements,13,22-24 but the way in which registry information has been used to inform funding or other decisions is unclear
- Our data show that ClinicalTrials.gov data have been used in research articles examining the clinical research enterprise

Second key goal: Provide access to date-stamped protocol amendments that occur during the trial

Description and importance
- Ensure listing of all prespecified primary outcome measures and secondary outcome measures, as well as other trial-design features
- Display outcome measures with sufficient detail to allow for detection of unacknowledged changes through public audit

Evaluation criteria and related evidence

Detection of incomplete or inadequate registered information
- Many, but not all, registry records for clinical trials whose results are published in journals contain all 20 items of the WHO ICTRP Trial Registration Data Set25,26

Sufficient specificity of registered outcome measures
- In the past, outcome measures were registered with low specificity,10 which interfered with the ability to detect deviations from the prespecified protocol or subsequent amendments
- Our data show that, more recently, the level of specificity for registered primary outcome measures appears to have increased

Detection of infidelity or inconsistency between registry information and data from other sources
- Readers can use ClinicalTrials.gov to identify discrepancies between published and prespecified outcome measures27
- Editors and peer reviewers do not always check for or detect such changes28-30
- Studies comparing registry information with data in protocols and publications show broad consistency but note some instances of inconsistency27,31-35
- Our data show that primary outcome measures reported across registries, protocols, and publications were largely consistent, but we noted several confounding issues that allow room for post hoc selection of a specific outcome measure for reporting

In our sample of 80 articles, we identified 83 trials (some articles reported the results of multiple trials) and 101 registered primary outcome measures (some records listed more than one). We determined the rates at which the primary outcome measures met certain specificity criteria: 0% had a domain only (e.g., anxiety), 11.9% had a specific measurement (e.g., score on the Hamilton Anxiety Rating Scale), 42.6% had a specific metric (e.g., change from baseline), 45.6% had a method of aggregation (e.g., mean), and 94.1% had a specific time frame (e.g., 52 weeks).
We identified only two instances in which
there were apparent inconsistencies in the published primary outcome measures among the
three sources (Table S2 in the Supplementary
Appendix). One article36 included pooled data from
two studies registered with different primary outcome measures; the registered primary outcome
measure in one trial record (ClinicalTrials.gov
number, NCT01605136) matched the primary
outcome measure reported in the article, whereas the registered primary outcome measure in the
other trial record (NCT00979745) was reported
as a secondary outcome measure in the article.
The second instance involved a discrepancy
between the article37 and the registry entry
(NCT01680744) with respect to the described
primary outcome measure and analysis population (i.e., with the outcome pertaining to recipients of kidney transplants in the article vs. kidney donors in the registry entry). The descriptions
of the remaining 99 primary outcome measures
seemed to be consistent across sources, although
differences in the level of detail provided for
definitions, criteria, or both made it difficult in
some cases to confirm whether measures were
truly identical. For example, the meaning of
“progression-free survival” is critically dependent
on the criteria used to determine “progression.”
It is not possible to assess consistency if only
one source provides those criteria. We also noted
poor or inconsistent reporting of time frames,
especially for time-to-event measures (Table S3
in the Supplementary Appendix).
Use of ClinicalTrials.gov Data in Published Research

Many researchers have used data from ClinicalTrials.gov to examine various aspects of the
clinical research enterprise (i.e., to perform metaresearch). To more precisely understand the nature of such uses and to evaluate the degree to
which ClinicalTrials.gov data are meeting the
needs of meta-researchers, we conducted a preliminary evaluation of the published literature
using PubMed (Table S4 in the Supplementary
Appendix).
In our search, we retrieved 339 research articles and 1218 systematic reviews published between 2010 and 2015 that used data from the
ClinicalTrials.gov registry, results database, or
both. The number of research articles increased
from 24 in 2010 to 94 in 2014. We reviewed each
research article and categorized it in one of the
following six broad areas: characterization of
clinical research on specific conditions (151 articles [45%]); ethics, adverse-events reporting, data
mining, and other topics (44 [13%]); assessment
of the quality of registered data and consistency
with policies on registration and results reporting (43 [13%]); characterization of the overall
clinical research landscape (41 [12%]); evaluation of publication bias or selective reporting (34
[10%]); and assessment of specific researchrelated methods and issues (26 [8%]). (Examples
of each type of article can be found in Table S4
in the Supplementary Appendix.)
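The reported shares follow directly from the category counts; as a quick arithmetic check (category labels abbreviated from the text):

```python
# Counts of the 339 research articles by category, as reported above
# (labels abbreviated).
counts = {
    "specific conditions": 151,
    "ethics, adverse events, data mining, other": 44,
    "registered-data quality and policy adherence": 43,
    "overall research landscape": 41,
    "publication bias or selective reporting": 34,
    "specific research methods and issues": 26,
}
total = sum(counts.values())  # 339 articles in all
shares = {label: round(100 * n / total) for label, n in counts.items()}
# e.g., 151/339 rounds to 45%, matching the figure cited in the text
```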

Discussion
The ICMJE trial registration policy instigated a
cascade of events that have greatly expanded and
transformed the trial reporting system.1 Before
2004, most investigators did not register their
trials, and no notion of a public summary results database existed. At that time, readers and
editors had no way of knowing whether trials
had unpublished results or whether the trial results reported in manuscripts accurately reflected
the trial protocols. Since implementation of the
ICMJE policy, trial registration (whether prospective or retrospective) has grown, as has acceptance of the need for structured summary results reporting, with most industry sponsors and some academic institutions developing
infrastructures to help their investigators report
summary results.38 Analysis of ClinicalTrials.gov
data has informed policy and research discussions and has fueled, in part, the ongoing call
for sharing individual participant data and associated trial documents. However, gaps in the system and its associated policies (e.g., the lack of
legal reporting requirements for phase 1 trials)
and evidence of suboptimal adherence to the
policies and inadequate use of available tools
suggest that there is room for improvement.3
The recent issuance of the FDAAA final rule and
the NIH policy on trial reporting will fill some
of those gaps and will create a framework for
monitoring adherence, but considerable work
remains to be done.

For example, some funders, sponsors, and institutional review boards continue to allow unregistered trials or trials with late registration
to be conducted, and some journals continue to consider the results of such trials for publication. This practice undermines the first
key goal of trial registration by interfering with
the processes designed to ensure that registries
contain a list of all initiated trials; if trials can
be registered late, then some trials may proceed
without ever being registered at all. We found
that approximately one third of trials across all
funder types were registered more than 3 months
after the trial start date, and a large proportion of
these trials were registered more than 12 months
after the trial start date. We are aware that some
late trial registrations are due to changes in organizational disclosure policies (e.g., Boehringer
Ingelheim registered 361 studies in 2014, some
dating back to 1990),39 but this positive movement does not explain the overall number of late
registrations across all funder types.
The ability to use time-stamped registry records to assess the fidelity of published reports to the trial protocol has vastly improved since 2004.
Requiring researchers to declare prespecified outcome measures and other study-design elements
as discrete, structured data elements enables the
tracking of each element and facilitates comparison across trials.
Motivated editors and reviewers can compare
published reports with the use of trial registry
entries, a process replicated in our consistency
analysis comparing the primary outcome measures described in publications and protocols
with those described in registry records. Although
the primary outcome measures were generally
consistent across sources, we observed variations in the level of specificity and differences in
the amount of detail provided about criteria or
definitions associated with a measure. Some
have noted the potential effect of differences in
the level of detail about definitions.40 It is difficult to determine which discrepancies reflect
benign variations in level of detail (e.g., using
“respiratory infection” as shorthand for “severe
lower respiratory infection”) and which mask
post hoc selection of particular subgroups of
participants. In addition, the lack of specificity
of a listed outcome measure, including the time
frame, leaves room for unacknowledged post
hoc analytic decisions. There seems to be a special problem with respect to the reporting of time frames for time-to-event measures in all three source types, which perhaps reflects an underappreciation of the statistical importance of reporting this information41 (Table S3 in the Supplementary Appendix).

Table 3. Suggested Actions by Stakeholder Groups for Improving the Trial Reporting System over the Next Decade.

Funders
- Use ClinicalTrials.gov to identify gaps and potential overlaps in clinical research before funding new trials; check the denominator (i.e., the full set of relevant trials) by searching registries for relevant registered trials
- Hold awardees accountable for accurate and timely reporting of all trials
- Ensure that trial registration occurs before the trial start date
- Ensure that trial registration has meaningful and specific entries

Institutional review boards
- Ensure that ClinicalTrials.gov is used to identify past and ongoing trials that might inform the need for and the potential risks and benefits of each new proposed trial
- Ensure that each new trial is properly registered so that potential and enrolled participants can be assured that they are participating in a trial that will contribute to the medical knowledge base

Academic medical centers
- Provide scientific leadership and institutional resources to support trial reporting by investigators38
- Take institutional responsibility for ensuring that sponsored trials are reported appropriately
- Create educational resources and define best practices that support quality trial documentation as part of training for clinical researchers
- Create systems for providing academic incentives for high-quality trial reporting

Trialists
- Before starting a trial, search for similar trials (both completed and ongoing) in determining the necessity, feasibility, and proper design
- Once the trial is designed and funded, register the trial with specificity, use a unique trial registry number when communicating about the trial, and keep registry records up to date
- Once the trial is completed, take the time to submit accurate and complete summary results

Journal editors and peer reviewers
- Ensure that trial registration occurred before the trial start date
- Ensure that trial registration has meaningful and specific entries
- Verify that the data in the submitted manuscript are consistent with prespecified protocol details from the registry and ensure that any discrepancies are explained45
- Check the denominator by searching registries for relevant registered trials

Meta-researchers
- Continue to use components of the trial reporting system (registration, results reporting, and individual participant data) and other sources to characterize and monitor the clinical research enterprise; use the information in systematic reviews of the evidence base
- Pursue unanswered questions related to evaluation criteria and gaps in the published evidence in an effort to continually improve both the trial reporting system and the clinical research enterprise

ClinicalTrials.gov and other trial registries and results databases
- Continue to improve user interfaces to facilitate data submission, enhance help and resource materials, adapt to evolving clinical research approaches and stakeholder needs, conduct training, provide one-on-one assistance for results submission, and evaluate and improve methods for curation
- Continue to improve search interfaces to help users make the best possible use of structured data, coordinate with other registries to improve the ability to identify a unique list of trials, and facilitate access to trial registry data sets for use by researchers and others

The recently launched
COMPare (Centre for Evidence-Based Medicine
Outcome Monitoring Project), in which similar
assessments are conducted, has also revealed difficulties in determining consistency (“equivalence”) when varying degrees of specificity are provided in different sources.42 COMPare has
also called for each journal article to report all
prespecified outcome measures. Although this
might be impractical for studies with large numbers of prespecified outcome measures,10 an alternative approach has been used in which articles
are linked directly to full sets of summary results
in a public registry such as ClinicalTrials.gov,
which allows the article to focus on findings of
particular interest while providing full transparency (e.g., see the FLAME trial [Effect of Indacaterol Glycopyronium versus Fluticasone Salmeterol on Chronic Obstructive Pulmonary Disease
Exacerbations; NCT01782326]).43
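COMPare-style assessments, in which each registered prespecified outcome measure is matched against what the published article reports, can be partly automated. The sketch below is illustrative only: the outcome strings and record layout are invented for this example, and real assessments still require human judgment about what counts as "equivalence."

```python
# Illustrative sketch of a COMPare-style consistency check: compare a
# trial's registered primary outcome measures against those reported in
# the published article. The outcome strings below are hypothetical;
# real comparisons require human judgment about "equivalence."

def normalize(outcome: str) -> str:
    """Crude normalization so trivially different phrasings can match."""
    return " ".join(outcome.lower().replace("-", " ").split())

def check_outcomes(registered: list[dict], published: list[str]) -> dict:
    """Flag registered outcomes that are unreported or lack a time frame."""
    published_norm = {normalize(p) for p in published}
    unreported = [r["measure"] for r in registered
                  if normalize(r["measure"]) not in published_norm]
    missing_time_frame = [r["measure"] for r in registered
                          if not r.get("time_frame")]
    return {"unreported": unreported, "missing_time_frame": missing_time_frame}

# Hypothetical registry entry and published outcome list:
registered = [
    {"measure": "Annualized rate of COPD exacerbations", "time_frame": "52 weeks"},
    {"measure": "Time to first exacerbation", "time_frame": None},
]
published = ["Annualized rate of COPD exacerbations"]

report = check_outcomes(registered, published)
print(report["unreported"])          # ["Time to first exacerbation"]
print(report["missing_time_frame"])  # ["Time to first exacerbation"]
```

Such a script can only surface candidate discrepancies (including the missing time frames noted above); adjudicating whether a rephrased outcome is the same prespecified measure remains a manual step.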
The lack of standards for structured protocols
allows for internal inconsistencies and uncertainty about key study-design features, which reinforces the importance of requiring registry
entries that reflect the prespecified scientific
plan accurately and unambiguously. It is our
sense that nonscientific personnel assigned to
submit trial registration information may have
trouble identifying the relevant information in
unstructured protocols, which may explain some
poor registry entries. Finally, in our analyses, we
examined the protocol and registry data available at the time of publication; there could certainly be a greater level of discrepancy among
the initial versions of these documents. We anticipate that the systematic posting of full protocols and statistical analysis plans, which is now
required at ClinicalTrials.gov under the FDAAA
final rule and the NIH policy, will allow the research community to discuss and eventually develop the consistent standards of specificity and
structure needed to help ensure the valid interpretation of reported results. Efforts to standardize protocols are already under way.44
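Because registry entries are structured and machine-readable, such cross-checks are feasible at scale. As a minimal sketch, assuming the current ClinicalTrials.gov API (v2) URL pattern and its JSON field names (protocolSection, outcomesModule, primaryOutcomes), the prespecified primary outcomes of a record could be pulled as follows; the embedded response fragment is an abbreviated, invented stand-in, not actual trial data.

```python
import json

# Sketch of extracting prespecified primary outcome measures from a
# ClinicalTrials.gov study record. The URL pattern assumes the v2 API;
# the JSON fragment below is an abbreviated, illustrative stand-in for
# a real API response.

def study_url(nct_id: str) -> str:
    """Build the assumed v2 API URL for a single study record."""
    return f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"

def primary_outcomes(study: dict) -> list[tuple[str, str]]:
    """Return (measure, time frame) pairs from a study record dict."""
    outcomes = (study.get("protocolSection", {})
                     .get("outcomesModule", {})
                     .get("primaryOutcomes", []))
    return [(o.get("measure", ""), o.get("timeFrame", "")) for o in outcomes]

sample_response = json.loads("""
{
  "protocolSection": {
    "outcomesModule": {
      "primaryOutcomes": [
        {"measure": "Annualized rate of moderate to severe exacerbations",
         "timeFrame": "52 weeks"}
      ]
    }
  }
}
""")

print(study_url("NCT01782326"))
print(primary_outcomes(sample_response))
```

Walking fields defensively with `get` reflects the reality that registry records vary in completeness; a missing outcomes module simply yields an empty list rather than an error.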
ClinicalTrials.gov has become a critical resource for characterizing and evaluating the
clinical research enterprise. Nevertheless, innumerable opportunities remain for analyzing the
data more systematically to inform key decisions
by investigators, funders, institutional review
boards, and others. The next phase in the evolution of the trial reporting system requires concerted effort from all stakeholder groups in the
clinical trial ecosystem (Table 3). Full implementation of the FDAAA and the NIH policy is expected to enhance the scope and completeness
of trial reporting. However, there will always be
a gap between meeting the letter of the law and
the spirit of the law. For example, investigators
can meet the reporting requirements while providing minimally informative data; editors, funders, and others can go through the motions to
determine that a trial was registered without
actually using the information to assess the
quality of the published reports or to inform
their understanding of the results. Ultimately,
substantial improvements in trial reporting will
require changes in the values, incentives, and
scientific norms among institutions that conduct clinical trials and entities that use the results of clinical trials to inform medical and
policy decisions. Continued attention to trial
registration and summary results reporting is
critical, particularly as the community considers
other endeavors, such as sharing individual participant data.1
Supported by the Intramural Research Program of the National Library of Medicine, National Institutes of Health.
Disclosure forms provided by the authors are available with
the full text of this article at NEJM.org.
We thank Drs. Kevin M. Fain and Heather D. Dobbins for assistance with data analysis.
From the National Library of Medicine, National Institutes of
Health, Department of Health and Human Services, Bethesda,
MD. Address reprint requests to Dr. Zarin at the National Library
of Medicine, Bldg. 38A, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, or at dzarin@mail.nih.gov.
1. Zarin DA, Tse T. Sharing individual participant data (IPD) within the context of the trial reporting system (TRS). PLoS Med 2016;13(1):e1001946.
2. De Angelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med 2004;351:1250-1.
3. Weber WE, Merino JG, Loder E. Trial registration 10 years on. BMJ 2015;351:h3572.
4. Gülmezoglu AM, Pang T, Horton R, Dickersin K. WHO facilitates international collaboration in setting standards for clinical trial registration. Lancet 2005;365:1829-31.
5. Sim I, Chan AW, Gülmezoglu AM, Evans T, Pang T. Clinical trial registration: transparency is the watchword. Lancet 2006;367:1631-3.
6. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867-72.
7. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
8. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H Jr. Publication bias and clinical trials. Control Clin Trials 1987;8:343-53.
9. Food and Drug Administration Amendments Act of 2007: Public Law 110-85. September 27, 2007 (http://www.gpo.gov/fdsys/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf).
10. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database — update and key issues. N Engl J Med 2011;364:852-60.
11. Final rule — clinical trials registration and results information submission. Fed Regist 2016;81:64981-5157 (https://www.federalregister.gov/documents/2016/09/21/2016-22129/clinical-trials-registration-and-results-information-submission).
12. Zarin DA, Tse T, Williams RJ, Carr S. Trial reporting in ClinicalTrials.gov — the final rule. N Engl J Med 2016;375:1998-2004.
13. Hudson KL, Lauer MS, Collins FS. Toward a new era of trust and transparency in clinical trials. JAMA 2016;316:1353-4.
14. van Valkenhoef G, Loane RF, Zarin DA. Previously unidentified duplicate registrations of clinical trials: an exploratory analysis of registry data worldwide. Syst Rev 2016;5:116.
15. Huić M, Marušić M, Marušić A. Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PLoS One 2011;6(9):e25258.
16. Drazen JM, Zarin DA. Salvation by registration. N Engl J Med 2007;356:184-5.
17. Durivage H. Clinical trials disclosure slide presentation — slide 9: ICMJE consequences of not registering trials. New Haven, CT: Yale School of Medicine, March 13, 2012 (http://ycci.yale.edu/Durivage-Godlew-ctgov-2012-03-13_119069_284_5.pdf).
18. Hooft L, Korevaar DA, Molenaar N, Bossuyt PM, Scholten RJ. Endorsement of ICMJE’s clinical trial registration policy: a survey among journal editors. Neth J Med 2014;72:349-55.
19. Wager E, Williams P. “Hardly worth the effort”? Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study. BMJ 2013;347:f5248.
20. Harriman SL, Patel J. When are clinical trials registered? An analysis of prospective versus retrospective registration. Trials 2016;17:187.
21. Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One 2015;10(8):e0133718.
22. Centers for Medicare & Medicaid Services. Guidance for the public, industry, and CMS staff: coverage with evidence development. November 20, 2014 (https://www.cms.gov/medicare-coverage-database/details/medicare-coverage-document-details.aspx?MCDId=27).
23. Department of Veterans Affairs Office of Research and Development. ORD sponsored clinical trials: registration and submission of summary results. 2015 (http://www.research.va.gov/resources/ORD_Admin/clinical_trials/).
24. Patient-Centered Outcomes Research Institute. PCORI’s process for peer review of primary research and public release of research findings. February 24, 2015 (http://www.pcori.org/sites/default/files/PCORI-Peer-Review-and-Release-of-Findings-Process.pdf).
25. Killeen S, Sourallous P, Hunter IA, Hartley JE, Grady HL. Registration rates, adequacy of registration, and a comparison of registered and published primary outcomes in randomized controlled trials published in surgery journals. Ann Surg 2014;259:193-6.
26. Viergever RF, Karam G, Reis A, Ghersi D. The quality of registration of clinical trials: still a problem. PLoS One 2014;9(1):e84727.
27. Zarin DA, Tse T. Trust but verify: trial registration and determining fidelity to the protocol. Ann Intern Med 2013;159:65-7.
28. Pranić S, Marušić A. Changes to registration elements and results in a cohort of ClinicalTrials.gov trials were not reflected in published articles. J Clin Epidemiol 2016;70:26-37.
29. van Lent M, IntHout J, Out HJ. Differences between information in registries and articles did not influence publication acceptance. J Clin Epidemiol 2015;68:1059-67.
30. Weston J, Dwan K, Altman D, et al. Feasibility study to examine discrepancy rates in prespecified and reported outcomes in articles submitted to The BMJ. BMJ Open 2016;6(4):e010075.
31. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev 2011;1:MR000031.
32. Zhang S, Liang F, Li W, Tannock I. Comparison of eligibility criteria between protocols, registries, and publications of cancer clinical trials. J Natl Cancer Inst 2016 May 25 (Epub ahead of print).
33. Fleming PS, Koletsi D, Dwan K, Pandis N. Outcome discrepancies and selective reporting: impacting the leading journals? PLoS One 2015;10(5):e0127495.
34. Jones CW, Keil LG, Holland WC, Caughey MC, Platts-Mills TF. Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Med 2015;13:282.
35. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977-84.
36. Langendonk JG, Balwani M, Anderson KE, et al. Afamelanotide for erythropoietic protoporphyria. N Engl J Med 2015;373:48-59.
37. Niemann CU, Feiner J, Swain S, et al. Therapeutic hypothermia in deceased organ donors and kidney-graft function. N Engl J Med 2015;373:405-14.
38. O’Reilly EK, Hassell NJ, Snyder DC, et al. ClinicalTrials.gov reporting: strategies for success at an academic health center. Clin Transl Sci 2015;8:48-51.
39. Boehringer Ingelheim. Policy on transparency and publication of clinical study data (http://trials.boehringer-ingelheim.com/transparency_policy/policy.html).
40. Hudis CA, Barlow WE, Costantino JP, et al. Proposal for standardized definitions for efficacy end points in adjuvant breast cancer trials: the STEEP system. J Clin Oncol 2007;25:2127-32.
41. Altman DG, De Stavola BL, Love SB, Stepniewska KA. Review of survival analyses published in cancer journals. Br J Cancer 1995;72:511-8.
42. The COMPare Trials Project. Tracking switched outcomes in clinical trials (http://compare-trials.org/).
43. Wedzicha JA, Banerji D, Chapman KR, et al. Indacaterol–glycopyrronium versus salmeterol–fluticasone for COPD. N Engl J Med 2016;374:2222-34.
44. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013;158:200-7.
45. Doshi P. Is this trial misreported? Truth seeking in the burgeoning age of trial transparency. BMJ 2016;355:i5543.
DOI: 10.1056/NEJMsr1601330
Copyright © 2017 Massachusetts Medical Society.
