Sequential, Multiple Assignment, Randomized Trial Designs


An adaptive intervention is a set of diagnostic, preventive, therapeutic, or engagement strategies that are used in stages, and the selection of the intervention at each stage is based on defined decision rules. At the beginning of each stage in care, treatment may be changed by the clinician to suit the needs of the patient. Typical adaptations include intensifying an ongoing treatment or adding or switching to another treatment. These decisions are made in response to changes in the patient’s status, such as a patient’s early response to, or engagement with, a prior treatment. The patient experiences an adaptive intervention as a sequence of personalized treatments.


Kidwell KM, Almirall D. Sequential, Multiple Assignment, Randomized Trial Designs. JAMA. 2023;329(4):336–337. doi:10.1001/jama.2022.24324


Who knew? The misleading specificity of “double-blind” and what to do about it

Thomas A. Lang (ORCID: 0000-0002-7482-7727) and Donna F. Stroup (ORCID: 0000-0003-1699-4671)

Trials, volume 21, Article number: 697 (2020)


In randomized trials, the term “double-blind” (and its derivatives, single- and triple-blind, fully blind, and partially blind or masked) has no standard or widely accepted definition. Agreement about which groups are blinded is poor, and authors using these terms often do not identify which groups were blinded, despite specific reporting guidelines to the contrary. Nevertheless, many readers assume—incorrectly—that they know which groups are blinded. Thus, the term is ambiguous at best, misleading at worst, and, in either case, interferes with the accurate reporting, interpretation, and evaluation of randomized trials. The problems with the terms have been thoroughly documented in the literature, and many authors have recommended that they be abandoned.

We and our co-signers suggest eliminating the use of adjectives that modify “blinding” in randomized trials; a trial would be described as either blinded or unblinded. We also propose that authors report in a standard table which groups or individuals were blinded, what they were blinded to, how blinding was implemented, and whether blinding was maintained. Individuals with dual responsibilities, such as caregiving and data collecting, would also be identified. If blinding was compromised, authors should describe the potential implications of the loss of blinding on interpreting the results.

“Double blind” and its derivatives are terms with little to recommend their continued use. Eliminating the use of adjectives that impart a false specificity to the term would reduce misinterpretations, and recommending that authors report who was blinded to what and how in a standard table would require them to be specific about which groups and individuals were blinded.


Background: problems with the term “double-blind”

“The single biggest problem in communication is the illusion that it has taken place.” (George Bernard Shaw)

In reports of randomized trials, the use of the term “double-blind” and its derivatives (single- and triple-blind, fully blind, and partially blind or masked) is commonly understood to indicate that two groups participating in the trial are kept unaware of which participants are receiving the experimental intervention and which are receiving the control intervention [ 1 , 2 , 3 , 4 , 5 , 6 ].

Despite its long and widespread use, however, the term has several problems.

It is ambiguous

Agreement about which groups are blinded in a double-blind trial is poor [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 ]. For example, in one study, 91 physicians reported 17 unique combinations of groups (often more than two) that they believed were blinded in a double-blind trial (Table  1 ), and 25 textbooks contained 9 unique combinations [ 1 ]. Another study of 25 “double-blind trials” published in 16 leading journals identified 5 different combinations of participants, assessors, caregivers, and statisticians as being blinded [ 14 ]. Identifying groups in general terms (e.g., investigators, caregivers) is also ambiguous [ 4 ], especially when individuals have dual roles, such as collecting data and assessing outcomes [ 2 , 4 , 5 , 6 ].

It is often uninformative

Even when using the term in an article, many authors do not identify which groups were blinded or how blinding was implemented [ 1 , 2 , 3 , 4 , 5 , 6 , 9 , 11 , 12 , 14 , 16 , 17 ]. Among 83 published trials reported as being double-blind, 41 did not identify any group as being blinded [ 9 ]. Without this information, “readers should remain skeptical about [blinding’s] effect on bias reduction.” [ 2 ].

It can be misleading

Many readers assume—incorrectly—that they know which groups are blinded in a double-blind trial (Table  1 ) [ 2 , 3 , 4 , 5 , 11 , 15 , 16 ]. Unfortunately, grossly inadequate reporting allows this assumption to go unchallenged when the article is read. (However, several studies have found that many published trials do not include the details of blinding, even when blinding was adequately implemented [ 4 ].) In 88 (70%) of 126 registered anesthesia trials, the groups or individuals reported to be blinded in the published results differed from those listed in the corresponding protocols [ 16 ].

It is inadequate

The suggestion to establish explicit definitions for the term [ 7 , 18 ] is complicated by the fact that several groups or individuals can be blinded. Limiting “double-blind” to trials in which only 2 specific groups are blinded leaves other combinations without an equivalent term.

It is often confused with allocation concealment

In randomized trials, the allocation schedule (the list indicating the group to which the next participant will be assigned, in random order) has to be kept secret to prevent group assignment from being manipulated. That is, allocation concealment minimizes selection bias before participants have been assigned to experimental groups, whereas blinding minimizes surveillance, expectation, and ascertainment bias after group assignment. Many readers are not aware of this difference [ 2 , 5 , 6 , 8 , 12 , 13 , 15 , 18 , 19 , 20 ], perhaps because the terms “allocation” and “blinding” indicate neither the similarities nor the differences between the concepts.

It is often mistakenly believed to be required in a randomized trial and to be essential to the trial’s validity [ 1 , 2 , 5 , 11 , 13 , 15 , 16 , 19 , 20 , 21 ]

“A randomised trial can be methodologically sound and not be double blind or, conversely, double blind and not methodologically sound.” [ 2 ]. Said another way, “Let us examine the placebo somewhat more critically, however, since it and ‘double blind’ have reached the status of fetishes in our thinking and literature. The Automatic Aura of Respectability, Infallibility, and Scientific Savoir-faire which they possess for many can be easily shown to be undeserved in certain circumstances.” [ 21 ].

In some situations, it can be confused with the condition of being without sight [ 2 , 5 , 12 , 20 , 22 , 23 ]

Some authors prefer “masking” to “blinding,” although the meaning of either term in a clinical trial may not be readily apparent to nonnative English speakers [ 18 , 22 ]. Further, some authors use the terms interchangeably [ 5 , 6 , 7 , 10 , 11 , 12 , 15 , 18 , 24 , 25 ], others insist that only masking be used [ 17 , 20 , 23 ], and still others insist that only blinding be used [ 2 , 5 , 22 ]. In addition, masking is sometimes used to describe how treatments are made indistinguishable [ 18 , 19 , 25 , 26 ], whereas blinding usually indicates which groups are unaware of treatment assignment [ 1 , 2 , 3 , 4 , 5 , 6 ]. Finally, searching the literature for “blinded,” “partially blind,” or “fully blind” randomized trials also identifies dozens of unwanted citations to the condition of being without sight.

It is unrealistic

The problem with trying to identify in a single term the groups who are blinded in a trial is that the number of pairs is potentially large. The literature identifies 11 groups or individuals who could be blinded: participants, care providers, data collectors and managers, trial managers, pharmacists [ 27 ], laboratory technicians [ 1 ], outcome assessors (who collect data on outcomes), outcome adjudicators (who confirm that an outcome meets established criteria), statisticians [ 2 , 4 , 6 , 11 , 12 , 13 ], and sometimes even members of data monitoring and safety committees [ 1 , 3 , 4 , 6 , 11 , 17 ] and manuscript writers [ 3 , 6 , 11 , 16 , 17 ]. These 11 groups can form 55 unique pairs. Even limiting the possibilities to 5 groups commonly recommended for blinding [ 15 , 28 ]—participants, care providers, data collectors, outcome assessors, and statisticians—leaves 10 possible combinations.
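For readers who want to check the arithmetic, the counts above are simple combinations, as this short Python sketch (ours, not the authors’) confirms:

```python
from math import comb

# 11 potentially blindable groups or individuals -> unique pairs
print(comb(11, 2))  # 55
# 5 commonly recommended groups -> possible pairs
print(comb(5, 2))   # 10
```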

Proposed solutions

As near as we can tell, despite the above problems and several calls to abandon the term [ 1 , 5 , 6 , 9 , 11 , 12 , 16 , 28 ], only one substitute for double-blinding has been proposed in the literature: “subject- and assessor-blind” [ 29 ]. Aside from being somewhat awkward, the term assumes that double-blinding applies only to subjects and assessors, which, although reasonable, is not uniformly accepted.

The terms “fully blinded” or “partially blinded” do appear in the literature, but not as substitutes for double-blinding or single-blinding [ 27 ]. Although both are used in randomized trials, they involve randomly assigning treatments, not groups, and can be applied to subsets of individuals within groups. For example, participants who could receive either an active drug or a placebo would be “fully blinded,” whereas participants who know they are receiving an active drug, but not which one, would be “partially blinded.”

We considered replacing “blinding” with “assignment concealment” [ 24 ] because the latter accurately indicates that group assignment is what is hidden. It does not imply which groups are involved and has no history of doing so. It also eliminates the blinding-masking controversy and is not associated with other, less technical meanings. Further, although the relationship between blinding and “allocation concealment” is not apparent, allocation concealment and assignment concealment are two sides of the same coin: they clearly indicate that two different components of the trial are concealed, the allocation schedule before assignment and the group assignment after.

However, assignment concealment does not work well as a label. We concluded that “a concealed assignment trial” was unlikely to replace “a blinded trial.” Likewise, its use can be awkward: “group assignment was concealed from participants” was unlikely to replace “participants were blinded to treatment.” Further, as noted above, for better or worse, the mere use of the term “blinding” is widely considered to indicate study quality, and we concluded that authors would be unwilling to give up using this prized and familiar term. Finally, many people believed that “concealment” should be reserved for, or would be confused with, allocation concealment.

The term “blinding” is so firmly established that a simple substitute term, even if we could find one, is unlikely to be acceptable. Instead, we propose two changes in reporting trials described as blinded.

Our first proposal is to eliminate the use of adjectives that modify “blinded”: single-, double-, triple-, observer-, personnel-, rater-, fully or partially blinded, or any other qualifier that would make “blinded” seem more specific than it is. A trial would be described as either blinded or unblinded. Using “blinding” as a verb in a sentence is also helpful: such use encourages specificity by requiring a noun naming which groups were blinded: “We blinded caregivers and data assessors” or “caregivers and data assessors were blinded.”

We wholeheartedly endorse the near-universal recommendation that authors report whether or not the trial was blinded [ 4 , 10 , 14 , 15 , 16 ], who was blinded [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 9 , 10 , 11 , 12 , 13 , 15 , 16 , 19 , 20 , 22 , 30 , 31 ], how they were blinded [ 2 , 4 , 5 , 6 , 12 , 13 , 19 , 20 , 26 , 30 , 31 ], and whether the method of blinding was likely to be successful [ 28 , 32 ], including the degree of similarity between the experimental and control interventions [ 31 ].

Accordingly, our second proposal is to have all trials described as blinded include the details in a standard “Who Knew” table (Table 2). This table has two parts: a required part and a supplemental part. The required part would indicate whether each of the 6 groups most commonly blinded (the person assigning participants to groups, participants, caregivers, data collectors and managers, outcome assessors, and statisticians) was or was not blinded, what information they were blinded to, how blinding was implemented, and whether blinding was maintained during the trial. The supplemental part, used when necessary, would present the same data for any other group or individual who was blinded. Individuals with dual responsibilities, such as caregiving and data collecting, would be identified in the same row heading. If blinding was compromised, authors should report the fact in the table and indicate in the text the potential implications that loss of blinding might have for interpreting the results.

Conclusions

“Blinding” as a concept to reduce bias has been used for more than 200 years [ 34 ], and “double-blind” as a term has been used in clinical trials for 70 years [ 35 ]. Even with the substantial support in the literature for abandoning its use, finding a simple, acceptable replacement seems unlikely. Instead, eliminating the use of adjectives that impart a false specificity to the term would reduce misinterpretations, and recommending that authors report who was blinded to what and how in a standard table would require them to be more specific about which groups and individuals were blinded.

Thomas A. Lang, MA

Principal, Tom Lang Communications and Training International

Adjunct Instructor, Medical Writing and Editing Program, University of Chicago Professional Education

Senior Editor, West China Hospital/Sichuan Medical School, Chengdu, China

Donna F. Stroup, PhD, MSc

Principal, Data for Solutions, Inc.

Co-signers (in alphabetical order):

Matthias Egger, MD, MSc, FFPH : Professor of Epidemiology and Public Health and former Director, Institute of Social and Preventive Medicine, University of Bern, and President, National Research Council, Swiss National Science Foundation. Former co-editor, International Journal of Epidemiology

Forough Farrokhyar, MPhil, PhD : Professor and Research Director, Department of Surgery, Department of Health, Evidence and Impact, McMaster University

Robert Fletcher, MD : Professor Emeritus of Population Medicine, Harvard Medical School; founding Co-Editor, Journal of General Internal Medicine; former Co-Editor-in-Chief, Annals of Internal Medicine; founding member, World Association of Medical Editors (WAME); member, International Advisory Board, The Lancet

Suzanne W. Fletcher, MD : Professor Emerita of Population Medicine, Harvard Medical School; founding Co-Editor, Journal of General Internal Medicine ; former Co-Editor-in-Chief,  Annals of Internal Medicine ; National Academy of Medicine; former member, American Board of Internal Medicine; founding member, US Preventive Services Task Force

R Brian Haynes, OC, MD, PhD, FRCPC : Professor Emeritus of Clinical Epidemiology and Biostatistics; Professor of Medicine, McMaster University; co-founder, Evidence-Based Medicine movement; founder, Health Information Research Unit; founding Editor, ACP Journal Club ; lead developer of the structured abstract

Anne Holbrook, MD, PharmD, MSc, FRCPC : Professor, Department of Medicine, and Director, Division of Clinical Pharmacology & Toxicology, McMaster University; leading Canadian drug policy advisor and research lead for evidence-based therapeutics

Eileen K Hutton, RM, PhD, DSc (HC) : Professor Emerita and former Assistant Dean, Faculty of Health Sciences, and former Director of Midwifery, McMaster University; Professor of Midwifery Science, Vrije University, Amsterdam; and Fellow, Canadian Academy of Health Sciences

Alfonso Iorio, MD, PhD, FRCPC : Professor, Department of Health Research Methods, Evidence and Impact; Bayer Chair for Clinical Epidemiology Research and Bleeding Disorders; Chief, Health Information Research Unit and Hamilton-Niagara Hemophilia Program, McMaster University

Richard L. Kravitz, MD, MSPH : Professor, Internal Medicine; Former Director, Center for Health Services Research in Primary Care, University of California, Davis; former co-Editor-in-Chief, Journal of General Internal Medicine ; Director, UC Center Sacramento, a program providing leadership training in politics and relevant evidence for policymakers

José Florencio F. Lapeña Jr., MA, MD, FPCS : Professor of Otolaryngology; former Vice-Chancellor, University of the Philippines; Editor-in-Chief, Philippine Journal of Otolaryngology Head and Neck Surgery ; Charter President, Philippine Association of Medical Journal Editors; Past President, Asia Pacific Association of Medical Journal Editors (APAME); Secretary and Past Director, World Association of Medical Editors (WAME)

Maria del Carmen Ruiz-Alcocer, MD : Senior Medical Editor, Intersistemas Publishers; Former President, Mexican Association of Biomedical Journal Editors (AMERBAC); Past Director, World Association of Medical Editors (WAME); member, European Association of Science Editors (EASE)

Roberta Scherer, PhD : Senior Scientist, Clinical Trials and Evidence Synthesis, Johns Hopkins Bloomberg School of Public Health; former Associate Director, USA Cochrane Center; Adjunct Assistant Professor, Epidemiology & Public Health, University of Maryland School of Medicine

Christopher H. Schmid, PhD : Professor and Chair of Biostatistics and founding member and former Co-Director of the Center for Evidence Synthesis in Health in the Brown University School of Public Health; founding Co-Editor of Research Synthesis Methods ; helped develop Institute of Medicine national standards for systematic reviews

Thomas A. Trikalinos, MD : Associate Professor of Health Services, Policy, and Practice; Director, Center for Evidence Synthesis in Health, School of Public Health, Brown University

Junmin Zhang, MD, PhD : Professor and Managing Director, Journal of Capital Medical University , Medical Education Management , Journal of Translational Neuroscience , Capital Medical University, Beijing, China

Availability of data and materials

Not applicable

References

1. Devereaux PJ, Manns BJ, Ghali WA, et al. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285:2000–3. https://doi.org/10.1001/jama.285.15.2000
2. Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002;359(9307):696–700. https://doi.org/10.1016/S0140-6736(02)07816-9
3. Haahr MT, Hróbjartsson A. Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors. Clin Trials. 2006;3(4):360–5. https://doi.org/10.1177/1740774506069153
4. Hróbjartsson A, Boutron I. Blinding in randomized clinical trials: imposed impartiality. Clin Pharmacol Ther. 2011;90(5):732–6. https://doi.org/10.1038/clpt.2011.207
5. Schulz KF, Chalmers I, Altman DG. The landscape and lexicon of blinding in randomized trials. Ann Intern Med. 2002;136:254–9.
6. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg. 2012;10(1):28–55. https://doi.org/10.1016/j.ijsu.2011.10.001
7. Miller LE, Stewart ME. The blind leading the blind: use and misuse of blinding in randomized controlled trials. Contemp Clin Trials. 2011;32:240–3. https://doi.org/10.1016/j.cct.2010.11.004
8. Schulz KF, Chalmers I, Altman DG, et al. Allocation concealment: the evolution and adoption of a methodological term. https://www.jameslindlibrary.org/articles/allocation-concealment-evolution-adoption-methodological-term/. https://doi.org/10.1177/0141076818776604
9. Montori VM, Bhandari M, Devereaux PJ, et al. In the dark: the reporting of blinding status in randomized controlled trials. J Clin Epidemiol. 2002;55:787–90. https://doi.org/10.1016/s0895-4356(02)00446-8
10. Viergever RF, Ghersi D. Information on blinding in registered records of clinical trials. Trials. 2012;13:210. https://doi.org/10.1186/1745-6215-13-210
11. Devereaux PJ, Bhandari M, Montori VM, et al. Double blind, you have been voted off the island! Evid Based Ment Health. 2002;5(2):36–7. PMID: 12026889
12. Chan A-W, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 Statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158:200–7. PMCID: PMC5114122
13. Karanicolas PJ, Farrokhyar F, Bhandari M. Practical tips for surgical research: blinding: who, what, when, why, how? Can J Surg. 2010;53(5):345–8. PMCID: PMC2947122
14. Park J, White AR, Stevinson C, Ernst E. Who are we blinding? A systematic review of blinded clinical trials. Perfusion. 2001;14:296–304.
15. Abdulraheem S, Lars BL. The reporting of blinding in orthodontic randomized controlled trials: where do we stand? Eur J Orthod. 2019:54–8. https://doi.org/10.1093/ejo/cjy021
16. Penić A, Begić D, Balajić K. Definitions of blinding in randomised controlled trials of interventions published in high-impact anaesthesiology journals: a methodological study and survey of authors. BMJ Open. 2020;10:e035168. https://doi.org/10.1136/bmjopen-2019-035168
17. Gøtzsche PC. Blinding during data analysis and writing of manuscripts. Control Clin Trials. 1996;17:285–90. https://doi.org/10.1016/0197-2456(95)00263-4
18. Galvez-Olortegui JK, Gonzales-Saldaña J, Garcia-Gomez I, et al. Bias control in clinical trials: masking or blinding. Medwave. 2015;15(11):e6349. [Article in English, Spanish]. https://doi.org/10.5867/medwave.2015.11.6349
19. Indrayan A, Holt MP. Blinding, masking and concealment of allocation. In: Concise encyclopedia of biostatistics for medical professionals. Boca Raton, FL: Taylor & Francis Group, CRC Press; 2016. ISBN-13: 9781482243871.
20. Antunes-Foschini R, Alves M, Silva PJ. Blinding or masking: which is more suitable for eye research? Arq Bras Oftalmol. 2019;82(5):V–VI. https://doi.org/10.5935/0004-2749.20190085
21. Lasagna L. The controlled trial: theory and practice. J Chronic Dis. 1955;1:353–67. https://doi.org/10.1016/0021-9681(55)90090-4
22. Schulz KF, Altman DG, Moher D. Blinding is better than masking. Response to: Morris D, Fraser S, Wormald R. Masking is better than blinding. BMJ. 2007;334:799.
23. Morris D, Fraser S, Wormald R. Masking is better than blinding. BMJ. 2007;334:799. https://doi.org/10.1136/bmj.39175.503299.94
24. Lang T. Masking or blinding? An unscientific survey of mostly medical journal editors on the great debate. Med Gen Med. 2000;2:E25. PMID: 11104471
25. Pandis N. Blinding or masking. Am J Orthod Dentofac Orthop. 2012;141:389–90. https://doi.org/10.1016/j.ajodo.2011.10.019
26. Boutron I, Estellat C, Guittet L, et al. Methods of blinding in reports of randomized controlled trials assessing pharmacologic treatments: a systematic review. PLoS Med. 2006;3(10):e425. https://doi.org/10.1371/journal.pmed.0030425
27. Clifton L, Clifton DA. How to maintain the maximal level of blinding in randomisation for a placebo-controlled drug trial. Contemp Clin Trials Commun. 2019;14:100356. https://doi.org/10.1016/j.conctc.2019.100356. PMID: 31011659; PMCID: PMC6462539
28. Probst P, Zaschke S, Heger P, et al. Evidence-based recommendations for blinding in surgical trials. Langenbeck’s Arch Surg. 2019;404:273–84. https://link.springer.com/article/10.1007/s00423-019-01761-6
29. Park J. Suggesting an alternative to the term “double-blind”. Anesthesiology. 2002;96:1034. https://doi.org/10.1097/00000542-200204000-00044
30. Forder PM, Gebski VJ, Keech AC. Allocation concealment and blinding: when ignorance is bliss. Med J Aust. 2005;182(2):87–9. PMID: 15651970
31. Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Ann Intern Med. 2010;152(11):726–32. https://doi.org/10.7326/0003-4819-152-11-201006010-00232
32. Wan M, Orlu-Gul M, Legay H, Tuleu C. Blinding in pharmacological trials: the devil is in the details. Arch Dis Child. 2013;98(9):656–9. https://doi.org/10.1136/archdischild-2013-304037. PMID: 23898156; PMCID: PMC3833301
33. Sackett DL. Why we don’t test for blindness at the end of our trials. BMJ. 2004;328:1136. https://doi.org/10.1136/bmj.328.7448.1136-a
34. Kaptchuk TJ. Intentional ignorance: a history of blind assessment and placebo controls in medicine. Bull Hist Med. 1998;72:389–433. https://doi.org/10.1353/bhm.1998.0159
35. Greiner T, Gold H, Cattel M, et al. A method for the evaluation of the effects of drugs on cardiac pain in patients with angina on effort. Am J Med. 1950;9:143–55. https://doi.org/10.1016/0002-9343(50)90017-9


The authors received no financial support for the research, authorship, or publication of this article.

Author information

Authors and affiliations.

West China Hospital/Sichuan Medical School Publishing Group, Kirkland, WA, USA

Thomas A. Lang

Data for Solutions, Inc., Decatur, GA, USA

Donna F. Stroup


Contributions

TAL conceived the idea and wrote the initial draft. DFS critically appraised various drafts. Both authors approved the submitted final manuscript.

Corresponding author

Correspondence to Thomas A. Lang .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no conflict of interest.



Cite this article.

Lang, T.A., Stroup, D.F. Who knew? The misleading specificity of “double-blind” and what to do about it. Trials 21 , 697 (2020). https://doi.org/10.1186/s13063-020-04607-5


Received : 10 February 2020

Accepted : 13 July 2020

Published : 05 August 2020

DOI : https://doi.org/10.1186/s13063-020-04607-5

Keywords

  • Random assignment
  • Allocation concealment
  • Randomized trials
  • Surveillance bias
  • Expectation bias
  • Ascertainment bias
  • Trial reporting


Weill Cornell Medicine

Joint Clinical Trials Office

Common Terms You Should Know When Enrolling in a Clinical Trial

Clinical trials, also known as clinical studies or clinical research, are studies that explore whether a medical strategy, treatment, or device is safe and effective for humans. When deciding whether to enroll in a clinical trial, you will likely encounter many terms related to clinical research and what the specific trial entails. Some terms are fairly standard across trials, regardless of the type of trial or what is being evaluated. When evaluating trial options, it’s important to understand what is involved in the clinical trial. This understanding is also required before you consent to enroll in a trial (a process called “informed consent”). To help you navigate the process, we’ve outlined common clinical trial terms that you should know.

Adverse Event: Any undesirable experience associated with a drug or procedure, also sometimes described as a side effect or negative reaction. Adverse events can range from mild to severe. Serious adverse events are those that can cause temporary or permanent disability and may result in hospitalization or death.

Baseline Characteristics: Data collected at the beginning of a clinical study for all participants and for each arm or comparison group. These data include demographics, such as age, race, and gender, and any study-specific measures (e.g. systolic blood pressure, prior antidepressant treatment).

Blinding or Masking:  When those involved in the trial are not aware of the treatment assignments. There can be many different types of blinding. “Single Blind” means that the study participants do not know to which treatment group they have been assigned. “Double Blind” means that both the study participants and the investigators don’t know who has been assigned to each treatment group.

Compassionate Use: A method of providing experimental therapeutics prior to final regulatory agency (FDA) approval for use in humans. This procedure is used with very sick individuals who have no other treatment options available. Often, case-by-case approval must be obtained by the patient’s physician from the regulatory agency for “compassionate use” of an experimental drug or therapy. 

Confidentiality: This refers to the practice of maintaining private information related to clinical trial participants, including their personal identity and all personal medical information. Results from the study will usually be presented in terms of trends or overall findings and will not mention any participant names or reveal any identifying information without obtaining additional consent.

Control Group: The group of participants that receives standard of care treatment. The control group may also consist of healthy volunteers.

Eligibility Criteria: This refers to the factors or restrictions that determine who can participate in the clinical trial. This is different for every trial and can sometimes be referred to as the Inclusion Criteria and Exclusion Criteria.

Experimental Group: The group of participants in a study that receives the experimental treatment or study intervention.

First-In-Human Study: A clinical trial where a medical procedure or medicinal product that has been previously developed and assessed through laboratory model or animal testing is tested on human subjects for the first time.

Food and Drug Administration (FDA): An agency within the U.S. Department of Health and Human Services. The FDA is responsible for protecting the public health by making sure that human and veterinary drugs, vaccines and other biological products, medical devices, the Nation's food supply, cosmetics, dietary supplements, and products that give off radiation are safe and effective.

Informed Consent: When a participant provides informed consent, it means that he or she has learned the key facts about a research study, including the possible risks and benefits, and agrees to take part in it.

Intervention: The treatment, drug or procedure that is being studied in the clinical trial. This term is typically used when compared to a control or standard of care treatment arm. An “Interventional” trial is a term used to describe clinical trials studying a treatment, drug or procedure. This is different from an “Observational” study (see definition below).

Observational Study: In an observational study, investigators assess health outcomes in groups of participants according to a research plan or protocol. Participants may receive diagnostic or other types of interventions as part of their routine medical care, but the investigator does not assign participants to specific interventions or treatments.

Outcome Measure: A planned measurement described in the protocol that is used to determine the effect of interventions on participants in a clinical trial. For observational studies, a measurement or observation is used to describe patterns of diseases or traits, or associations with exposures, risk factors, or treatment.  

Phase: The category that a clinical trial falls into based on what properties of the treatment are being studied in the trial and how many participants are involved. There are typically four phases of a clinical trial. Phase I is the administration of a drug or device to a small group to identify possible side effects and determine the proper dose. Phase II is done to gauge whether the treatment is effective while continuing to evaluate safety. Phase III compares a new drug or device against the current standard of care. Phase III trials have the potential to lead to Food and Drug Administration (FDA) approval. Finally, Phase IV trials are done after FDA approval. Sometimes the FDA will require additional safety information to be collected after approval. Phase IV trials are often referred to as “post-market surveillance,” which looks to identify problems that were not observed or recognized before approval.

Placebo: A substance that does not contain active ingredients and is made to be physically indistinguishable from the actual drug being studied.

Protocol: The written description of the clinical trial.

Principal Investigator (PI): A medical professional who leads the conduct of a clinical trial at a study site. This person is the lead researcher for the project. The phrase is also often used as a synonym for “head of the laboratory” or “research group leader.”

Sponsor: The sponsor is the organization or person who oversees the clinical study and is responsible for analyzing the study data. Often, the sponsor will also provide financial support for the trial.

Subject: Any participant in a study.

Treatment Arm: A group or subgroup of participants in a clinical trial. Each group receives a specific intervention, study drug dose, or sometimes no intervention, according to the study protocol.

Randomization: The process in which study participants are randomly assigned to different treatment groups. This ensures that every participant has an equal chance of being assigned to each treatment or control group, which helps keep the groups comparable.
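To make the idea concrete, here is a minimal sketch of one common approach, permuted-block randomization, in Python; the block size, group labels, and seed are illustrative assumptions, not part of any particular trial protocol.

```python
import random

def block_randomize(n_participants: int, block_size: int = 4, seed: int = 42):
    """Assign participants to 'treatment' or 'control' in randomly permuted
    blocks so that group sizes stay balanced as enrollment proceeds."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)              # random order within each block
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(10))  # e.g., ['control', 'treatment', ...]
```

Blocking simply keeps the groups close to the same size throughout enrollment; simple (unblocked) randomization would omit the block structure entirely.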



Key Concepts of Clinical Trials: A Narrative Review

Craig A. Umscheid

1 Center for Evidence-Based Practice, University of Pennsylvania, Philadelphia, PA

4 Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA

David J. Margolis

2 Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, PA

5 Department of Dermatology, University of Pennsylvania, Philadelphia, PA

Craig E. Grossman

6 Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA

The recent focus of federal funding on comparative effectiveness research underscores the importance of clinical trials in the practice of evidence-based medicine and health care reform. The impact of clinical trials not only extends to the individual patient by establishing a broader selection of effective therapies, but also to society as a whole by enhancing the value of health care provided. However, clinical trials also have the potential to pose unknown risks to their participants, and biased knowledge extracted from flawed clinical trials may lead to the inadvertent harm of patients. Although conducting a well-designed clinical trial may appear straightforward, it is founded on rigorous methodology and oversight governed by key ethical principles. In this review, we provide an overview of the ethical foundations of trial design, trial oversight, and the process of obtaining approval of a therapeutic, from its pre-clinical phase to post-marketing surveillance. This narrative review is based on a course in clinical trials developed by one of the authors (DJM), and is supplemented by a PubMed search predating January 2011 using the keywords “randomized controlled trial,” “patient/clinical research,” “ethics,” “phase IV,” “data and safety monitoring board,” and “surrogate endpoint.” With an understanding of the key principles in designing and implementing clinical trials, health care providers can partner with the pharmaceutical industry and regulatory bodies to effectively compare medical therapies and thereby meet one of the essential goals of health care reform.

Introduction

The explosion in health care costs in the United States has recently spurred large federal investments in health care to identify the medical treatments of highest value. Specifically, $1.1 billion has been appropriated by the American Recovery and Reinvestment Act of 2009 for “comparative effectiveness” research to evaluate “…clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions.” 1 Although numerous study designs can address these goals, clinical trials (and specifically randomized controlled trials [RCTs]) remain the benchmark for comparing disease interventions. However, the implementation of clinical trials involves a rigorous approach founded on scientific, statistical, ethical, and legal considerations. Thus, it is crucial for health care providers to understand the precepts on which well-performed clinical trials rest in order to maintain a partnership with patients and industry in pursuit of the safest, most effective, and most efficient therapies. We present key concepts as well as the dilemmas encountered in the successful design and execution of a clinical trial.

Materials and Methods

This narrative review is based on a course in clinical trials developed by one of the authors (DJM), and is supplemented by a PubMed search predating January 2011 using the keywords “randomized controlled trial,” “patient/clinical research,” “ethics,” “phase IV,” “data and safety monitoring board,” and “surrogate endpoint.”

The Ethical Foundation of Clinical Trials

Although the first reported modern clinical trial was described in James Lind’s “A Treatise of the Scurvy” in 1753, it was not until the mid-20th century that ethical considerations in human research were formally addressed. In response to the criminal medical experimentation on human subjects by the Nazis during World War II, 10 basic principles of human research were formulated as the Nuremberg Code of 1949. 2 This code was later extended globally as The Declaration of Helsinki and adopted by the World Medical Association in 1964. 3 Notably, it advanced the ethical principle of “clinical equipoise,” a phrase later coined in 1987 to describe the expert medical community’s uncertainty regarding the comparative efficacy between treatments studied in a clinical trial. 4 This ethical precept guides the clinical investigator in executing comparative trials without violating the Hippocratic Oath.

Further advancement of the principles of respect for persons, beneficence (to act in the best interest of the patient), and justice emerged in the 1979 Belmont Report, 5 which was commissioned by the US government in reaction to the Tuskegee syphilis experiment. 6 This report applied these concepts to the processes of informed consent, assessment of risks and benefits, and equitable selection of subjects for research. Importantly, the boundaries between clinical practice and research were clarified, distinguishing activities between “physicians and their patients” from those of “investigators and their subjects.” Here, research was clearly defined as “an activity designed to test a hypothesis…to develop or contribute to generalizable knowledge.” 5

It was the Belmont Report that finally explicated the principle of informed consent proposed 30 years prior in the Nuremberg Code. Informed consent, now a mandatory component of clinical trials that must be signed by all study participants (with few exceptions), must clearly state: 7

  • This is a research study (including an explanation of the purpose and duration; and the risks, benefits, and alternatives of the intervention)
  • Participation is voluntary
  • The extent to which confidentiality will be maintained
  • Contact information for questions or concerns

Interestingly, this elemental safeguard in patient research is not without flaws. In reality, the investigator has limited information regarding the risks and benefits of an intervention because this is paradoxically the objective of performing the study. Challenges still remain with exercising informed consent, as illustrated by study participant comprehension deficiencies and self-reported dissatisfaction with the process. 8 This has prompted explorations to improve participant understanding of consent documents and procedures. 9

In 1991, the ethical principles from these seminal works culminated in Title 45, Part 46 of the Code of Federal Regulations, titled “Protection of Human Subjects.” 7 Referred to as the “Common Rule,” it regulates all federally supported or conducted research, with additional protections for prisoners, pregnant women, children, neonates, and fetuses.

Overview of Trial Design

Clinical trials, in their purest form, are designed to observe outcomes of human subjects under “experimental” conditions controlled by the scientist. This contrasts with noninterventional study designs (ie, cohort and case-control studies), in which the investigator measures but does not influence the exposure of interest. A clinical trial design is often favored because it permits randomization of the intervention, thereby effectively removing the selection bias that results from the imbalance of unknown/immeasurable confounders. This inherent strength gives the RCT its capacity to demonstrate causality. Randomized clinical trials, however, still remain subject to limitations such as misclassification or information bias of the outcome or exposure, co-interventions (where one arm receives an additional intervention more frequently than another), and contamination (where a proportion of subjects assigned to the control arm receive the intervention outside of the study).

Execution of a robust clinical trial requires the selection of an appropriate study population. Despite all participants voluntarily consenting for the intervention, the enrolled cohort may potentially differ from the general population from which they were drawn. This type of selection bias, called “volunteer bias,” may arise from such factors as study eligibility criteria, inherent subject attributes (eg, geographic distance from the study site, health status, attitudes and beliefs, education, and socioeconomic status), or subjective exclusion by the investigator because of poor anticipated enrollee compliance or overall prognosis. 10 Although RCTs seek to achieve internal validity by enrolling a relatively homogeneous population according to predefined characteristics, narrow inclusion and exclusion criteria may limit their external validity (or “generalizability”) to a broader population of patients with highly prevalent comorbidities that may not be included in the sample cohort. This theme underscores why an experimental treatment’s “efficacy” (ie, a measure of the success of an intervention in an artificial setting) may not translate into its “effectiveness” (ie, a measure of its value applied in the “real world”). Attempts to improve patient recruitment and generalizability using free medical care, financial payments, 11 and improved communication techniques 12 are considered ethical as long as the incentives are not unduly coercive. 13

In order to assess the efficacy of an intervention within the context of a clinical trial, there must be deliberate control of all known confounding variables (including comorbidities), thereby requiring a homogeneous group of participants. However, the evidence provided by a well-designed and executed clinical trial will have no value if it cannot be applied to the general population. Thus, designers of clinical trials must use subjective judgment (including clinical, epidemiological, and biostatistical reasoning) to determine at the outset how much trade-off they are willing to make between the internal validity and generalizability of a clinical trial.

A “surrogate endpoint” is often chosen in place of a primary endpoint to enhance study efficiency (ie, less cost and time, improved measurability, and smaller sample size requirement). Ideally, the surrogate should completely capture the effect of the intervention on the clinical endpoint, as formally proposed by Prentice. 14 Blood pressure is a well-established surrogate for cardiovascular-related mortality because its normalization has been associated with clinically beneficial outcomes, such as fewer strokes and fewer renal and cardiac complications. 15 However, one must use caution when relying on surrogates, as they may be erroneously implicated in the direct causal pathway between intervention and true outcome. 16 , 17 A frequently described, clinically logical, but flawed use of a surrogate endpoint was the use of premature ventricular contractions (PVCs) to assess whether antiarrhythmic drugs reduced the incidence of sudden death after a myocardial infarction in the Cardiac Arrhythmia Suppression Trial (CAST). Despite evidence of the association between PVCs and early arrhythmic mortality, pharmacologic suppression of PVCs unexpectedly increased the very event (mortality) that it was supposed to remedy. 18 As surrogates are commonly employed in phase I–II trials, it is likely that a substantial proportion of clinically effective therapeutics are discarded because of false-negative results using such endpoints. This is exemplified in the trial by the International Chronic Granulomatous Disease (CGD) study group, in which the surrogate markers of superoxide production and bactericidal efficiency were initially applied to assess the efficacy of interferon-γ for treatment of CGD. 19 For reasons outside the scope of this review, the authors decided a priori to extend the study duration in order to adequately detect the clinical endpoint of interest (recurrent serious infections) instead of the originally proposed surrogate markers (superoxide production and bactericidal efficiency). Treatment with interferon-γ was remarkably successful, substantially reducing the rate of recurrent serious infections. However, there was no observable effect on superoxide production and bactericidal activity. Had the primary endpoint not been changed, the originally proposed surrogate biomarkers would have masked the clinically relevant efficacy of this treatment. These examples illustrate the importance of validating surrogates as reliable predictors of clinical endpoints using meta-analyses and/or natural history studies of large population cohorts, in conjunction with ensuring biological plausibility. 20
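Prentice’s operational criterion is often summarized as follows; the notation here, with Z for treatment, S for the surrogate, and T for the true clinical endpoint, is our shorthand rather than the review’s.

```latex
% Prerequisites: treatment affects the surrogate and the true endpoint,
% and the surrogate is prognostic for the true endpoint.
f(S \mid Z) \neq f(S), \qquad f(T \mid Z) \neq f(T), \qquad f(T \mid S) \neq f(T)
% Key condition: conditional on the surrogate, the true endpoint carries
% no further information about treatment assignment.
f(T \mid S, Z) = f(T \mid S)
```

Failures like the CAST example arise when the last condition does not hold: the treatment moves the surrogate without producing the expected effect on the true endpoint.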

For a trial to adequately address the “primary question(s)” of interest, a sufficient sample size is required to have enough power to detect a potential statistical difference. Power is the probability of finding a statistically significant difference between the outcomes of 2 interventions when a clinically meaningful difference truly exists; trials are traditionally designed to have at least 80% power. The outcomes or endpoints of the investigation, whether objective (eg, death) or subjective (eg, quality of life), must always be reliable and meaningful measures. Statistical analyses commonly used to analyze outcomes include logistic regression for dichotomous endpoints (eg, event occurred/did not occur), Poisson regression for rates (eg, number of events per person-years), Cox regression for time-to-events (eg, survival analysis), and linear regression for continuous measures (eg, weight).
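As a rough illustration of the sample-size reasoning described above, the sketch below applies the standard normal-approximation formula for comparing two proportions; the response rates, alpha, and power shown are illustrative assumptions rather than values from any trial discussed here.

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to compare two independent
    proportions with a two-sided z-test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detect an improvement in response rate from 30% to 40%
print(n_per_arm(0.30, 0.40))  # 354 participants per arm, roughly 700 total
```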

Overview of Drug Development

The general road to drug development and approval has been defined and regulated by the US Food and Drug Administration (FDA) for decades. Safety has historically been its primary focus, followed by efficacy. If a drug appears promising in pre-clinical studies, a drug sponsor or sponsor-investigator can submit an investigational new drug (IND) application. This detailed proposal contains investigator qualifications and all pre-clinical drug information and data, and a request for exemption from the federal statutes that prohibit interstate transport of unapproved drugs. After approval, the drug is studied (phase I–III trials, described below) and if demonstrated safe and efficacious in the intended population, the drug sponsor can then submit a New Drug Application (NDA) to the FDA. After an extensive review by the FDA that often involves a recommendation by an external committee, the FDA determines whether the therapeutic can be granted an indication and marketed. After final approval, the drug can continue to be studied in phase IV trials, in which safety and effectiveness for the indicated population are monitored. To facilitate evaluation and endorsement of foreign drug data, efforts have been made to harmonize this approval process across the United States, Europe, and Japan through the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). 21

Pre-Clinical, Phase I, and Phase II Trials

Pre-clinical investigations include animal studies and evaluations of drug production and purity. Animal studies explore: 1) the drug’s safety at doses equivalent to approximated human exposures, 2) pharmacodynamics (ie, mechanisms of action and the relationship between drug levels and clinical response), and 3) pharmacokinetics (ie, drug absorption, distribution, metabolism, excretion, and potential drug–drug interactions). These data must be submitted with the IND application if the drug is to be studied further in human subjects.

Because the FDA emphasizes “safety first,” it is logical that the first of the 4 stages (known as “phases”) of clinical testing is designed to establish the safety and maximum tolerated dose (MTD) of a drug, its human pharmacokinetics and pharmacodynamics, and its drug–drug interactions. These phase I trials (synonymous with “dose-escalation” or “human pharmacology” studies) are the first instance in which the new investigational agent is studied in humans, and they are usually performed open label in a small number of “healthy” and/or “diseased” volunteers. The MTD, the highest dose that can be given without producing a dose-limiting toxicity, can be determined using various statistical designs. Dose escalation follows strict criteria, and subjects are closely monitored for evidence of drug toxicity over a sufficient period. There is a risk that subjects who volunteer for phase I studies (or the physicians who enroll them) will misinterpret the trial’s objective as therapeutic. For example, despite strong evidence that objective response rates in phase I trials of chemotherapeutic drugs are exceedingly low (as low as 2.5%), 22 patients may still have a “therapeutic misconception” of potentially receiving a direct medical benefit from trial participation. 23 Improvements to the process of informed consent 9 could help dispel some of these misconceptions while still maintaining adequate enrollment numbers.
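The various statistical designs used to determine the MTD are not detailed above; for illustration only, the sketch below encodes the decision rule of the conventional “3+3” dose-escalation design, one commonly used rule-based approach (the choice of this particular design is an assumption of the example, not something specified in the text):

def three_plus_three(n_dlt_first3, n_dlt_second3=None):
    """Decision rule of the conventional 3+3 dose-escalation design.
    Returns 'escalate' (move to the next dose level), 'expand' (treat 3 more
    subjects at the current dose), or 'stop' (toxicity rate too high; the MTD
    is taken as the next-lower dose level)."""
    if n_dlt_first3 == 0:
        return "escalate"                # 0/3 dose-limiting toxicities
    if n_dlt_first3 == 1:
        if n_dlt_second3 is None:
            return "expand"              # 1/3: enroll 3 additional subjects
        return "escalate" if n_dlt_second3 == 0 else "stop"   # 1/6 vs >=2/6
    return "stop"                        # >=2/3 dose-limiting toxicities

print(three_plus_three(1, 0))  # 'escalate': 1 of 6 subjects had a dose-limiting toxicity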

Phase II trials, also referred to as “therapeutic exploratory” trials, are usually larger than phase I studies but are still conducted in a relatively small number of volunteers, all of whom have the disease of interest. They are designed to test safety, pharmacokinetics, and pharmacodynamics, but may also be designed to answer questions essential to the planning of phase III trials, including determination of optimal doses, dose frequencies, administration routes, and endpoints. In addition, they may offer preliminary evidence of drug efficacy by: 1) comparing the study drug with “historical controls” from published case series or trials that established the efficacy of standard therapies, 2) examining different dosing arms within the trial, or 3) randomizing subjects to different arms (such as a control arm). However, the small number of participants and the primary focus on safety usually limit a phase II trial’s power to establish efficacy, thereby supporting the necessity of a subsequent phase III trial.

At the conclusion of the initial trial phases, a meeting between the sponsor(s), investigator(s), and the FDA may occur to review the preliminary data and the IND, and to ascertain the viability of progressing to a phase III trial (including plans for trial design, size, outcomes, safety concerns, analyses, data collection, and case report forms). Manufacturing concerns may also be discussed at this time.

Phase III Trials

Based on prior studies demonstrating drug safety and potential efficacy, a phase III trial (also referred to as a “therapeutic confirmatory,” “comparative efficacy,” or “pivotal” trial) may be pursued. This stage of drug assessment is conducted in a larger and often more diverse target population in order to demonstrate and/or confirm efficacy and to identify and estimate the incidence of common adverse reactions. However, because phase III trials usually enroll only about 300 to 3000 subjects, they have the statistical power to detect only adverse events that occur at a rate of roughly 1 in 100 persons or more frequently (based on Hanley’s “Rule of 3”). 24 This highlights the significance of phase IV trials in identifying less-common adverse drug reactions, and it is one reason why the FDA usually requires more than one phase III trial to establish drug safety and efficacy.
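Hanley’s “Rule of 3” can be stated simply: if no events of a given type are observed among n subjects, the upper limit of the 95% confidence interval for the true event rate is approximately 3/n. A minimal illustration of what that implies for typical phase III sample sizes:

# If zero events are observed among n subjects, solving (1 - p)**n = 0.05 for p
# gives an approximate 95% upper confidence bound of p ≈ 3/n (Hanley's rule of 3).
for n in (300, 3000):
    print(f"{n} subjects, no events observed: the true rate could still be ~1 in {n // 3}")

In other words, even a 3000-subject trial in which no cases of a particular adverse event are seen cannot rule out a rate of about 1 in 1000.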

The most common type of phase III trial, the comparative efficacy trial (often referred to as a “superiority” or “placebo-controlled” trial), compares the intervention of interest with either a standard therapy or a placebo. Even in the best-designed placebo-controlled studies, it is not uncommon to observe a placebo effect, in which subjects exposed to the inert substance exhibit an unexpected improvement in outcomes when compared with historical controls. While some attribute the placebo effect to a general improvement in care imparted to subjects in a trial, others argue that those who volunteer for a study are acutely symptomatic and will naturally improve or “regress to the mean” as the trial progresses. This further highlights the uniqueness of study participants and why a trial may lack external validity. The use of placebos, including surgical placebos (“sham procedures”), 25 - 27 has ignited some debate; the revised Declaration of Helsinki supports comparative efficacy trials by discouraging the use of drug placebos in favor of “best current” treatment controls. 28 , 29

Another type of phase III trial, the equivalency trial (or “positive-control study”), is designed to ascertain whether the experimental treatment is similar to the chosen comparator within some margin prespecified by the investigator. Hence, a placebo is almost never included in this study design. As long as the differences between the intervention and the comparator remain within the prespecified margin, the intervention will be deemed equivalent to the comparator. 30 Although the prespecified margin is often based on external evidence, statistical foundations, and clinical experience, there remains little guidance for setting acceptable margins. A variant of the equivalency trial, the noninferiority study, is conducted with the goal of excluding the possibility that the experimental intervention is less effective than the standard treatment by some prespecified magnitude. One must be cautious when interpreting the results of all types of equivalency trials because they are often incorrectly designed and analyzed as if they were comparative efficacy studies. Such flaws can result in a bias towards the null, which would translate into a false-negative result in a comparative efficacy study, but a false-positive result in an equivalency trial. Of note, the noninferiority trial is more susceptible to false-positive results than other study designs. 31
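To make the role of the prespecified margin concrete, the sketch below assesses noninferiority on a failure-rate outcome by comparing the upper bound of a one-sided 95% confidence interval for the difference in failure rates against the margin. All of the numbers (event counts, sample sizes, margin) are hypothetical and serve only to illustrate the logic:

from math import sqrt
from scipy.stats import norm

def noninferior(fail_new, n_new, fail_std, n_std, margin, alpha=0.05):
    """Declare the new treatment noninferior if the upper bound of the one-sided
    95% CI for (failure rate new - failure rate standard) lies below the
    prespecified margin. Rates and margin are expressed as proportions."""
    p_new, p_std = fail_new / n_new, fail_std / n_std
    se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    upper_bound = (p_new - p_std) + norm.ppf(1 - alpha) * se
    return upper_bound < margin

# Hypothetical trial: 11% vs 10% failure rates, margin of 5 percentage points
print(noninferior(55, 500, 50, 500, margin=0.05))  # True: noninferiority shown

Note that the conclusion depends entirely on the margin chosen; with a stricter margin of 2 percentage points, the same data would not demonstrate noninferiority.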

A hallmark of the phase III trial design is the balance in treatment allocation for comparison of treatment efficacy. Implemented through randomization, this modern clinical trial practice attempts to eliminate imbalance of confounders and/or any systematic differences (or biases) between treatment groups. The statistical tool of randomization, first introduced into clinical trials by Sir Austin Bradford Hill, 32 was born out of the necessity (and ethical justification) of rationing limited supplies of streptomycin in a British trial of pulmonary tuberculosis. 33 The most basic randomization model, simple randomization, randomly allocates each subject to a trial arm regardless of those already assigned (ie, a “coin flip” for each subject). Although easy to perform, major imbalances in treatment assignments or distribution of covariates can ensue, making this strategy less than ideal. To improve on this method, a constraint can be placed on randomization that forces the number of subjects randomly assigned per arm to be equal and balanced after a specified block size (“block randomization”). For example, in a trial with 2 arms, a block size of 4 subjects would be designated as 2 positions in arm A and 2 positions in arm B. Even though the positions would be randomly assigned within the block of 4 subjects, it would be guaranteed that, after randomization of 4 subjects, 2 subjects would be in arm A and 2 subjects would be in arm B ( Table 1 ). The main drawback of applying a fixed-block allocation is that small block sizes can allow investigators to predict the treatment of the next patient, resulting in “unblinding.” For example, if a trial has a block size of 2, and the first subject in the block was randomized to treatment “A,” then the investigator will know that the next subject will be randomized to “the other” treatment. Variable block sizes can help prevent this unblinding (eg, a block size of 4 followed by a block size of 8 followed by a block size of 6).

Permutations of a 4-Block Randomization Scheme in a 2-Arm Study with 24 Subjects
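A minimal sketch of permuted-block randomization with variable block sizes, as described above; the block sizes, arm labels, and random seed are arbitrary choices made for illustration:

import random

def blocked_allocation(n_subjects, block_sizes=(4, 8, 6), arms=("A", "B"), seed=2024):
    """Permuted-block randomization for a 2-arm trial. Block sizes are varied
    so that the next assignment is harder to predict; within each block the
    arms appear an equal number of times."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        size = rng.choice(block_sizes)
        block = list(arms) * (size // len(arms))   # eg, size 4 -> [A, A, B, B]
        rng.shuffle(block)                         # random order within the block
        allocation.extend(block)
    return allocation[:n_subjects]

print(blocked_allocation(12))  # allocation sequence for the first 12 subjects

Because the final block may be truncated, exact balance is guaranteed only at block boundaries, which mirrors the behavior of blocked designs in practice.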

Another feature of phase III trial design is stratification, which is commonly employed in combination with randomization to further balance study arms with respect to prespecified characteristics (rather than simply the number of subjects, as with blocking). Stratification facilitates analysis by ensuring that specific prognostic factors of presumed clinical importance are properly balanced across the arms of a clinical trial. However, when the sample size is relatively small and block randomization is applied within many strata, the originally intended balance may be lost, which supports the use of alternative techniques such as minimization or dynamic allocation, designed to reduce imbalances across multiple strata and study arms. 34
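Minimization (dynamic allocation) is mentioned above without detail; the following is a simplified sketch of the idea behind such approaches, in which each new patient is assigned to the arm that would minimize the total imbalance across that patient’s stratification factors, with a random element retained so that assignments stay unpredictable. The data structure, factor levels, and assignment probability are assumptions made for illustration rather than a prescribed implementation:

import random

def minimization_assign(new_patient, counts, factors, arms=("A", "B"), p=0.8, rng=random):
    """Assign a new patient by minimization: for each candidate arm, compute the
    imbalance that would result across the patient's factor levels if the patient
    were placed in that arm, then choose the least-imbalanced arm with probability p.
    counts[arm][factor][level] holds the number of patients already assigned."""
    imbalance = {}
    for arm in arms:
        total = 0
        for factor in factors:
            level = new_patient[factor]
            hypothetical = {a: counts[a][factor][level] + (1 if a == arm else 0) for a in arms}
            total += max(hypothetical.values()) - min(hypothetical.values())
        imbalance[arm] = total
    best = min(imbalance, key=imbalance.get)
    others = [a for a in arms if a != best]
    return best if rng.random() < p else rng.choice(others)

# Illustrative use with two binary stratification factors and no patients enrolled yet
factors = ("sex", "smoker")
counts = {arm: {"sex": {"M": 0, "F": 0}, "smoker": {"yes": 0, "no": 0}} for arm in ("A", "B")}
print(minimization_assign({"sex": "F", "smoker": "no"}, counts, factors))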

Often, the phase III trial design dictates that the interventions be “blinded” (or masked) in an effort to minimize assessment bias of subjective outcomes. Specific blinding strategies to curtail this “information bias” include “single blinding” (subject only), “double blinding” (both subject and investigator), and “triple blinding” (subject, investigator, and data analyst). Unfortunately, not all trials can be blinded (eg, when the method of drug delivery cannot be masked), and the development of well-recognized drug toxicities may lead to inadvertent unmasking and raise ethical and safety issues. When appropriate, additional strategies can be applied to enhance study efficiency, such as having each subject serve as his/her own control (crossover study) or evaluating more than one treatment simultaneously (factorial design).

The most common approach to analyzing phase III trials is the intention-to-treat analysis, in which subjects are assessed according to the intervention arm to which they were randomized, regardless of the treatment they actually received. This is commonly known as the “analyzed as randomized” rule. A complementary or secondary analysis is the “as-treated” or “per-protocol” analysis, in which subjects are evaluated according to the treatment they actually received, regardless of the arm to which they were randomized. Intention-to-treat analyses are preferable for the primary analysis of RCTs, 35 as they avoid selection bias by preserving randomization; any difference in outcomes can therefore be attributed to the treatment assignment rather than to confounders. In contrast, an “as-treated” or “per-protocol” approach may forfeit the benefit of randomization, because it estimates the effect of the treatment actually received; the study thereby becomes similar to an interventional cohort study with the potential for treatment selection bias. If adherence in the treatment arm is poor and contamination in the control group is high, an intention-to-treat analysis may fail to show a difference in outcomes, in contrast to a per-protocol analysis, which takes these protocol violations into account.
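The distinction between these analysis populations can be made concrete with a toy dataset, in which “assigned” is the randomized arm, “received” is the treatment actually taken, and the outcomes are fabricated solely to show how the analyses are constructed:

import pandas as pd

# Hypothetical trial data: one row per subject
df = pd.DataFrame({
    "assigned": ["drug", "drug", "drug", "drug", "control", "control", "control", "control"],
    "received": ["drug", "drug", "control", "drug", "control", "control", "drug", "control"],
    "event":    [0, 0, 1, 0, 1, 1, 0, 1],
})

# Intention-to-treat: group by the arm to which subjects were randomized
itt = df.groupby("assigned")["event"].mean()

# Per-protocol: restrict to subjects who received the treatment they were assigned
pp = df[df["assigned"] == df["received"]].groupby("assigned")["event"].mean()

# As-treated: group by the treatment actually received
as_treated = df.groupby("received")["event"].mean()

print(itt, pp, as_treated, sep="\n\n")

With real data, the three analyses can give meaningfully different estimates whenever non-adherence or crossover between arms is common.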

Given the vast combination of strategies applicable to the design of a phase III study, the Consolidated Standards of Reporting Trials (CONSORT) guideline was established to improve the quality of trial reporting and to assist with evaluating the conduct and validity of trials and their results. 36 Using the flow diagram ( Figure 1 ) and the 22-item checklist ( Table 2 ), 37 readers can readily identify the stages at which subjects leave a study (eg, found to be ineligible, lost to follow-up, unable to be evaluated for the primary endpoint). Because exclusion of such missing data can reduce study power and introduce bias, investigators should strive to enroll only eligible patients and keep them on-study, and should report attrition transparently as the CONSORT checklist directs.


CONSORT 2010 flow diagram.

Reproduced with permission from BMJ . 37

CONSORT 2010 Checklist of Information to Include When Reporting a Randomized Trial

Phase IV Trials

Once a drug is approved, the FDA may require that a sponsor conduct a phase IV trial as a stipulation of approval, although the literature suggests that fewer than half of such studies are actually completed or even initiated by sponsors. 38 Phase IV trials, also referred to as “therapeutic use” or “post-marketing” studies, are observational studies of FDA-approved drugs designed to: 1) identify less common adverse reactions, and 2) evaluate cost and/or drug effectiveness in diseases, populations, or doses similar to or markedly different from the original study population. The limitations of pre-marketing (eg, phase III) studies are underscored by the observation that roughly 20% of drugs acquire new black box warnings after marketing and approximately 4% of drugs are ultimately withdrawn for safety reasons. 39 , 40 As described by one pharmacoepidemiologist, “this reflects a deliberate societal decision to balance delays in access to new drugs with delays in information about rare adverse reactions.” 41

Over the past decade, there has been a steady rise in voluntarily and spontaneously reported serious adverse drug reactions submitted to the FDA’s MedWatch program, from 150 000 in 2000 to 370 000 in 2009. 42 Reports are submitted directly by physicians and consumers, or indirectly via drug manufacturers (the most common route). Weaknesses of this post-marketing surveillance system are illustrated by recent failures to quickly detect serious cardiovascular events resulting from use of the anti-inflammatory medication Vioxx® and the prescription diet drug Meridia®. It was only after the European SCOUT (Sibutramine Cardiovascular OUTcomes Trial) study, prompted by anecdotal case reports concerning cardiovascular safety, that the FDA withdrew Meridia® from the market in late 2010. 43 The most common criticisms of the FDA’s post-marketing surveillance are: 1) the reliance on voluntary reporting of adverse events, which makes adverse event rates difficult to calculate because data on total events are incomplete and the true extent of exposure is unreliable; 2) the reliance on drug manufacturers to collect, evaluate, and report drug safety data that may threaten their own financial interests; and 3) the dependence on a single government body both to approve a drug and then to actively seek evidence that might lead to its withdrawal. 38 , 41 Proposed solutions include the establishment of a national health data network to oversee post-marketing surveillance independent of the FDA-approval process, 44 preplanned meta-analyses of series of related trials to assess less-common adverse events, 45 and large-scale simple RCTs with few eligibility and treatment criteria (ie, Peto studies). 46

Clinical Trial Oversight

Historic abuses and modern-day tragedies highlight the importance of Institutional Review Boards (IRBs) and Data and Safety Monitoring Boards (DSMBs) in ensuring that human research conforms to local and national standards of safety and ethics. 47 , 48 Under Title 45 Part 46 of the Code of Federal Regulations (CFR), administered by the Department of Health and Human Services, IRBs are charged with protecting the rights and welfare of human subjects involved in research conducted or supported by any federal department or agency. 7 To ensure compliance with the strict and detailed guidelines of the CFR, members of IRBs (one of whom must be a non-scientist, and one of whom must be independent of the board’s home institution) are authorized under the “Common Rule” to approve, require modification to, or reject a research activity. Depending on the perceived risk of the study, IRBs apply several levels of review, ranging from exemption for “minimal risk” studies (defined by the “Common Rule” as studies whose risks are no greater than those encountered in daily life or during routine clinical examinations or tests) to lengthier and more involved full-board review for higher-risk studies. General criteria for IRB approval include: 1) risks to subjects are minimized and are reasonable in relation to benefits; 2) selection of subjects is equitable; 3) informed consent is sought; 4) sufficient provisions for data monitoring exist to maintain subjects’ safety; 5) adequate mechanisms are in place to ensure subject confidentiality; and 6) the rights and welfare of vulnerable populations are protected. 7

Data and Safety Monitoring Boards, also referred to as “data safety committees” or “data monitoring committees,” are often required by IRBs for study approval and are charged with: 1) safeguarding the interests of study subjects; 2) preserving the integrity and credibility of the trial so that future patients may be treated optimally; and 3) ensuring that definitive and reliable trial results are made available to the medical community in a timely fashion. 49 Specific responsibilities include monitoring data quality, study conduct (including recruitment rates, retention rates, and treatment compliance), drug safety, and drug efficacy. Data and Safety Monitoring Boards are usually organized by the trial sponsor and principal investigator, and they often comprise biostatisticians, ethicists, and physicians from relevant specialties, among others. Actions resulting from DSMB review include: 1) extension of recruitment strategies if the study is not meeting enrollment goals; 2) changes in study entry criteria, procedures, treatments, or study design; and 3) early closure of the study because of safety issues (external or internal), slow recruitment, poor compliance with the study protocol, or clinically significant differences in drug efficacy or toxicity between trial arms. The importance of the DSMB’s charge, and its relevance outside the scientific community, is underscored by egregious breaches of confidentiality in which DSMB members leaked confidential drug information to Wall Street firms for personal profit. 48 Hence, members of DSMBs ideally should be free of significant conflicts of interest, and they should be the only individuals to whom the data analysis center provides real-time results of treatment efficacy and safety.

The complexity and expense of monitoring human research has prompted the establishment of Contract Research Organizations (CROs) to oversee clinical trials. They are commonly commercial or academic organizations hired by the study sponsor “to perform one or more of a sponsor’s trial-related duties and functions,” such as organizing and managing a DSMB, or managing and auditing trial data to maintain data quality. 50

To offer patients the most effective and safest therapies possible, it is important to understand the key concepts involved in performing clinical trials. Mass media attention to safety-based drug withdrawals (approximately 1.5 drugs per year since 1993 51 ) underscores this point. Understanding the ethical precepts and regulations behind trial designs may also help key stakeholders respond to future research dilemmas at home and abroad. Moreover, well-designed and well-executed clinical trials can contribute significantly to the national effort to improve the effectiveness and efficiency of health care in the United States. Through rigorous practices applied to novel drug development and approval, physicians and patients can maintain confidence in the therapies prescribed.

Take-Home Points

  • To ensure the safety of subjects who volunteer for clinical trials and to preserve the integrity and credibility of the data reported, numerous regulatory bodies, including IRBs and DSMBs operating under the auspices of the federal government, are involved in all studies conducted in the United States.
  • The rigorous methodology of a clinical trial, most notably the investigator’s controlled and randomized assignment of interventions to human volunteers, makes this epidemiologic study design one of the most powerful approaches for demonstrating causal associations in the practice of evidence-based medicine.
  • The internal validity that results from narrowly selective enrollment criteria and the artificial setting of a clinical trial must be balanced against the intent of translating the study findings to the “real world” of clinical practice (known as generalizability or external validity).
  • Enrollment and treatment allocation techniques, selected endpoints, methods of comparison, and statistical analyses must be carefully chosen in order to plausibly achieve the intended goals of the study.
  • Modern clinical trials are founded on numerous and continually evolving ethical principles and practices that guide the investigator in performing human research without violating the Hippocratic Oath.
  • Emphasizing safety first, the most common route for studying a new therapeutic proceeds from establishing the maximum tolerated dose in humans (phase I), to pharmacodynamic and pharmacokinetic studies and exploration of therapeutic benefit (phase II), to comparing its efficacy with that of an established therapeutic or control in a larger population of volunteers (phase III), and ultimately to post-marketing evaluation of adverse reactions and effectiveness when the drug is administered to the general population (phase IV).

Acknowledgments

We gratefully acknowledge Rosemarie Mick, MS, for reviewing the statistical content of this manuscript. This work was supported in part by the National Institutes of Health (T32-HP-010026 [CAU] and T32-CA-009677 [CEG]). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

Conflict of Interest Statement

Craig A. Umscheid, MD, MSCE, David J. Margolis, MD, PhD, MSCE, and Craig E. Grossman, MD, PhD disclose no conflicts of interest.


Clinical trial basics: intervention models in clinical trials

What are intervention models in clinical trials?

In clinical trials, intervention model refers to the general structure used for dividing study participants into groups to compare outcomes. These groups are also known as interventional or treatment arms.

What are the different types of intervention models in clinical trials?

Intervention models generally fall under four types: single-group assignments, parallel assignments, cross-over assignments, and factorial assignments.[ 1 ]

The model that is most appropriate for a trial depends on several factors, such as:

  • The medical condition being tested
  • The research goals of the trial
  • The availability of eligible participants

Single group assignment

In single-group assignment, participants are not divided into groups at all. Instead, all participants are assigned to a single treatment arm and receive the same treatment, therapy, or drug, with the same route of administration, dosage, and frequency.

An example of a single-group study is a phase 4 clinical study observing the long-term effects of a newly approved drug in all participants enrolled in the study.

Parallel assignment

Parallel assignment is the most common type of intervention model used in clinical research, wherein trial participants are divided into two or more groups, each receiving a different medical intervention throughout the duration of the study.[ 2 ] Participants are given one type of treatment, remaining in the same treatment arm for the entire study, so such studies are also known as non-crossover studies.

An example of a clinical study that uses parallel assignment is a phase III clinical trial comparing the investigational product (drug X) against the standard treatment (drug A) for the condition:

  • Group 1 (experimental treatment arm) receives drug X
  • Group 2 (standard treatment control arm) receives drug A

Different dosages of the same drug can also be studied in a parallel group study, for example:

  • Group 1 receives 50 mg of drug X
  • Group 2 receives 100 mg of drug X

Cross-over assignment

In a cross-over assignment design, researchers divide trial participants into groups that receive the same experimental treatment(s) but at different times. In other words, participants are switched from one study arm to the other at a given point in time. Sometimes also referred to as a cross-over longitudinal study, this type of intervention model attempts to reduce patient variation for more accurate results.[ 3 ] It may also have ethical benefits as all participants are given a chance to benefit from the investigational treatment, which may further encourage patient enrollment and retention.

To reduce any carryover effect from the previous treatment, studies conducted under this model usually include a washout period so the previous treatment can be fully eliminated from the participant's system. Cross-over assignments are generally used to study chronic conditions, because symptoms persist long enough for investigators to switch treatments and observe the effects of each.

A clinical trial employing a cross-over assignment might assign participants to study arms as follows:

  • Group 1 receives drug X for the first 6 weeks, then drug Y for 6 weeks, with a 6-week washout period in between
  • Group 2 receives drug Y for the first 6 weeks, then drug X for 6 weeks, with a 6-week washout period in between

Or, another example:

  • First 2 months: Group 1 receives the experimental intervention while group 2 receives placebo
  • 2-week washout period
  • Last 2 months: Group 2 receives the experimental intervention while group 1 receives placebo
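A minimal sketch of how such a cross-over schedule might be generated, randomizing each participant to one of the two treatment sequences with a washout between periods; the drug labels, period lengths, and random seed are illustrative assumptions:

import random

def crossover_schedule(participant_ids, drugs=("X", "Y"), period_weeks=6, washout_weeks=6, seed=7):
    """Build a 2-period, 2-treatment cross-over schedule: each participant is
    randomized to the X-then-Y or Y-then-X sequence, with a washout in between."""
    rng = random.Random(seed)
    schedule = {}
    for pid in participant_ids:
        first, second = rng.sample(list(drugs), 2)   # random sequence assignment
        schedule[pid] = [
            (f"period 1 ({period_weeks} wk)", first),
            (f"washout ({washout_weeks} wk)", None),
            (f"period 2 ({period_weeks} wk)", second),
        ]
    return schedule

for pid, plan in crossover_schedule(["P01", "P02", "P03"]).items():
    print(pid, plan)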

Factorial assignment

Factorial assignment designs are used when there is more than one intervention to be tested. Trial participants are divided into groups or arms receiving different combinations of two or more interventions/drugs.

The simplest factorial design is a so-called 2x2 factorial assignment, in which two drugs, X and Y, might be tested in four study groups/arms as follows:

  • Group 1 receives drug X and drug Y together
  • Group 2 receives drug X and placebo (control)
  • Group 3 receives drug Y and placebo (control)
  • Group 4 receives two placebos

The appeal of such a design is that it is essentially similar to conducting two parallel group studies (drug X versus placebo and drug Y versus placebo) on the same study population, allowing comparison of the safety and/or efficacy of drug X versus drug Y. Further, potential interaction (synergy or antagonism) between the two interventions might be elucidated, although this is not the goal of such a study. A general assumption underlying such designs is that the two interventions do not interact with one another, in which case the statistical power of such a study design is greater than that of a multi-arm parallel group trial.[ 4 ] However, it is often difficult to be sure that there was no interaction, which can lead to difficulty in interpreting results.
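To illustrate why a factorial trial is "essentially similar to conducting two parallel group studies" on the same participants, the toy example below builds a 2x2 dataset (cell sizes and response counts are fabricated for illustration) and shows that each main effect is estimated from every enrolled subject, while the cell means can be inspected for interaction:

import pandas as pd

# Hypothetical 2x2 factorial data: 100 subjects per cell, with the number of
# responders in each cell invented purely for illustration.
cells = [
    # drug_x, drug_y, responders (out of 100)
    (1, 1, 55),
    (1, 0, 45),
    (0, 1, 40),
    (0, 0, 30),
]
rows = []
for x, y, responders in cells:
    rows += [{"drug_x": x, "drug_y": y, "response": 1}] * responders
    rows += [{"drug_x": x, "drug_y": y, "response": 0}] * (100 - responders)
df = pd.DataFrame(rows)

# Each main effect uses all 400 subjects, as in two superimposed parallel trials
print(df.groupby("drug_x")["response"].mean())   # effect of drug X (averaged over Y)
print(df.groupby("drug_y")["response"].mean())   # effect of drug Y (averaged over X)

# The cell means reveal whether the drugs interact; here the effect of X is the
# same with and without Y (0.15), consistent with the no-interaction assumption
print(df.groupby(["drug_x", "drug_y"])["response"].mean())

If the effect of drug X differed materially depending on whether drug Y was given, the pooled main-effect estimates would be misleading, which is the interpretive difficulty noted above.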


Parallel study

A parallel study is a type of clinical study in which two or more groups of participants receive different interventions. Participants are assigned to one of the treatment arms at the beginning of the trial and continue in that arm throughout the length of the trial. Assignment to a group usually is randomized. Study participants are only exposed to the treatment that is assigned to the particular study arm they are enrolled in.

For example, a two-arm parallel assignment involves two groups of participants. One group receives drug A, and the other group receives drug B. So during the trial, participants in one group receive drug A “in parallel” to participants in the other group, who receive drug B.

Sourced from: NCI Metathesaurus; ClinicalTrials.gov Glossary of Common Site Terms


Study Design 101: Randomized Controlled Trial


A study design that randomly assigns participants to an experimental group or a control group. Because of the random assignment, the only expected systematic difference between the control and experimental groups in a randomized controlled trial (RCT) is the intervention being studied, so differences in the outcome variable can be attributed to that intervention.

Advantages

  • Good randomization will "wash out" any population bias
  • Easier to blind/mask than observational studies
  • Results can be analyzed with well known statistical tools
  • Populations of participating individuals are clearly identified

Disadvantages

  • Expensive in terms of time and money
  • Volunteer biases: the population that participates may not be representative of the whole
  • Loss to follow-up attributed to treatment

Design pitfalls to look out for

An RCT should be a study of one population only.

Was the randomization actually "random", or are there really two populations being studied?

The variables being studied should be the only variables between the experimental group and the control group.

Are there any confounding variables between the groups?

Fictitious Example

To determine how a new type of short wave UVA-blocking sunscreen affects the general health of skin in comparison to a regular long wave UVA-blocking sunscreen, 40 trial participants were randomly separated into equal groups of 20: an experimental group and a control group. All participants' skin health was then initially evaluated. The experimental group wore the short wave UVA-blocking sunscreen daily, and the control group wore the long wave UVA-blocking sunscreen daily.

After one year, the general health of the skin was measured in both groups and statistically analyzed. In the control group, wearing long wave UVA-blocking sunscreen daily led to improvements in general skin health for 60% of the participants. In the experimental group, wearing short wave UVA-blocking sunscreen daily led to improvements in general skin health for 75% of the participants.
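As a quick check of the fictitious numbers above (15 of 20 participants improving with the experimental sunscreen vs 12 of 20 with the control), a Fisher exact test on the 2x2 table shows that a difference of this size in groups of only 20 would not reach conventional statistical significance, which is one reason real trials are sized with a formal power calculation:

from scipy.stats import fisher_exact

table = [[15, 5],   # experimental group: improved / not improved
         [12, 8]]   # control group:      improved / not improved
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)   # odds ratio of 2.0 with a p value well above 0.05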

Real-life Examples

van Der Horst, N., Smits, D., Petersen, J., Goedhart, E., & Backx, F. (2015). The preventive effect of the nordic hamstring exercise on hamstring injuries in amateur soccer players: a randomized controlled trial. The American Journal of Sports Medicine, 43 (6), 1316-1323. https://doi.org/10.1177/0363546515574057

This article reports on the research investigating whether the Nordic Hamstring Exercise is effective in preventing both the incidence and severity of hamstring injuries in male amateur soccer players. Over the course of a year, there was a statistically significant reduction in the incidence of hamstring injuries in players performing the NHE, but for those injured, there was no difference in severity of injury. There was also a high level of compliance in performing the NHE in that group of players.

Natour, J., Cazotti, L., Ribeiro, L., Baptista, A., & Jones, A. (2015). Pilates improves pain, function and quality of life in patients with chronic low back pain: a randomized controlled trial. Clinical Rehabilitation, 29 (1), 59-68. https://doi.org/10.1177/0269215514538981

This study assessed the effect of adding pilates to a treatment regimen of NSAID use for individuals with chronic low back pain. Individuals who included the pilates method in their therapy took fewer NSAIDs and experienced statistically significant improvements in pain, function, and quality of life.

Related Formulas

  • Relative Risk

Related Terms

Blinding/Masking

When the groups that have been randomly selected from a population do not know whether they are in the control group or the experimental group.

Causation

Being able to show that an independent variable directly causes the dependent variable. This is generally very difficult to demonstrate in most study designs.

Confounding Variables

Variables that cause/prevent an outcome from occurring outside of or along with the variable being studied. These variables make it difficult or impossible to distinguish the relationship between the variable and the outcome being studied.

Correlation

A relationship between two variables, but not necessarily a causation relationship.

Double Blinding/Masking

When the researchers conducting a blinded study do not know which participants are in the control group or the experimental group.

Null Hypothesis

The hypothesis that the relationship between the independent and dependent variables that the researchers hope to demonstrate does not exist. To "reject the null hypothesis" is to conclude that there is a relationship between the variables.

Population/Cohort

A group that shares the same characteristics among its members (population).

Population Bias/Volunteer Bias

A sample may be skewed by those who are selected or self-selected into a study. If only certain portions of a population are considered in the selection process, the results of a study may have poor validity.

Randomization

Any of a number of mechanisms used to assign participants into different groups with the expectation that these groups will not differ in any significant way other than treatment and outcome.

Research (alternative) Hypothesis

The relationship between the independent and dependent variables that researchers believe they will prove through conducting a study.

Sensitivity

The relationship between what is considered a symptom of an outcome and the outcome itself; or the percent chance of not getting a false negative (see formulas).

Specificity

The relationship between not having a symptom of an outcome and not having the outcome itself; or the percent chance of not getting a false positive (see formulas).

Type 1 error

Rejecting a null hypothesis when it is in fact true. This is also known as an error of commission.

Type 2 error

The failure to reject a null hypothesis when it is in fact false. This is also known as an error of omission.

Now test yourself!

1. Having a volunteer bias in the population group is a good thing because it means the study participants are eager and make the study even stronger.

a) True b) False

2. Why is randomization important to assignment in an RCT?

a) It enables blinding/masking
b) So causation may be extrapolated from results
c) It balances out individual characteristics between groups
d) a and c
e) b and c

Source: Himmelfarb Health Sciences Library, Study Design 101: https://guides.himmelfarb.gwu.edu/studydesign101

Trump is about to go on trial in New York. Here’s what to expect

Republican presidential candidate former President Donald Trump speaks

  • Show more sharing options
  • Copy Link URL Copied!

Donald Trump, the presumptive Republican presidential nominee, is expected to be pulled from the campaign trail for at least two months starting next week as he stands trial in New York, the first criminal prosecution of a former president in American history.

Trump is accused of falsifying business records in an attempt to hide money paid to an adult film actor to prevent her from going public with claims that she and Trump had sex.

It is the only one of the four felony cases Trump currently faces that has a trial date and could be the only one that is completed before election day. Jury selection begins April 15.

What are the charges?

The trial is over whether Trump falsified business records to cover up a $130,000 payment his lawyer Michael Cohen made in the final days of the 2016 campaign to adult film actor Stormy Daniels for her silence about a 2006 sexual encounter she says she had with Trump. He has pleaded not guilty. Trump also denies Daniels’ claim of a sexual encounter.

According to the New York indictment , Trump sent Cohen $420,000 in a dozen installments during his first months as president in 2017 and falsely recorded the payments in Trump Organization internal documents as legal expenses, citing a retainer agreement. That amount includes $130,000 as reimbursement for paying Daniels and additional funds for Cohen.

Prosecutors say there were no legal expenses or retainer agreement at the time.

Former President Donald Trump pumps his fist as he arrives for a GOP fundraiser, Saturday, April 6, 2024, in Palm Beach, Fla. (AP Photo/Lynne Sladky)

World & Nation

Appeals court rejects Trump’s latest attempt to delay April 15 hush money criminal trial

A New York appeals court judge has rejected the latest bid by former President Donald Trump to delay his hush money criminal trial.

April 9, 2024

The key thing Manhattan Dist. Atty. Alvin Bragg must prove to jurors is that Trump instructed Cohen to make the payment to Daniels in an effort to influence the 2016 election by keeping damaging stories about him from being published.

Trump has said he sought to silence Daniels to keep his wife, Melania Trump, from learning about the allegations.

Does Trump have to be at the trial in person?

Yes. Manhattan Judge Juan Manuel Merchan has said he expects Trump in the courtroom every day the court is in session. The trial will take place Mondays, Tuesdays, Thursdays and Fridays for about six to eight weeks.

Trump is under a gag order prohibiting him from making or directing others to make public statements regarding counsel and court staff in the case.

Merchan last week expanded his gag order to stop the former president from attacking family members of the judge, attorneys and staff involved in the case after Trump shared a barrage of social media posts about the judge’s daughter, claiming that her past work for Democratic clients makes her father biased.

Trump said Saturday on social media that going to jail for violating his gag order would be his “great honor.”

“If this Partisan Hack wants to put me in the ‘clink’ for speaking the open and obvious TRUTH, I will gladly become a Modern Day Nelson Mandela,” Trump said Saturday in a Truth Social post.

Merchan has not threatened to jail the former president for disregarding the gag order. Rather, Trump could lose access to the names of jurors, Merchan said, “if he engages in any conduct that threatens the safety and integrity of the jury or the jury selection process.” That would prevent Trump’s legal team from researching the jurors’ public stances or political preferences ahead of the trial. Merchan has already ordered juror names to be kept from the public.

What punishment does Trump face?

All 34 charges are Class E felonies, the lowest category of felonies in New York. Each count carries a maximum prison sentence of four years.

Merchan has indicated that he takes white-collar crime seriously and could put Trump in prison, potentially to serve his terms concurrently if the former president is convicted of more than one count. Merchan could also instead sentence Trump to probation.

What will it mean politically?

Trump will be limited to campaigning only on weekends, evenings and Wednesdays for the length of the trial.

The verdict could come in late June or early July, just before Trump accepts his party’s nomination at the Republican National Convention in Milwaukee in mid-July.

During the Republican primary campaign, Trump used his four indictments to his advantage, portraying himself as a victim of politicized justice and effectively squeezing out his rivals. Whether that narrative holds when Americans see photos of him in a courtroom day after day is unknown.

Three polls conducted last year by the Associated Press and NORC asked respondents how they viewed the legality of Trump’s actions in the four indictments against him. Only one third of respondents said they thought the hush money payment was illegally covered up .

a woman with long, wavy blond hair, wearing a striped top and a necklace

Who will testify?

Cohen, Trump’s former lawyer and longtime fixer, is expected to be prosecutors’ main witness. Cohen pleaded guilty in 2018 to federal campaign finance violations related to facilitating payoffs to Daniels and Karen McDougal, a former Playboy model, as well as to other crimes. He served more than a year in prison.

Daniels could also be called to testify along with McDougal, who in 2018 told CNN that she had a 10-month extramarital affair with Trump that began in 2006. Trump also denies this affair occurred.

McDougal was paid $150,000 by American Media Inc., the publisher of the National Enquirer, in August 2016 for the rights to her story about the alleged affair. The payment to McDougal and a $30,000 payment by the company to a former Trump Tower doorman who claimed that Trump had fathered a child out of wedlock with an employee, are not part of the case but are expected to be introduced by Bragg to show the breadth of the alleged scheme to influence the 2016 election.

American Media Inc. signed a nonprosecution agreement with the Justice Department in which it admitted paying McDougal to avoid her going public about her alleged affair and influencing the 2016 election. David Pecker, the Enquirer’s publisher, could be called to testify.

Former White House communications director Hope Hicks, who was Trump’s campaign press secretary in 2016, could also testify for the prosecution, potentially related to conversations about what Daniels’ claims could do to the campaign.

STORMY -- Pictured: Stormy Daniels -- (Photo by: Peacock)

Stormy Daniels alleges in new documentary that Donald Trump cornered her the night they met

‘I have not forgiven myself because I didn’t shut his a— down in that moment’ in 2006, the adult filmmaker says in ‘Stormy,’ premiering March 18 on Peacock.

March 7, 2024

These are felony charges?

In New York, falsifying business records can be a misdemeanor or can be elevated to a felony if prosecutors prove that the records were falsified in an attempt to conceal another crime.

Bragg has accused Trump of concealing three crimes: a federal campaign finance violation, a state election-law crime and tax fraud. Bragg does not have to charge Trump with those crimes, or even prove those crimes occurred. He just has to prove there was intent to commit or conceal a second crime.

What about the other charges against Trump?

Trump has been indicted in three other criminal cases.

The federal case charging him with subverting the 2020 presidential election is on hold pending a Supreme Court decision on whether Trump can claim presidential immunity for acts taken while in office and avoid prosecution. The court is scheduled to hear oral arguments on April 25. It is expected to wait until the end of June to hand down a written ruling, though it could rule at any time after the oral argument.

The Florida-based federal case accusing Trump of refusing to return classified documents he took when leaving the White House also does not have a trial date. Trump asked for an August trial or a delay until after the election, while prosecutors asked for it to begin in July, soon after the Supreme Court is expected to rule on presidential immunity.

The state-level case in Georgia, in which Trump is accused of scheming to overturn the 2020 election, has been delayed for weeks by an effort by Trump and his co-defendants to remove Fulton County Dist. Atty. Fani Willis from the case because of a romantic relationship with special prosecutor Nathan Wade. Superior Court Judge Scott McAfee recently ruled that either Willis or Wade had to step away from the case, and Wade resigned. Trump’s attorneys have appealed the decision, and a trial date still hasn’t been set.

More to Read

Former President Donald Trump speaks during a press conference at 40 Wall Street after a pre-trial hearing at Manhattan criminal court, Monday, March 25, 2024, in New York. A New York judge has scheduled an April 15 trial date in former President Donald Trump's hush money case. Judge Juan M. Merchan made the ruling Monday.(AP Photo/Frank Franklin II)

Judge issues gag order barring Trump from commenting on witnesses, others in hush money case

March 26, 2024

Former President Donald Trump speaks after a hearing at New York Criminal Court, Monday, March 25, 2024, in New York. New York Judge Juan M. Merchan has scheduled an April 15 trial date in Trump's hush money case. (Justin Lane/Pool Photo via AP)

Litman: Trump is finally facing a criminal trial — and a judge determined to keep it on track

Former President Donald Trump awaits the start of a pre-trial hearing with his defense team at Manhattan criminal, Monday, March 25, 2024, in New York. A judge will weigh on Monday when the former president will go on trial. (AP Photo/Mary Altaffer, Pool)

April 15 trial date set for Trump’s New York hush money case

March 25, 2024

Get the L.A. Times Politics newsletter

Deeply reported insights into legislation, politics and policy from Sacramento, Washington and beyond. In your inbox three times per week.

You may occasionally receive promotional content from the Los Angeles Times.

assignment trial meaning

Sarah D. Wire covers government accountability, the Justice Department and national security for the Los Angeles Times with a focus on the Jan. 6, 2021, insurrection and domestic extremism. She previously covered Congress for The Times. She contributed to the team that won the 2016 Pulitzer Prize for breaking news coverage of the San Bernardino shooting and received the Sigma Delta Chi Award for Washington Correspondence in 2020.

More From the Los Angeles Times

Republican presidential candidate former President Donald Trump visits a Chick-fil-A eatery, Wednesday, April 10, 2024, in Atlanta. (AP Photo/Jason Allen)

New York appeals court rejects Trump’s third request to delay Monday’s hush money trial

April 10, 2024

1877, Mitchell Map of Arizona and New Mexico. (Photo by: Sepia Times/Universal Images Group via Getty Images)

William Howell wrote Arizona’s 1864 abortion ban. He modeled it on California’s

Los Angeles , CA - January 17:Black Lives Matter co-founder Melina Abdullah speaks at a press conference on the steps of City Hall on Tuesday, Jan. 17, 2023 in Los Angeles , CA. (Irfan Khan / Los Angeles Times)

Cornel West names L.A. professor, activist Melina Abdullah as running mate on presidential ticket

Special counsel Jack Smith speaks to reporters Friday, June 9, 2023, in Washington. Former President Donald Trump is facing 37 felony charges related to the mishandling of classified documents according to an indictment unsealed on Friday. (AP Photo/Alex Brandon)

Litman: Jack Smith’s latest push to get Donald Trump’s Jan. 6 trial moving before the election

Trump defends Judge Cannon in fight over classified documents trial

Known for attacking other judges, the indicted former president calls cannon ‘highly respected’ amid dispute over presidential records law.

Former president Donald Trump , whose legal strategy in fighting four indictments has largely centered around attacking the legal system, on Thursday publicly defended the judge overseeing his classified documents mishandling case, calling her “highly respected” even as he regularly lambastes jurists in two of his other criminal cases.

Trump spoke out about the ongoing dispute between special counsel Jack Smith and U.S. District Judge Aileen M. Cannon after Smith filed court papers saying Cannon is pursuing a legal theory that is “wrong.” The special counsel’s filing urged Cannon to rule quickly on whether the Presidential Records Act allowed Trump to keep highly classified documents after leaving the White House, so that if she says it does, he can ask a higher court to overrule her.

Trump, who has been hit with three separate gag orders after criticizing judges, their staffs, their relatives or trial participants, attacked Smith and leaped to Cannon’s defense in a social media post Thursday morning.

The special counsel “should be sanctioned or censured for the way he is attacking a highly respected Judge, Aileen Cannon, who is presiding over his FAKE Documents Hoax case in Florida,” the post said. “He is a lowlife who is nasty, rude, and condescending, and obviously trying to ‘play the ref.’”

Subscribe to The Trump Trials, our weekly email newsletter on Donald Trump's four criminal cases

Nothing about Smith’s filing is likely grounds for punishing him; he did what Cannon told him to do, albeit with a detailed argument about why the reasoning behind the request was wrong. Smith’s criticism was focused on Cannon’s specific order, not her general handling of the trial.

Former federal judge Jeremy Fogel, who now runs the Berkeley Judicial Institute, said the timing of Trump’s statement is as significant as what he said, because it could create the impression that when Cannon rules, she is responding in part to what the former president has said.

“Statements like that give rise to the impression that they are in a position to influence the judge,” Fogel said. “The context is very important. Mr. Trump’s post on Truth Social comes at a critical juncture in the documents case and effectively is a response to the special counsel’s filing yesterday. The best practice in a situation like this is for the judge to make a statement to the effect that their decision will be influenced solely by the facts and the law and not the out-of-court statements of a party or anyone else.”

Robert Mintz, a former federal prosecutor, said Trump’s comments were not surprising but did carry risks for him as a defendant.

“Given the enormously high stakes for both the prosecution and the defense, it is not surprising to see heated rhetoric and strident arguments from both sides,” Mintz said. “What is surprising is just how quickly the judge herself has become the central focus of this case. … Turning the trial into a referendum on the judge may be effective from a public-relations standpoint, but as a legal matter it runs the risk of further politicizing the case and creating even more legal issues that could later backfire on the defense.”

Cannon is weighing Trump’s claims that the Presidential Records Act, or PRA — a law that designates official presidential papers as belonging to the government, rather than individuals — overrides the enforcement of the Espionage Act under which Trump has been charged.

Trump faces 32 counts of violating the Espionage Act, each for a specific classified document that he is alleged to have illegally retained at Mar-a-Lago, his Florida home and private club, after his presidency ended. He has pleaded not guilty to those charges, as well as eight additional counts of obstructing government efforts to retrieve the sensitive papers.

His lawyers argue that the former president had the authority under the PRA to declare even highly classified documents to be his personal records and property. Last month, Cannon asked prosecutors and defense lawyers to submit proposed jury instructions in the case based on two scenarios, one which largely adopts Trump’s legal interpretation. Legal experts say both scenarios significantly misstate the law .

Smith responded in writing this week, just before a midnight deadline, saying Cannon’s instruction was based on a “fundamentally flawed legal premise” that would “distort the trial.”

The filing was unusual in that prosecutors rarely seek direct confrontations with judges overseeing their cases. In doing so, Smith made clear he sees significant potential danger for his prosecution in Cannon’s approach to the PRA issue. How Cannon, a Trump nominee who has been on the bench since late 2020, responds to his filing will be critical.

If she rules that the PRA protects Trump, Smith could appeal. If she retreats from the disputed legal premise, the issue could fade into the background as she decides a pretrial hearing schedule and sets a trial date.

Cannon’s court docket includes a host of undecided legal questions, and prosecutors have urged her to move quickly. It’s possible that on this issue too, she simply takes time to make her next move.

In the meantime, Trump is scheduled to stand trial in New York on April 15 for allegedly falsifying business records to cover up a hush money payment during the 2016 election. Two other criminal cases stemming from Trump’s efforts to block Joe Biden’s 2020 election victory have been moving slowly through pretrial proceedings and appeals.

In the documents case, Trump’s lawyers argue that the PRA allows the former president to keep even highly classified government documents as his personal property.

Prosecutors and legal experts have said such claims badly misstate the law, which was passed to ensure that presidential records would be turned over to the National Archives and Records Administration at the end of a presidency. Cannon’s focus on jury instructions at this stage of the process has flummoxed legal experts, because judges typically first resolve a host of more pressing questions about the beginning of the trial, rather than the jury instructions that come at the end.

Trump’s team said in its own late-night filing that Cannon’s assignment is consistent with Trump’s position that the “prosecution is based on official acts” he took as president — not illegal retention of materials.

The judge told lawyers to write jury instructions for two legal interpretations. Legal experts said she could use those instructions to help inform her eventual ruling on a request that Trump made to dismiss the case because the PRA allowed him to designate any presidential record as personal.

In one scenario, Cannon asked them to craft jury instructions that assume the PRA allows presidents to designate any documents as personal at the end of a presidency — which is what Trump’s legal team has argued he had the authority to do. She also told them to write jury instructions for a second scenario in which the jury has to determine which of the documents Trump is accused of illegally retaining are personal and which are presidential.

Cannon held a hearing more than a month ago to determine a new date for the classified documents trial. Prosecutors asked for a trial date in early July, while Trump’s lawyers asked to wait until after the election or to start in August at the earliest. The judge has not yet ruled, making further delay more likely.

More on the Trump classified documents indictment

The latest: U.S. District Judge Aileen M. Cannon rejected Donald Trump’s bid to have his charges of mishandling classified documents dismissed on the grounds that a federal records law protected him from prosecution.

The case: Trump is charged with taking classified national secrets with him after he left the White House and obstructing government efforts to retrieve them. He has pleaded not guilty . Here’s what to know about the case .

The trial: Judge Cannon has not set a date for the trial. Federal prosecutors have asked her to push it back to July 8, while Trump’s lawyers are trying again to delay the trial until after the presidential election.

The charges: Trump faces 40 separate charges in the documents case. Read the full text of the superseding indictment against Trump and our top takeaways from the indictment.

  • After months, Judge Cannon agrees to shield Trump witness names (April 9, 2024)
  • Judge Cannon shoots down Trump’s presidential records act claim (April 4, 2024)
  • Trump defends Judge Cannon in fight over classified documents trial (April 4, 2024)


Gag Order Against Trump Is Expanded to Bar Attacks on Judge’s Family

Donald Trump had in recent days targeted the daughter of Juan Merchan, the judge overseeing his criminal trial in Manhattan, in blistering social media posts.

By Jesse McKinley, Ben Protess and William K. Rashbaum

The New York judge overseeing Donald J. Trump’s criminal trial later this month expanded a gag order on Monday to bar the former president from attacking the judge’s family members, who in recent days have become the target of Mr. Trump’s abuse.

Justice Juan M. Merchan last week issued an order prohibiting Mr. Trump from attacking witnesses, prosecutors, jurors and court staff, as well as their relatives. That order, however, did not cover Justice Merchan himself or the Manhattan district attorney, Alvin L. Bragg, who brought the criminal case against the former president.

And although the ruling issued on Monday still does not apply to the judge or the district attorney, Justice Merchan, granting a request from Mr. Bragg’s office, amended the gag order so that it does now cover their families.

In his ruling, the judge cited recent attacks against his daughter, and rejected Mr. Trump’s argument that his statements were “core political speech.”

“This pattern of attacking family members of presiding jurists and attorneys assigned to his cases serves no legitimate purpose,” Justice Merchan wrote. “It merely injects fear in those assigned or called to participate in the proceedings, that not only they, but their family members as well, are ‘fair game’ for defendant’s vitriol.”

Mr. Bragg’s office had asked the judge to clarify that their relatives were included, calling such protection “amply warranted.” Noting Mr. Trump’s track record of issuing “threatening and alarming remarks,” Mr. Bragg’s office warned of “the harms that those family members have suffered.”

The personal connection to the gag order complicated Justice Merchan’s decision. Shortly after last week’s initial gag order, Mr. Trump issued a series of blistering attacks on Justice Merchan and his daughter, Loren, a political consultant who has worked with Democratic candidates.

Specifically, Mr. Trump had accused Ms. Merchan — falsely — of having posted a photo of him behind bars on an account on X, the platform formerly known as Twitter. Court officials said the account cited by Mr. Trump had been taken over last year by someone other than Ms. Merchan.

On Thursday, Mr. Trump intensified his attacks, identifying Justice Merchan’s daughter by name and accusing her of being “a Rabid Trump Hater, who has admitted to having conversations with her father about me, and yet he gagged me.” The former president then renewed his demands that the judge recuse himself from the case, calling Justice Merchan “totally compromised.”

And on Saturday, in an ominous escalation, Mr. Trump posted a news article to Truth Social that displayed two pictures of Ms. Merchan.

Then, on Tuesday morning, after Justice Merchan’s decision, Mr. Trump called him “corrupt” in a social media post demanding that he be recused and the case dismissed.

“Juan Merchan, GAGGED me so that I can not talk about the corruption and conflicts taking place in his courtroom with respect to a case that everyone, including the D.A., felt should never have been brought,” Mr. Trump wrote. “They can talk about me, but I can’t talk about them??? That sounds fair, doesn’t it?”

Mr. Trump, the first former American president to face criminal prosecution, is scheduled to go on trial on April 15. Mr. Bragg charged him with 34 felony counts of falsifying business records related to the reimbursement of a hush-money payment to hide a sexual encounter with a porn star, Stormy Daniels.

Mr. Trump, once again the presumptive Republican nominee for president, has denied the affair and the charges, which he claims are politically motivated. Mr. Trump and his campaign have also lashed out at the gag order, calling it “unconstitutional.” And his lawyers argued against expanding the gag order to include Justice Merchan and Mr. Bragg’s family, noting that the original order did not cover the judge or the district attorney.

Todd Blanche, one of Mr. Trump’s lawyers, declined to comment on Monday.

Steven Cheung, a spokesman for Mr. Trump’s campaign, called the judge’s amended gag order “unconstitutional,” because, he said, it prevents Mr. Trump from engaging in political speech, “which is entitled to the highest level of protection under the First Amendment.” He added, “The voters of America have a fundamental right to hear the uncensored voice of the leading candidate for the highest office in the land.”

Justice Merchan is just the latest judge to impose a gag order on the former president. A federal appeals court upheld a gag order in Mr. Trump’s federal criminal case in Washington, where he is accused of plotting to overturn the 2020 election.

And in his civil fraud case in New York, Mr. Trump was ordered not to comment on court staff members after he attacked the judge’s principal law clerk. The judge, Arthur F. Engoron, imposed $15,000 in fines on the former president when he ran afoul of that order.

If Mr. Trump violates the order, the judge could impose fines, and in extraordinary circumstances, throw him behind bars.

In a court filing on Monday, Mr. Bragg’s office asked the judge to warn Mr. Trump that he will be punished if he ignores the order, using stark language that underscored the state’s concern about the former president’s words.

“Defendant’s dangerous, violent and reprehensible rhetoric fundamentally threatens the integrity of these proceedings and is intended to intimidate witnesses and trial participants alike — including this court,” Mr. Bragg’s office wrote.

In his five-page ruling, Justice Merchan noted that Mr. Trump had a right “to speak to the American voters freely and to defend himself publicly.” But he sought to balance those rights with the impact of Mr. Trump’s statements on the trial.

“It is no longer just a mere possibility or a reasonable likelihood that there exists a threat to the integrity of the judicial proceedings,” the judge wrote. “The threat is very real.”

Kate Christobek contributed reporting.

Jesse McKinley is a Times reporter covering upstate New York, courts and politics.

Ben Protess is an investigative reporter at The Times, writing about public corruption. He has been covering the various criminal investigations into former President Trump and his allies.

William K. Rashbaum is a senior writer on the Metro desk, where he covers political and municipal corruption, courts, terrorism and law enforcement. He was a part of the team awarded the 2009 Pulitzer Prize for Breaking News.

Our Coverage of the Trump Hush-Money Case

The Manhattan district attorney has filed charges against former President Donald Trump over a hush-money payment to a porn star on the eve of the 2016 election.

Taking the Case to Trial: Trump is all but certain to become the first former U.S. president to stand trial on criminal charges after a judge denied his effort to delay the proceeding and confirmed it will begin on April 15.

Implications for Trump: As the case goes to trial, the former president’s inner circle sees a silver lining in the timing. But if found guilty, Trump would not be able to pardon himself should he become president again, as he could in the federal cases against him.

Michael Cohen: Trump’s former fixer was not an essential witness in the former president’s civil fraud trial in New York that concluded in January. But he will be when he takes the stand in the hush-money case.

Stormy Daniels: The chain of events flowing from a 2006 encounter that the adult film star said she had with Trump has led to the brink of a historic trial. Here’s a look inside the hush-money payout.

James and Jennifer Crumbley, parents of Michigan shooter, sentenced to 10 to 15 years in prison

Jennifer and James Crumbley, the first parents of a mass school shooter in the U.S. to be convicted of involuntary manslaughter for the attack, were sentenced Tuesday in a Michigan courtroom to 10 to 15 years in prison.

The sentence came after the court heard statements from the family members of Tate Myre, 16, Hana St. Juliana, 14, Madisyn Baldwin, 17, and Justin Shilling, 17. The students were killed when the Crumbleys' son, Ethan, went on a shooting rampage at Oxford High School in Michigan on Nov. 30, 2021.

"You created all of this," Nicole Beausoleil, Baldwin's mother, said through tears. "You failed as parents. The punishment that you face will never be enough."

Beausoleil recalled the final hours of her daughter's life, comparing them with the Crumbleys' actions before and during the shooting. "When you texted 'Ethan don't do it,' I was texting Madisyn: 'I love you. Please call Mom,'" she said.

Reina St. Juliana, the sister of Hana, brought many to tears as she spoke of how her sister would never see her prom, graduation or birthdays.

"I never got to say goodbye," Reina said. "Hana was only 14 ... she took her last breath in a school she hadn't even been in for three months."

Jill Soave, mother of Justin Shilling, asked the judge to hand down the maximum sentence possible to both parents. "The ripple effects of both James' and Jennifer's failures to act have devastated us all," she said. "This tragedy was completely preventable."

Judge Cheryl Matthews addressed both parents before handing down the sentence: "Mr. Crumbley, it's clear to this court that because of you, there was unfettered access to a gun or guns, as well as ammunition in your home.

"Mrs. Crumbley, you glorified the use and possession of these weapons," she added.

Both parents will be credited for time already spent in jail.

Matthews also barred the pair or their "agents" from any contact with the families of the four students. She said she would also rule on the parents' rights to contact their son.

Jennifer and James Crumbley also addressed the court ahead of their sentence.

"The dragging this has had on my heart and soul cannot be expressed in words, just as I know this is not going to ease the pain and suffering of the victims and their families," Jennifer Crumbley said.

Jennifer Crumbley used her statement to clarify her trial testimony when she said she would not have done anything differently leading up to the shooting. It was "completely misunderstood," she said Tuesday, adding that her son had seemed "so normal" and that she could not have foreseen the attack.

She said prosecutors tried to paint her and her husband as parents "so horrible, only a school or mass shooter could be bred from."

"We were good parents. We were the average family. We weren't perfect, but we loved our son and each other tremendously," Jennifer Crumbley said.

James Crumbley apologized to the families during his statement.

"I cannot express how much I wish I had known what was going on with him and what was going to happen, because I absolutely would have done a lot of things differently," he said.

Prosecutors asked that each parent be given 10 to 15 years in prison after separate juries found them each guilty of four counts of involuntary manslaughter earlier this year. Their son, 15 at the time of the shooting, is serving a life sentence for the murders.

The parents have shown no remorse for their actions, prosecutors told Matthews in a sentencing memo. They told the juries that the Crumbleys bought their son the gun he used and ignored troubling signs about his mental health.

Legal experts have said the case, which drew national attention, could influence how society views parents' culpability when their children access guns and cause harm with them. Whether the outcome encourages prosecutors to bring charges against parents going forward remains to be seen.

Why were the Crumbleys culpable in their son's crimes?

The Crumbleys' son went on a rampage in the halls of Oxford High School hours after his parents were called to the school by counselors to discuss concerns over disturbing drawings he had done on a math assignment. Prosecutors said the parents didn't tell school officials their son had access to guns in the home and left him at school that day.

James Crumbley purchased the gun used in the shooting, and in a post on social media Jennifer Crumbley said it was a Christmas present for the boy. The prosecution said the parents could have prevented the shooting if they had taken ordinary care to secure the gun and taken action when it was clear their son was having severe mental health struggles.

The prosecution cited messages the teen sent months before the shooting to his mother that said he saw a "demon" in their house and that clothes were flying around. He also texted a friend that he had "paranoia" and was hearing voices. In a journal, he wrote: "I have zero HELP for my mental problems and it's causing me to shoot up" the school.

The Crumbleys also tried to flee law enforcement when it became clear they would face charges, prosecutors said.

Defense attorneys said the parents never foresaw their son's actions. Jennifer Crumbley portrayed herself as an attentive mother when she took the stand in her own defense, and James Crumbley's lawyer said that the gun didn't really belong to the son, that the father properly secured the gun and that he didn't allow his son to use it unsupervised. In an interview with the Detroit Free Press, part of the USA TODAY Network, the jury foreman in James Crumbley's trial said storage of the gun was the key testimony that drove him to convict.

Parents asked for house arrest, time served

James Crumbley has asked to be sentenced to time already served since his arrest in December 2021, according to the prosecutors' sentencing memo. Jennifer Crumbley hoped to serve out a sentence on house arrest while living in her lawyer's guest house.

Prosecutors rejected the requests in the memo to the judge, saying neither had shown remorse for their roles in the deaths of the four children. James Crumbley also was accused of threatening Oakland County Prosecutor Karen McDonald in a jail phone conversation, Assistant Oakland County Prosecutor Marc Keast said, showing his "chilling lack of remorse."

"Such a proposed sentence is a slap in the face to the severity of tragedy caused by (Jennifer Crumbley's) gross negligence, the victims and their families," Keast wrote of the mother's request in a sentencing memo.

Each involuntary manslaughter count carries up to 15 years in prison, though typically such sentences are handed down concurrently, not consecutively. The judge also has the discretion to go above or below the state sentencing guidelines, which are advisory, based on post-conviction interviews and the facts of the case; in the Crumbleys' cases they recommended a range of 43 to 86 months, or a maximum of about seven years.


COMMENTS

  1. NIH's Definition of a Clinical Trial

    NIH Definition of a Clinical Trial. A research study in which one or more human subjects are prospectively assigned ... The term "prospectively assigned" refers to a pre-defined process (e.g., randomization) specified in an approved protocol that stipulates the assignment of research subjects (individually or in clusters) to one or more arms (e.g., intervention, placebo, or ...

  2. Sequential, Multiple Assignment, Randomized Trial Designs

    This JAMA Guide to Statistics and Methods explains sequential, multiple assignment, randomized trial (SMART) study designs, in which some or all participants are randomized at 2 or more decision points depending on the participant's response to prior treatment. (A minimal code sketch of staged random assignment in this spirit appears after this list.)

  3. Who knew? The misleading specificity of "double-blind" and what to do

    Some authors prefer "masking" to "blinding," although the meaning of either term in a clinical trial may not be readily apparent to nonnative English speakers [18, 22]. ... However, assignment concealment does not work well as a label. We concluded that "a concealed assignment trial" was unlikely to replace "a blinded trial."

  4. Common Terms You Should Know When Enrolling in a Clinical Trial

    There are typically four phases of a clinical trial. Phase I is the administration of a drug or device to a small group to identify possible side effects and determine proper dose. Phase II is done to gauge whether the treatment is effective while continuing to evaluate safety. Phase III compares a new drug or device against the current ...

  5. Does Your Human Subjects Research Study Meet the NIH Definition of a

    Your study is considered to meet the NIH definition of a clinical trial even if: your study uses healthy participants, or does not include a comparison group (e.g., placebo or control); your study is only designed to assess the pharmacokinetics, safety, and/or maximum tolerated dose of an investigational drug;

  6. Key Concepts of Clinical Trials: A Narrative Review

    This narrative review is based on a course in clinical trials developed by one of the authors (DJM), and is supplemented by a PubMed search predating January 2011 using the keywords "randomized controlled trial," "patient/clinical research," "ethics," "phase IV," "data and safety monitoring board," and "surrogate endpoint."

  7. PDF NIH Definition of Clinical Trial Case Studies

    Important features that distinguish a clinical trial from a clinical study are whether there is prospective assignment of an intervention, a study design that evaluates the effect of the intervention on the participants, and a health-related biomedical or behavioral outcome. If these features are present, the study is a clinical trial.

  8. Clinical Trial Glossary Final

    ... implementation of the trial and the dissemination of the study results. The goal is to have clinical trials more ... using an element of chance to determine the assignments in ...

  9. Clinical Trial Basics: Intervention Models in Clinical Trials

    Parallel assignment. Parallel assignment is the most common type of intervention model used in clinical research, wherein trial participants are divided into two or more groups, each receiving a different medical intervention throughout the duration of the study.[2] Participants are given one type of treatment, remaining in the same treatment ...

  10. Considerations for open-label clinical trials: design, conduct, and

    ... meaning that it should be impossible to know or to predict what assignment a subject will be given prior to ... trial conduct (e.g., recruitment, adherence, and retention) and impair ...

  11. Parallel study

    Parallel study. A parallel study is a type of clinical study in which two or more groups of participants receive different interventions. Participants are assigned to one of the treatment arms at the beginning of the trial and continue in that arm throughout the length of the trial. Assignment to a group usually is randomized.

  12. Research Guides: Study Design 101: Randomized Controlled Trial

    Definition. A study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups in a randomized controlled trial (RCT) is the outcome variable being studied. Advantages. Good randomization will "wash out" any ...

  13. Clinical Trial Design: Parallel and Crossover Studies

    Crossover studies typically require fewer patients than a parallel study since each patient acts as his or her own control, meaning that they receive both the study drug as well as the placebo or standard of care treatment. However, crossover studies can take longer to complete since patients will receive multiple treatments during the trial.

  14. What can I expect at the "Assignment TO SET Trial Date"?

    Posted on Jun 3, 2012. What you can expect at the "Assignment to Set Trial Date" is just that: setting the trial date. The trial is normally set in the Judge's Court Room in a "hearing" type of setting with both parties present. There is no argument about the issues, simply a scheduling of the trial.

  15. Randomized Controlled Trial (RCT) Overview

    A randomized controlled trial (RCT) is a prospective experimental design that randomly assigns participants to an experimental or control group. RCTs are the gold standard for establishing causal relationships and ruling out confounding variables and selection bias. Researchers must be able to control who receives the treatments and who are the ...

  16. PDF Family Law Trial and Long Cause Evidentiary Hearing Assignment Calendar

    trial and evidentiary hearing dates prior to those listed below available for D.V.R.O., priority and expedited matters at the time of the trial assignment hearing. Please Note: Family Law matters are civil matters. The Trial (or long-cause Evidentiary Hearing) date provided below is identified as the first day of the trial week.

  17. What to expect during Trump's hush-money trial in New York

    What will it mean politically? Trump will be limited to campaigning only on weekends, evenings and Wednesdays for the length of the trial. The verdict could come in late June or early July, just ...

  18. Jack Smith Gets a Bit of What He Wanted

    In an unusual display of frustration, Smith wrote in a court filing on Tuesday night that one of Cannon's recent orders wasn't merely slowing down the case, but was based on "a fundamentally ...

  19. Trump defends Judge Cannon in fight over classified documents trial

    Known for attacking other judges, former president Donald Trump called U.S. Judge Aileen M. Cannon "highly respected" amid a dispute over presidential records law.

  20. James and Jennifer Crumbley trials: Parents of Oxford school shooter

    James and Jennifer Crumbley, the parents of the teenager who killed four students in the 2021 school shooting in Oxford, Michigan, were each sentenced to 10 to 15 years in prison on Tuesday, weeks ...

  21. What is trial setting and judicial assignment mean

    A trial setting conference is usually when the Court asks each side about the status of the case, and if the parties still have a dispute, they set the matter for trial. If you are worried, contact the self-help center at the courthouse who can provide you assistance. Best of luck. Judicial assignment means that the court system has assigned ...

  22. Trump Gag Order Is Expanded to Stop Attacks on Judge Merchan's Family

    The New York judge overseeing Donald J. Trump's criminal trial later this month expanded a gag order on Monday to bar the former president from attacking the judge's family members, who in ...

  23. How legal fights and stalling by judge could push Trump documents trial

    A criminal case that was once viewed as the most open-and-shut prosecution against former President Donald Trump has been mired in delay, unresolved logistical questions and fringe legal arguments ...

  24. Crumbley trial: Parents of mass shooter Ethan Crumbley to be sentenced

    Jennifer and James Crumbley, the first parents of a mass school shooter in the U.S. to be convicted of involuntary manslaughter for the shooting, are set to be sentenced on Tuesday ...

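Several of the sources excerpted above (for example, the SMART, parallel-assignment, and randomized controlled trial entries) describe the same basic mechanic: participants are prospectively assigned to treatment arms by a chance-based rule, and in a SMART design some participants are randomized again at a later decision point. What follows is a minimal Python sketch of that idea only, not an implementation of any cited protocol; the function names, arm labels, and response flags are hypothetical.

```python
import random

def randomize_parallel(participants, arms=("intervention", "control"), seed=None):
    """Assign each participant to one of the parallel arms at random."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in participants}

def rerandomize_nonresponders(assignments, responded, second_stage_arms, seed=None):
    """SMART-style second stage: participants who did not respond to their
    first-stage arm are re-randomized among new options, while responders
    continue on their original assignment."""
    rng = random.Random(seed)
    second = {}
    for pid, arm in assignments.items():
        if responded.get(pid, False):
            second[pid] = arm                            # responder: continue as assigned
        else:
            second[pid] = rng.choice(second_stage_arms)  # non-responder: re-randomize
    return second

if __name__ == "__main__":
    people = [f"P{i:02d}" for i in range(1, 9)]
    stage1 = randomize_parallel(people, seed=42)
    # Hypothetical early-response flags; in a real trial these come from outcome data.
    responded = {pid: (i % 3 == 0) for i, pid in enumerate(people)}
    stage2 = rerandomize_nonresponders(stage1, responded, ("augment", "switch"), seed=7)
    for pid in people:
        print(pid, stage1[pid], "->", stage2[pid])
```

In practice, trial randomization is usually stratified or blocked and handled by validated systems; the fixed seeds here are only to make the toy example reproducible.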