
10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
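The distinction above can be made concrete with a short sketch using only the Python standard library. The population size, sample size, and group labels below are illustrative assumptions, not from the text: random selection draws the sample (bearing on external validity), while random assignment splits that sample into treatment and control groups (bearing on internal validity).

```python
# Sketch: random selection (sampling) vs. random assignment (design).
# Population and group sizes are illustrative assumptions.
import random

random.seed(42)

population = [f"person_{i}" for i in range(10_000)]

# Random selection: draw a sample from the population (external validity).
sample = random.sample(population, 100)

# Random assignment: split the sample into treatment and control groups
# (internal validity) by shuffling and halving.
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:50], shuffled[50:]

print(len(treatment), len(control))   # 50 50
print(set(treatment) & set(control))  # set() -- the groups do not overlap
```

Note that a well-designed true experiment could use both steps, while much practical research uses only one of them.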

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
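A minimal simulation can make the regression threat concrete. In the sketch below (all numbers are illustrative assumptions), each subject has a stable true ability, and the pretest and posttest each add independent measurement noise, so the two measures are imperfectly correlated. Selecting the top pretest scorers and re-measuring them shows their group mean falling back toward the population mean even though no one's ability changed.

```python
# Illustrative simulation of regression to the mean; all parameters assumed.
import random

random.seed(1)
n = 20_000
mean = lambda xs: sum(xs) / len(xs)

ability = [random.gauss(50, 10) for _ in range(n)]           # stable trait
pretest = [a + random.gauss(0, 10) for a in ability]          # trait + noise
posttest = [a + random.gauss(0, 10) for a in ability]         # trait + new noise

# Select the top 10% of pretest scorers...
ranked = sorted(range(n), key=lambda i: pretest[i], reverse=True)
top = ranked[: n // 10]

pre_top = mean([pretest[i] for i in top])
post_top = mean([posttest[i] for i in top])

# ...their posttest mean regresses toward the overall mean of ~50,
# even though nothing about the subjects actually changed.
print(round(pre_top, 1), round(post_top, 1))
```

As the text notes, the effect is driven by the imperfect correlation between the two measures: the selected group's extreme pretest scores were partly noise, and the noise does not repeat.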

Two-group experimental designs


Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups, and subjected to an initial (pretest) measurement of the dependent variables of interest. The treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.

Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.
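The two-group ANOVA mentioned above can be sketched by hand with simulated gain (posttest minus pretest) scores. The group sizes, effect size, and noise level below are illustrative assumptions, not values from the text; the function computes the standard one-way F statistic for two groups.

```python
# A hand-rolled two-group one-way ANOVA on simulated gain scores.
# Effect size, variance, and group size are illustrative assumptions.
import random

random.seed(7)
n = 200
mean = lambda xs: sum(xs) / len(xs)

# Simulated gain scores: treatment gains ~ N(8, 5), control gains ~ N(2, 5).
treat = [random.gauss(8, 5) for _ in range(n)]
ctrl = [random.gauss(2, 5) for _ in range(n)]

def anova_f(a, b):
    """F statistic for a one-way ANOVA with two groups."""
    grand = mean(a + b)
    ss_between = len(a) * (mean(a) - grand) ** 2 + len(b) * (mean(b) - grand) ** 2
    ss_within = (sum((x - mean(a)) ** 2 for x in a)
                 + sum((x - mean(b)) ** 2 for x in b))
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)

print(round(anova_f(treat, ctrl), 1))  # a large F suggests the means differ
```

With a real treatment effect built into the simulation, the F statistic comes out large; comparing a group against itself yields an F of essentially zero.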

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
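The relationship between the treatment effect $E$ and the two-group ANOVA can be sketched numerically. The data below are simulated with assumed means and variances; the code computes $E = (O_1 - O_2)$ as the difference in posttest means and confirms the standard identity that, for two groups, the one-way ANOVA F statistic equals the square of the pooled two-sample t statistic.

```python
# Posttest-only analysis sketch: E = difference in posttest means,
# and the two-group ANOVA F equals t squared. Data are simulated.
import math
import random

random.seed(3)
n = 100
mean = lambda xs: sum(xs) / len(xs)

o1 = [random.gauss(75, 10) for _ in range(n)]  # treatment group posttests
o2 = [random.gauss(70, 10) for _ in range(n)]  # control group posttests

E = mean(o1) - mean(o2)  # treatment effect estimate

# Pooled two-sample t statistic.
ss = (sum((x - mean(o1)) ** 2 for x in o1)
      + sum((x - mean(o2)) ** 2 for x in o2))
sp2 = ss / (2 * n - 2)                       # pooled variance
t = E / math.sqrt(sp2 * (1 / n + 1 / n))

# Two-group one-way ANOVA F statistic (between-groups df = 1).
grand = mean(o1 + o2)
msb = n * ((mean(o1) - grand) ** 2 + (mean(o2) - grand) ** 2)
F = msb / sp2

print(round(E, 2), round(t, 2), round(F, 2))
```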

Covariance design. Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups, adjusted for the covariate.

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to its control of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
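The covariate adjustment at the heart of ANCOVA can be sketched directly: the raw difference in posttest means is corrected by the pooled within-group regression of the posttest on the pretest covariate. All the numbers below (sample size, true slope, true treatment effect) are simulated assumptions for illustration.

```python
# Sketch of the covariate adjustment behind two-group ANCOVA.
# Sample size, slope, and effect size are simulated assumptions.
import random

random.seed(11)
n = 300
mean = lambda xs: sum(xs) / len(xs)

def simulate(effect):
    pre = [random.gauss(50, 10) for _ in range(n)]
    # Posttest depends on the covariate (slope 0.8), noise, and the treatment.
    post = [effect + 0.8 * p + random.gauss(0, 5) for p in pre]
    return pre, post

pre_t, post_t = simulate(effect=10.0)  # treatment group (true effect = 10)
pre_c, post_c = simulate(effect=0.0)   # control group

def within_slope(groups):
    """Pooled within-group slope of posttest on the pretest covariate."""
    num = den = 0.0
    for pre, post in groups:
        mp, mq = mean(pre), mean(post)
        num += sum((x - mp) * (y - mq) for x, y in zip(pre, post))
        den += sum((x - mp) ** 2 for x in pre)
    return num / den

b = within_slope([(pre_t, post_t), (pre_c, post_c)])
raw = mean(post_t) - mean(post_c)
adjusted = raw - b * (mean(pre_t) - mean(pre_c))
print(round(b, 2), round(raw, 2), round(adjusted, 2))
```

The adjusted estimate removes the part of the posttest difference attributable to chance differences in the covariate, which is why it recovers the built-in treatment effect more reliably than the raw difference.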

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The simplest factorial design is a 2 × 2 factorial design, which crosses two levels of one factor with two levels of a second factor: for example, a study examining the effects of instructional type and instructional time (1.5 hours/week versus three hours/week) on students’ learning outcomes.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects, when present, dominate and render main effects irrelevant; it is not meaningful to interpret main effects if interaction effects are significant.
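A small numeric sketch shows how an interaction effect is read off the cell means of a 2 × 2 design. The cell means and level labels below are invented for illustration: the effect of instructional time differs across instructional types, so the simple effects are unequal and their difference is the interaction contrast.

```python
# Numeric sketch of a 2x2 factorial design; cell means are invented.
# Rows: instructional type; columns: instructional time (1.5h vs 3h/week).
means = {
    ("type_A", "1.5h"): 60.0, ("type_A", "3h"): 70.0,
    ("type_B", "1.5h"): 65.0, ("type_B", "3h"): 90.0,
}

# Simple effect of instructional time within each instructional type:
effect_time_A = means[("type_A", "3h")] - means[("type_A", "1.5h")]  # 10.0
effect_time_B = means[("type_B", "3h")] - means[("type_B", "1.5h")]  # 25.0

# Interaction contrast: the effect of time depends on the type.
interaction = effect_time_B - effect_time_A  # 15.0

print(effect_time_A, effect_time_B, interaction)
```

If the interaction contrast were zero, the two simple effects would be equal and only the main effects would need interpreting; here they differ, which is exactly the situation where main effects alone are misleading.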

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised block design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Non-equivalent groups design (NEGD). Many true experimental designs can be converted to quasi-experimental designs by replacing random assignment with intact (non-equivalent) groups: the quasi-experimental version of the pretest-posttest control group design, for instance, is called the non-equivalent groups design (NEGD). In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
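The discontinuity logic above can be sketched with a simulation. In the code below (cut-off, effect size, and noise level are all assumptions for illustration), subjects below the cut-off receive the treatment; fitting a separate least-squares line on each side of the cut-off and comparing the two fitted values at the cut-off recovers the built-in jump, which is the RD evidence of a treatment effect.

```python
# Minimal regression discontinuity sketch; all parameters are assumptions.
import random

random.seed(5)
cutoff = 50.0
jump = 8.0  # built-in treatment effect at the cut-off

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

pre = [random.uniform(20, 80) for _ in range(2000)]
# Subjects below the cut-off receive the (remedial) treatment.
post = [p + (jump if p < cutoff else 0.0) + random.gauss(0, 3) for p in pre]

left = [(x, y) for x, y in zip(pre, post) if x < cutoff]
right = [(x, y) for x, y in zip(pre, post) if x >= cutoff]

bl, al = fit_line([x for x, _ in left], [y for _, y in left])
br, ar = fit_line([x for x, _ in right], [y for _, y in right])

# Estimated discontinuity at the cut-off: evidence of a treatment effect.
estimate = (al + bl * cutoff) - (ar + br * cutoff)
print(round(estimate, 1))
```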

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

Separate pretest-posttest samples design
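Since only group averages are comparable in this design, the analysis reduces to a difference of differences in means: the change in the treatment city's average minus the change in the control city's average. The satisfaction scores below are invented for illustration, and each wave contains different customers.

```python
# Group-average comparison for the separate-samples design.
# Satisfaction scores (1-7 scale) are invented; each wave samples
# different customers, so only averages can be compared.
mean = lambda xs: sum(xs) / len(xs)

city_a_before = [4, 5, 4, 3, 5, 4]  # treatment city, pre-implementation sample
city_a_after  = [6, 6, 5, 7, 6, 5]  # treatment city, different customers after
city_b_before = [4, 4, 5, 4, 3, 4]  # control city, pre-implementation sample
city_b_after  = [4, 5, 4, 4, 4, 5]  # control city, different customers after

change_a = mean(city_a_after) - mean(city_a_before)
change_b = mean(city_b_after) - mean(city_b_before)

# Program effect on *average* satisfaction, net of the control city's drift.
effect = change_a - change_b
print(round(change_a, 2), round(change_b, 2), round(effect, 2))
```

Subtracting the control city's change nets out city-wide trends that have nothing to do with the program, which is the best this design can do without individual-level pretest-posttest pairs.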

Non-equivalent dependent variable (NEDV) design. This is a single-group pretest-posttest design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other is not. An interesting variation of the NEDV design is a pattern-matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Statistics LibreTexts

1.1: Research Designs


  • Yang Lydia Yang
  • Kansas State University

Research Designs

In the early 1970’s, a man named Uri Geller tricked the world: he convinced hundreds of thousands of people that he could bend spoons and slow watches using only the power of his mind. In fact, if you were in the audience, you would have likely believed he had psychic powers. Everything looked authentic—this man had to have paranormal abilities! So, why have you probably never heard of him before? Because when Uri was asked to perform his miracles in line with scientific experimentation, he was no longer able to do them. That is, even though it seemed like he was doing the impossible, when he was tested by science, he proved to be nothing more than a clever magician.

When we look at dinosaur bones to make educated guesses about extinct life, or systematically chart the heavens to learn about the relationships between stars and planets, or study magicians to figure out how they perform their tricks, we are forming observations—the foundation of science. Although we are all familiar with the saying “seeing is believing,” conducting science is more than just what your eyes perceive. Science is the result of systematic and intentional study of the natural world. And social science is no different. In the movie Jerry Maguire, Cuba Gooding, Jr. became famous for using the phrase, “Show me the money!” In education, as in all sciences, we might say, “Show me the data!”

One of the important steps in scientific inquiry is to test our research questions, otherwise known as hypotheses. However, there are many ways to test hypotheses in educational research. Which method you choose will depend on the type of questions you are asking, as well as what resources are available to you. All methods have limitations, which is why the best research uses a variety of methods.

Experimental Research

If somebody gave you $20 that absolutely had to be spent today, how would you choose to spend it? Would you spend it on an item you’ve been eyeing for weeks, or would you donate the money to charity? Which option do you think would bring you the most happiness? If you’re like most people, you’d choose to spend the money on yourself (duh, right?). Our intuition is that we’d be happier if we spent the money on ourselves.

Knowing that our intuition can sometimes be wrong, Professor Elizabeth Dunn (2008) at the University of British Columbia set out to conduct an experiment on spending and happiness. She gave each of the participants in her experiment $20 and then told them they had to spend the money by the end of the day. Some of the participants were told they must spend the money on themselves, and some were told they must spend the money on others (either charity or a gift for someone). At the end of the day she measured participants’ levels of happiness using a self-report questionnaire.

In an experiment, researchers manipulate, or cause changes, in the independent variable , and observe or measure any impact of those changes in the dependent variable . The independent variable is the one under the researcher’s control, or the variable that is intentionally altered between groups. In the case of Dunn’s experiment, the independent variable was whether participants spent the money on themselves or on others. The dependent variable is the variable that is not manipulated at all, or the one where the effect happens. One way to help remember this is that the dependent variable “depends” on what happens to the independent variable. In our example, the participants’ happiness (the dependent variable in this experiment) depends on how the participants spend their money (the independent variable). Thus, any observed changes or group differences in happiness can be attributed to whom the money was spent on. What Dunn and her colleagues found was that, after all the spending had been done, the people who had spent the money on others were happier than those who had spent the money on themselves. In other words, spending on others causes us to be happier than spending on ourselves. Do you find this surprising?

But wait! Doesn’t happiness depend on a lot of different factors—for instance, a person’s upbringing or life circumstances? What if some people had happy childhoods and that’s why they’re happier? Or what if some people dropped their toast that morning and it fell jam-side down and ruined their whole day? It is correct to recognize that these factors and many more can easily affect a person’s level of happiness. So how can we accurately conclude that spending money on others causes happiness, as in the case of Dunn’s experiment?

The most important thing about experiments is random assignment . Participants don’t get to pick which condition they are in (e.g., participants didn’t choose whether they were supposed to spend the money on themselves versus others). The experimenter assigns them to a particular condition based on the flip of a coin or the roll of a die or any other random method. Why do researchers do this? With Dunn’s study, there is the obvious reason: you can imagine which condition most people would choose to be in, if given the choice. But another equally important reason is that random assignment makes it so the groups, on average, are similar on all characteristics except what the experimenter manipulates.

By randomly assigning people to conditions (self-spending versus other-spending), some people with happy childhoods should end up in each condition. Likewise, some people who had dropped their toast that morning (or experienced some other disappointment) should end up in each condition. As a result, the distribution of all these factors will generally be consistent across the two groups, and this means that on average the two groups will be relatively equivalent on all these factors. Random assignment is critical to experimentation because if the only difference between the two groups is the independent variable, we can infer that the independent variable is the cause of any observable difference (e.g., in the amount of happiness they feel at the end of the day).
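The balancing property described above is easy to demonstrate with a simulation. The trait and sample sizes below are illustrative assumptions: an unmeasured pre-existing characteristic (a "happy childhood" score, say) ends up with nearly identical averages in the two conditions after a random split, without the experimenter ever measuring it.

```python
# Simulation of random assignment balancing an unmeasured trait.
# Trait distribution and sample size are illustrative assumptions.
import random

random.seed(99)
mean = lambda xs: sum(xs) / len(xs)

# An arbitrary pre-existing trait (e.g., a 'happy childhood' score, 0-100).
trait = [random.uniform(0, 100) for _ in range(10_000)]

# Randomly assign each person to one of two conditions.
random.shuffle(trait)
group_1, group_2 = trait[:5_000], trait[5_000:]

gap = abs(mean(group_1) - mean(group_2))
print(round(mean(group_1), 1), round(mean(group_2), 1), round(gap, 2))
```

The gap between group means shrinks as the sample grows, which is why random assignment lets any remaining group difference be attributed to the independent variable.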

Here’s another example of the importance of random assignment: Let’s say your class is going to form two basketball teams, and you get to be the captain of one team. The class is to be divided evenly between the two teams. If you get to pick the players for your team first, whom will you pick? You’ll probably pick the tallest members of the class or the most athletic. You probably won’t pick the short, uncoordinated people, unless there are no other options. As a result, your team will be taller and more athletic than the other team. But what if we want the teams to be fair? How can we do this when we have people of varying height and ability? All we have to do is randomly assign players to the two teams. Most likely, some tall and some short people will end up on your team, and some tall and some short people will end up on the other team. The average height of the teams will be approximately the same. That is the power of random assignment!

Other considerations

In addition to using random assignment, you should avoid introducing confounding variables into your experiments. Confounding variables are things that could undermine your ability to draw causal inferences. For example, if you wanted to test if a new happy pill will make people happier, you could randomly assign participants to take the happy pill or not (the independent variable) and compare these two groups on their self-reported happiness (the dependent variable). However, if some participants know they are getting the happy pill, they might develop expectations that influence their self-reported happiness. This is sometimes known as a placebo effect . Sometimes a person just knowing that he or she is receiving special treatment or something new is enough to actually cause changes in behavior or perception. In other words, even if the participants in the happy pill condition were to report being happier, we wouldn’t know if the pill was actually making them happier or if it was the placebo effect—an example of a confound. Even experimenter expectations can influence the outcome of a study. For example, if the experimenter knows who took the happy pill and who did not, and the dependent variable is the experimenter’s observations of people’s happiness, then the experimenter might perceive improvements in the happy pill group that are not really there.

One way to prevent these confounds from affecting the results of a study is to use a double-blind procedure. In a double-blind procedure, neither the participant nor the experimenter knows which condition the participant is in. For example, when participants are given the happy pill or the fake pill, they don’t know which one they are receiving. This way the participants shouldn’t experience the placebo effect, and will be unable to behave as the researcher expects (participant demand). Likewise, the researcher doesn’t know which pill each participant is taking (at least in the beginning—later, the researcher will get the results for data-analysis purposes), which means the researcher’s expectations can’t influence his or her observations. Therefore, because both parties are “blind” to the condition, neither will be able to behave in a way that introduces a confound. At the end of the day, the only difference between groups will be which pills the participants received, allowing the researcher to determine if the happy pill actually caused people to be happier.

Quasi-Experimental Designs

What if you want to study the effects of marriage on a variable? For example, does marriage make people happier? Can you randomly assign some people to get married and others to remain single? Of course not. So how can you study these important variables? You can use a quasi-experimental design. A quasi-experimental design is similar to experimental research, except that random assignment to conditions is not used. Instead, we rely on existing group memberships (e.g., married vs. single). We treat these as the independent variables, even though we don't assign people to the conditions and don't manipulate the variables. As a result, causal inference is more difficult with quasi-experimental designs. For example, married people might differ from unmarried people on a variety of characteristics. If we find that married participants are happier than single participants, it will be hard to say that marriage causes happiness, because the people who got married might have already been happier than the people who remained single.

Because experimental and quasi-experimental designs can seem pretty similar, let's take another example to distinguish them. Imagine you want to know who is a better professor: Dr. Smith or Dr. Khan. To judge their ability, you're going to look at their students' final grades. Here, the independent variable is the professor (Dr. Smith vs. Dr. Khan) and the dependent variable is the students' grades. In an experimental design, you would randomly assign students to one of the two professors and then compare the students' final grades. However, in real life, researchers can't randomly force students to take one professor over the other; instead, the researchers would just have to use the preexisting classes and study them as-is (a quasi-experimental design). Again, the key difference is random assignment to the conditions of the independent variable. Although the quasi-experimental design (where the students choose which professor they want) may seem random, it's most likely not. For example, maybe students heard that Dr. Smith sets low expectations, so slackers prefer that class, whereas Dr. Khan sets higher expectations, so smarter students prefer that one. This introduces a confounding variable (student intelligence) that will almost certainly affect students' final grades, regardless of how skilled the professor is. So, even though a quasi-experimental design resembles an experimental design (i.e., it has an independent variable), because there's no random assignment, you can't reasonably draw the same conclusions that you would with an experimental design.

Non-Experimental Studies

When scientists passively observe and measure phenomena, it is called non-experimental research. Here, we do not intervene and change behavior as we do in experiments. In non-experimental research, we identify patterns of relationships between variables, but we usually cannot infer what causes what.

So, what if you wanted to test whether spending on others is related to happiness, but you don't have $20 to give to each participant? You could use a non-experimental design, which is exactly what Professor Dunn did. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were. This indicates a positive correlation!

If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can't be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. This is why "correlation does not mean causation" is an often-repeated phrase among psychologists.

One particular type of non-experimental research is the longitudinal study. Longitudinal studies are typically observational in nature. They track the same people over time. Some longitudinal studies last a few weeks, some a few months, some a year or more. Some studies have contributed a great deal to a given topic by following the same people over decades. For example, one study followed more than 20,000 Germans for two decades. From these longitudinal data, psychologist Rich Lucas (2003) was able to determine that people who end up getting married indeed start off a bit happier than their peers who never marry. Longitudinal studies like this provide valuable evidence for testing many theories in the social sciences, but they can be quite costly to conduct, especially if they follow many people for many years.

Tradeoffs in Research

Even though there are serious limitations to non-experimental and quasi-experimental research, they are not poor cousins to experimental designs. In addition to selecting a method that is appropriate to the question, many practical concerns may influence the decision to use one method over another. One of these factors is simply resource availability: how much time and money do you have to invest in the research? Often, we survey people even though it would be more precise, but much more difficult, to track them longitudinally. Especially in the case of exploratory research, it may make sense to opt for a cheaper and faster method first. Then, if results from the initial study are promising, the researcher can follow up with a more intensive method.

Beyond these practical concerns, another consideration in selecting a research design is the ethics of the study. For example, in cases of brain injury or other neurological abnormalities, it would be unethical for researchers to inflict these impairments on healthy participants. Nonetheless, studying people with these injuries can provide great insight into the human mind (e.g., if we learn that damage to a particular region of the brain interferes with emotions, we may be able to develop treatments for emotional irregularities). In addition to brain injuries, there are numerous other areas of research that could be useful in understanding the human mind but that pose challenges to a true experimental design, such as the experiences of war, long-term isolation, abusive parenting, or prolonged drug use. None of these are conditions we could ethically manipulate experimentally or randomly assign people to. Therefore, ethical considerations are another crucial factor in determining an appropriate research design.

Research Methods: Why You Need Them

Just look at any major news outlet and you’ll find research routinely being reported. Sometimes the journalist understands the research methodology, sometimes not (e.g., correlational evidence is often incorrectly represented as causal evidence). Often, the media are quick to draw a conclusion for you. After reading this module, you should recognize that the strength of a scientific finding lies in the strength of its methodology. Therefore, in order to be a savvy producer and/or consumer of research, you need to understand the pros and cons of different methods and the distinctions among them.

Dunn, E. W., Aknin, L. B., & Norton, M. I. (2008). Spending money on others promotes happiness. Science, 319(5870), 1687–1688. doi:10.1126/science.1150952

Lucas, R. E., Clark, A. E., Georgellis, Y., & Diener, E. (2003). Re-examining adaptation and the setpoint model of happiness: Reactions to changes in marital status. Journal of Personality and Social Psychology, 84, 527–539.


14.2 True experiments

Learning objectives

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

A true experiment , often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables (as treatments) are manipulated by the researcher, subjects are randomly assigned (i.e., random assignment) to different treatment levels, and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its ability to increase internal validity and help establish causality through treatment manipulation, while controlling for the effects of extraneous variables. As such they are best suited for explanatory research questions.

In true experimental design, research subjects are assigned to either an experimental group, which receives the treatment or intervention being investigated, or a control group, which does not.  Control groups may receive no treatment at all, the standard treatment (which is called “treatment as usual” or TAU), or a treatment that entails some type of contact or interaction without the characteristics of the intervention being investigated.  For example, the control group may participate in a support group while the experimental group is receiving a new group-based therapeutic intervention consisting of education and cognitive behavioral group therapy.

After determining the nature of the experimental and control groups, the next decision a researcher must make is when they need to collect data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle data collection another way? Below, we’ll discuss three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often difficult and can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and sometimes unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The participants in the experimental group will receive CBT, while the participants in the control group will receive a series of videos about social anxiety.

Classical experiments (pretest posttest control group design)

The elements of a classical experiment are (1) random assignment of participants into an experimental and control group, (2) a pretest to assess the outcome(s) of interest for each group, (3) delivery of an intervention/treatment to the experimental group, and (4) a posttest to both groups to assess potential change in the outcome(s).

When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the components of the experiment. Table 14.2 starts us off by laying out what the abbreviations mean.

Figure 14.1 depicts a classical experiment using our example of assessing the intervention of CBT for social anxiety. In the figure, RA denotes random assignment to the experimental group A, and RB denotes random assignment to the control group B. O1 (observation 1) denotes the pretest, Xe denotes the experimental intervention, and O2 (observation 2) denotes the posttest.

[Figure 14.1: Classical experimental design for the CBT example: RA O1 Xe O2 / RB O1 O2]

The more general, or universal, notation for classical experimental design is shown in Figure 14.2.

[Figure 14.2: General notation for the classical experimental design: R O1 X O2 / R O1 O2]

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way (Figure 14.3), with X i denoting treatment as usual:

[Figure 14.3: Classical design with treatment as usual in the control group: RA O1 Xe O2 / RB O1 Xi O2]

Hopefully, these diagrams provide you a visualization of how this type of experiment establishes temporality, a key component of a causal relationship. By administering the pretest, researchers can assess whether the change in the outcome occurred after the intervention. Assuming there is a change in the scores between the pretest and posttest, we would be able to say that yes, the change did occur after the intervention.
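
To make the pretest/posttest logic concrete, here is a toy simulation of the CBT study (a sketch only: the scores, group size, and the assumed ~15-point benefit of CBT are invented for illustration, not drawn from real data):

```python
import random
import statistics

rng = random.Random(1)

# Twenty participants with social-anxiety scores (higher = more anxious),
# randomly assigned to experimental (CBT) and control groups.
scores = [rng.gauss(60, 10) for _ in range(20)]
rng.shuffle(scores)
experimental, control = scores[:10], scores[10:]

pretest_e = statistics.mean(experimental)   # O1 for group A
pretest_c = statistics.mean(control)        # O1 for group B

# Posttest: assume CBT (Xe) lowers scores by about 15 points,
# while the control group drifts only slightly.
posttest_e = statistics.mean(s - 15 + rng.gauss(0, 3) for s in experimental)
posttest_c = statistics.mean(s + rng.gauss(0, 3) for s in control)

change_e = posttest_e - pretest_e   # large drop after the intervention
change_c = posttest_c - pretest_c   # little change without it
```

Because both groups were measured before and after, the drop in the experimental group can be placed in time after the intervention, which is exactly what temporality requires.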

Posttest only control group design

Posttest only control group design involves giving participants only a posttest, just like it sounds. But why would you use this design instead of a pretest posttest design? One reason could be to avoid potential testing effects that can happen when research participants take a pretest.

In research, the testing effect threatens internal validity when the pretest changes the way participants respond on the posttest or subsequent assessments (Flannelly, Flannelly, & Jankowski, 2018). [1] A common example occurs when testing interventions for cognitive impairment in older adults. By taking a cognitive assessment during the pretest, participants get exposed to the items on the assessment and get to "practice" taking it (see, for example, Cooley et al., 2015). [2] They may perform better the second time they take it because they have learned how to take the test, not because there have been changes in cognition. This specific type of testing effect is called the practice effect. [3]

The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome. Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the posttest, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is. To mitigate the influence of testing effects, posttest only control group designs do not administer a pretest to participants. Figure 14.4 depicts this.

[Figure 14.4: Posttest only control group design: RA Xe O1 / RB O1]

A drawback to the posttest only control group design is that without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. The posttest only control group design relies on the random assignment to groups to create groups that are equivalent at baseline because, without a pretest, researchers cannot assess whether the groups are equivalent before the intervention. Researchers must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect threatens internal validity is with the Solomon four group design. Basically, as part of this experiment, there are two experimental groups and two control groups. The first pair of experimental/control groups receives both a pretest and a posttest. The other pair receives only a posttest (Figure 14.5). In addition to addressing testing effects, this design also addresses the problems of establishing time order and equivalent groups in posttest only control group designs.

[Figure 14.5: Solomon four group design: RA O1 Xe O2 / RB O1 O2 / RC Xe O2 / RD O2]

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our posttest measures, and groups C and D would take only our posttest measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
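
A sketch of how the four-group comparison might look in code (the posttest means below are invented for illustration; real analyses would compare full score distributions, not single means):

```python
# Hypothetical posttest social-anxiety means for the four Solomon groups.
posttest = {
    "A: pretest + CBT": 42.0,
    "B: pretest, control": 55.0,
    "C: no pretest + CBT": 45.0,
    "D: no pretest, control": 58.0,
}

# Within each condition, compare pretested vs. unpretested groups.
# A nonzero difference suggests the pretest itself moved the scores.
testing_effect_treated = posttest["A: pretest + CBT"] - posttest["C: no pretest + CBT"]
testing_effect_control = posttest["B: pretest, control"] - posttest["D: no pretest, control"]

# The intervention effect, free of testing effects, comes from C vs. D.
intervention_effect = posttest["C: no pretest + CBT"] - posttest["D: no pretest, control"]
```

In this made-up example, both pretested groups score about 3 points lower than their unpretested counterparts, hinting at a testing effect alongside a roughly 13-point benefit of the intervention.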

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest posttest research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Posttest only research design involves only one point of measurement: after the intervention or treatment. It is a useful design for minimizing the influence of testing effects on our results.
  • Solomon four group research design combines both of the above designs, using two pairs of control and experimental groups. One pair receives both a pretest and a posttest, while the other pair receives only a posttest. This can help uncover the influence of testing effects.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a researcher?
  • What hypothesis(es) would you test using this true experiment?

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

Imagine you are interested in studying child welfare practice. You are interested in learning more about community-based programs aimed to prevent child maltreatment and to prevent out-of-home placement for children.

  • Think about a true experiment you might conduct for this research project. Which design would be best for this research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated) for you to carry out your true experimental design in the real world as a researcher?
  • Flannelly, K. J., Flannelly, L. T., & Jankowski, K. R. B. (2018). Threats to the internal validity of experimental and quasi-experimental research in healthcare. Journal of Health Care Chaplaincy, 24(3), 107–130. https://doi.org/10.1080/08854726.2017.1421019
  • Cooley, S. A., Heaps, J. M., Bolzenius, J. D., Salminen, L. E., Baker, L. M., Scott, S. E., & Paul, R. H. (2015). Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. The Clinical Neuropsychologist, 29(6), 824–835. https://doi.org/10.1080/13854046.2015.1087596
  • Duff, K., Beglinger, L. J., Schultz, S. K., Moser, D. J., McCaffrey, R. J., Haase, R. F., Westervelt, H. J., Langbehn, D. R., Paulsen, J. S., & Huntington's Study Group (2007). Practice effects in the prediction of long-term cognitive outcome in three patient samples: A novel prognostic index. Archives of Clinical Neuropsychology, 22(1), 15–24. https://doi.org/10.1016/j.acn.2006.08.013

True experiment: An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed.

Causality: The ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation, such as experimental or quasi-experimental designs.

Cause and effect: The idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief.

Temporality: A demonstration that a change occurred after an intervention. An important criterion for establishing causality.

Posttest only control group design: An experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a posttest assessment.

Testing effect: The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself.

Practice effect: Improvements in cognitive assessments due to exposure to the instrument.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Research Process Guide


Step 6a: Determining Research Methodology - Quantitative Research Methods

Quantitative research methods offer a few designs to choose from, mostly rooted in the postpositivist worldview: the experimental design, the quasi-experimental design, and the single-subject experimental design (Bloomfield & Fisher, 2019; Creswell & Creswell, 2018). Single-subject (or applied behavioral analysis) research consists of administering an experimental treatment to a person or small group of people over an extended period of time. Quasi-experimental designs include two subcategories: causal-comparative design and correlational design. Causal-comparative research allows the investigator to compare two or more groups in terms of a treatment that has already happened. In correlational design, the researcher examines the relationship between variables or sets of scores (Bloomfield & Fisher, 2019; Creswell & Creswell, 2018).

Generally, these kinds of designs fall into two categories: survey research and experimental research. Survey research provides a quantitative (numerical) description of trends, attitudes, and opinions of a population by examining a sample of that population, using questionnaires or structured interviews for data collection (Fowler, 2008; Fowler, 2014; Bloomfield & Fisher, 2019; Creswell & Creswell, 2018). These studies can be cross-sectional or longitudinal. Ultimately, the goal is to analyze the data and have the findings be generalizable to the entire population.

Experimental research uses the scientific method to determine whether a specific treatment influences an outcome. This design requires random assignment to treatment conditions; the quasi-experimental and single-subject versions use nonrandomized assignment of treatment (Bloomfield & Fisher, 2019).

Survey Methods

Survey research methods are widely used and follow a standard format. Examining survey research in scholarly journals is a great way to familiarize yourself with the format and to determine how to do it and, more importantly, whether this method is right for your research.

How do you prepare to do survey research? Creswell and Creswell (2018), as well as Fowler (2014), have provided a basic framework for the rationale of survey research that you should consider as you decide what kind of methods you will employ to conduct your inquiry.

  • Identify the purpose of your survey research: what variables interest you? This means starting to sketch out a purpose statement such as "The primary purpose of this study is to empirically evaluate whether the number of overtime hours predicts subsequent burnout symptoms in emergency room nurses" (Creswell & Creswell, 2018, p. 149).
  • Write out why a survey method is the appropriate kind of approach for your study. It may be beneficial to discuss the advantages of survey research and the disadvantages of other methods.
  • Decide whether the survey will be cross-sectional or longitudinal. That is, will you gather the data at a single point in time or collect it over time?
  • How will the data be collected, meaning how will the survey be filled out? Mail, phone, internet, structured interviews? Please provide the rationale for your choice.
  • Discuss your population and sampling: who is the target population? What is its size? Who are they in terms of demographic information? How do you plan to identify individuals in this population (random sampling or systematic sampling), and what is the rationale behind your choice? Aim for a sampling fraction that is typical of past studies conducted on this topic.
  • Determine the estimated size of the correlation (r). Using our example above, you might be looking at the relationship between hours worked and burnout symptoms. This might be difficult to determine if no other studies have examined these two variables.
  • Determine the two-tailed alpha value (α). Alpha is the probability of a Type I error: the risk of a false positive. Typically, the accepted alpha value is set at .05, meaning we accept a 5% probability of concluding there is a significant (non-zero) relationship between the two variables (number of hours worked and burnout symptoms) when in fact there is none.
  • The beta value (β) is the probability of a Type II error: the risk of concluding there is no significant effect when there actually is one (a false negative). Beta is commonly set at .20.
  • By plugging these numbers (r, alpha, and beta) into a power analysis tool, you will be able to determine your sample size.
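
As a rough sketch of what such a tool computes, the function below uses the standard Fisher z approximation with only Python's standard library. The function name and the illustrative r = .30 are my own choices for this example, not part of the guide:

```python
from math import atanh, ceil
from statistics import NormalDist

def correlation_sample_size(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a correlation of size r
    with a two-tailed test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    c = atanh(r)                                   # Fisher's z transform of r
    return ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# Expecting a modest correlation (r = .30) between overtime hours and burnout:
n = correlation_sample_size(0.30)  # about 85 participants
```

Note the tradeoff this formula makes visible: the smaller the expected correlation, the larger the sample needed to detect it reliably.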

Survey Instrument

As you determine what instrument you will use, whether a survey you create yourself or one developed by other researchers, you should consider the following (Fowler, 2008; Creswell & Creswell, 2018; Bloomfield & Fisher, 2019):

  • Name and give credit to the instrument and the researchers who developed it.  Or discuss your use of proprietary or free survey products online (Qualtrics, Survey Monkey).
  • Content validity (did the survey measure what it was intended to measure?)
  • Predictive validity (do scores predict a criterion measure? Do the scores correlate with other results?)
  • Construct validity (does the survey measure hypothetical concepts?)
  • What is the internal consistency of the survey? Does each item on the survey behave in the same way? You can also assess test-retest reliability: whether the instrument is stable over time.
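
Internal consistency is often summarized with Cronbach's alpha. Below is a minimal sketch in Python; the formula is the standard one, but the helper name and the Likert responses are made up for illustration:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a survey: item_scores holds one list of
    respondent scores per item, aligned across respondents."""
    k = len(item_scores)
    sum_item_var = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(per_item) for per_item in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Three survey items answered by five respondents (1-5 Likert scale)
items = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 2, 4, 2],
]
alpha = cronbach_alpha(items)  # values above ~.70 are conventionally acceptable
```

Test-retest reliability, by contrast, would correlate the same respondents' scores across two administrations of the instrument.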

Experimental Design

Experimental design also follows a standard form, with three components: participants and design, procedure, and measurement. There are a few considerations that Bouma et al. (2012), Bloomfield and Fisher (2019), and Creswell and Creswell (2018) suggest you determine early in your design.

  • Random Sampling - a sampling technique in which each member of the population has an equal probability of being chosen, meant to produce an unbiased representation of the total population.
  • Quota Sampling - a non-probability sampling method in which researchers build a sample of individuals chosen to represent a population.
  • Convenience Sampling - a method in which researchers collect data from a conveniently available pool of participants.
  • Probability Sampling - sampling techniques that aim to identify a representative sample from which to collect data.
  • The idea of randomized assignment is a distinct feature of experimental design. When participants are randomly assigned to groups, the process is called a true experiment. If this is the case with your study, you should discuss how, when, and why you are assigning participants to treatment groups. You need to describe in detail how each participant is placed, to eliminate systematic bias in assigning participants. If your study design deals with more than one variable or treatment that cannot utilize random assignment (e.g., whether female school children benefit from a different teaching technique than male school children), this would change your design from a true experimental design to a quasi-experimental design.
  • As with survey research, it is essential to conduct a power analysis for sample size. The steps are the same as for survey design; however, the focus for a power analysis of an experimental design is on effect size, meaning the estimated difference between groups on the manipulated variables of interest. Please review the steps for power analysis in the survey research section.
  • Identify the variables in the study, specifically the dependent and independent variables, as well as any other variables you intend to measure. For example, you might want to think about participant demographic variables, variables that might impact your study design such as time of day (e.g., energy levels might fluctuate during the day, which could affect measurement), and other variables that might impact your study's outcomes.
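
The standard-library approach sketched in the survey section can be adapted for a two-group experiment, where the input is an expected standardized effect size (Cohen's d) rather than a correlation. Again, the function name and the d = .5 example are my own assumptions for illustration:

```python
from math import ceil
from statistics import NormalDist

def two_group_sample_size(d, alpha=0.05, power=0.80):
    """Approximate participants needed PER GROUP to detect a standardized
    mean difference d between two groups with a two-tailed test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" effect (d = .5) at conventional alpha = .05 and power = .80:
n_per_group = two_group_sample_size(0.5)  # about 63 per group
```

Smaller expected differences between groups demand considerably larger samples, which is one reason true experiments can be so resource intensive.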

Instrumentation

Just as with survey research, it is important to discuss how you are collecting your data: what instrument or instruments are used, what scales are used, and what their reliability and validity are based on past uses (Bouma et al., 2012; Creswell & Creswell, 2018; Bloomfield & Fisher, 2019). Some quantitative experimental studies use data sets that have already been collected, such as those from the National Center for Education Statistics (NCES). In that case, you will be able to discuss validity and reliability easily, as they are well established. However, if you are collecting your own data, you must discuss in detail what materials are used in the manipulation of variables. For example, you might want to pilot test the experiment so that you have detailed knowledge of the procedure (Bouma et al., 2012; Creswell & Creswell, 2018).

Also, often in experimental design, you don’t want the participants to know which variables are being manipulated or which group they are being assigned to. To be sure you are in line with IRB regulations (see IRB section), you want to draft a letter that will be used to explain the procedures and the study’s purpose to the participants (Creswell & Creswell, 2018). If there is any deception used in the study, be sure to check the IRB guidelines to ensure that you have all procedures and documents approved by Kean University’s IRB.

Measurement and Data Analysis for Quantitative Methods

It is important to reiterate that there are several ways to collect data for a quantitative study. The data is always numerical, as opposed to qualitative data, which is largely narrative. The most common data collection methods for quantitative research are:

  • Close-ended surveys
  • Close-ended questionnaires
  • Structured interviews

The data is collected across populations using a large sample size and is then analyzed using statistical analysis, so that the results are generalizable across populations. However, before you collect the data, you need to determine exactly what you are proposing to measure as you choose your variables. There are several kinds of statistical measurements in quantitative research, each with its own purpose and objective. Ultimately, you need to decide if you are going to describe, explain, predict, or control your numerical data.

Quantitative data collection typically means there is a lot of data. Once the data is gathered, it may seem messy and disorganized at first. Your job as the researcher is to organize it and then make the significance of the data clear. You do this by cleaning your data, applying “measurements” or scales, and then running statistical analysis tests in your statistical analysis software program.

There are several purposes to statistical analysis in a quantitative study, such as (Kumar, 2015):

  • Summarize your data by identifying what is typical and what is atypical within a group.
  • Identify the rank of an individual or entity within a group.
  • Demonstrate the relationship between or among variables.
  • Show similarities and differences among groups.
  • Identify any error that is inherent in a sample.
  • Test for significance.
  • Support you in making inferences about the population being studied.
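Some of the purposes above, such as summarizing what is typical within a group and showing differences between groups, can be illustrated with a minimal Python sketch; the scores below are invented for illustration only.

```python
from statistics import mean, stdev

# Hypothetical post-test scores for two groups (illustrative numbers only).
scores = {
    "treatment": [72, 85, 78, 90, 81, 77],
    "control":   [65, 70, 68, 74, 66, 71],
}

# Summarize each group: what is typical (mean) and how spread out it is (sd).
for group, values in scores.items():
    print(f"{group}: mean={mean(values):.1f}, sd={stdev(values):.1f}, "
          f"min={min(values)}, max={max(values)}")

# The difference between group means is a first, descriptive look at the data;
# a significance test would be needed before drawing inferences.
gap = mean(scores["treatment"]) - mean(scores["control"])
print(f"difference between group means: {gap:.1f}")
```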

It is important to know that in order to properly analyze your numerical data, you will need access to statistical analysis software such as SPSS. The OCIS Help Desk website provides information on how to access SPSS under the Remote Learning (Students) section.

Once you have collected your numerical data, you can run a series of statistical tests on your data, depending on your research questions.

There are four kinds of statistical measurements that you will be able to choose from in order to determine the best statistical tests to be utilized to explore your research inquiry. These measurements are also referred to as scales, and have very particular sets of statistical analysis tools that go along with each kind of scale (Bryman & Cramer, 2009).

Nominal measurements are labels (names, hence nominal) of specific categories within mutually exclusive populations or treatment groups. These labels delineate non-numerical data such as gender, city of birth, race, ethnicity, or marital status (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Ordinal measurements detail the order in which data are organized and ranked. These measures or scales deal with greater-than (>) and less-than (<) comparisons within a data set. Again, the data are organized (named/categorized) and ranked (ordinal), such as class rank, ability level (beginner, intermediate, expert), or Likert scale answers (strongly agree, agree, undecided, disagree, strongly disagree) (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Interval measurements take data and name them (nominal), rank them (ordinal), and then distribute them in equal intervals. A zero point may be established, but it is arbitrary rather than absolute. Temperature in degrees Celsius or Fahrenheit is a classic example: zero degrees does not signify the absence of temperature (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Ratio measurements allow data to be measured in equal units (interval) with an absolute zero point established. In ratio measurements, the absolute zero value signifies the absence of the variable; for example, 0 lbs means the absence of weight. Height, weight, and temperature measured in Kelvin are all examples of variables that can be measured on a ratio scale (Bryman & Cramer, 2009; Ong & Puteh, 2017).
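As a rough summary of the four scales, the sketch below pairs each level of measurement with analysis tools that are commonly considered appropriate for it. The pairings are textbook rules of thumb, not a complete decision procedure, and the helper function is purely illustrative.

```python
# Rough lookup from measurement scale to commonly used analysis tools.
SCALE_TOOLS = {
    "nominal":  ["frequency counts", "mode", "chi-square test"],
    "ordinal":  ["median", "percentiles", "Mann-Whitney U", "Spearman correlation"],
    "interval": ["mean", "standard deviation", "t-test", "Pearson correlation"],
    "ratio":    ["mean", "standard deviation", "t-test", "geometric mean"],
}

def suggest_tools(scale: str) -> list[str]:
    """Return the rule-of-thumb tools for a given level of measurement."""
    try:
        return SCALE_TOOLS[scale.lower()]
    except KeyError:
        raise ValueError(f"unknown scale: {scale!r}") from None

print(suggest_tools("ordinal"))
```

The point of the lookup is simply that the scale of a variable constrains which statistics are meaningful: a mean of nominal categories, for instance, is not interpretable.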

Bloomfield, J., & Fisher, M. J. (2019). Quantitative research design. Journal of the Australasian Rehabilitation Nurses Association, 22 (2), 27-30. https://doi-org.kean.idm.oclc.org/10.33235/jarna.22.2.27-30

Bouma, G. D., Ling, R., & Wilkinson, L. (2012). The research process (2nd Canadian ed.). Oxford University Press.

Bryman, A., & Cramer, D. (2009). Quantitative data analysis with SPSS 14, 15 & 16: A guide for social scientists. Routledge/Taylor & Francis Group.

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.

Fowler, F. J., Jr. (2008). Survey research methods (4th ed.). Sage.

Fowler, F. J., Jr. (2014). The problem with survey research. Contemporary Sociology, 43 (5), 660-662.

Kraemer, H. C., & Blasey, C. (2015). How many subjects?: Statistical power analysis in research. Sage.

Kumar, S. (2015). IRS introduction to research in special and inclusive education. [PowerPoint slides 4, 5, 37, 38, 39,43]. Informační systém Masarykovy univerzity. https://is.muni.cz/el/1441/podzim2015/SP_IRS/

Ong, M. H. A., & Puteh, F. (2017). Quantitative data analysis: Choosing between SPSS, PLS, and AMOS in social science research. International Interdisciplinary Journal of Scientific Research, 3 (1), 14-25.

Sharma, G. (2017). Pros and cons of different sampling techniques. International Journal of Applied Research, 3 (7), 749-752.

  • Last Updated: Jun 29, 2023 1:35 PM
  • URL: https://libguides.kean.edu/ResearchProcessGuide


Social Sci LibreTexts

13.2: True experimental design


  • Matthew DeCarlo, Cory Cummings, & Kate Agnelli
  • Open Social Work Education

Learning Objectives

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design , often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than the one we are trying to evaluate. For example, we might have a control group that is made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a control group , with participants randomly assigned, and an experimental group . This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often pretty difficult, since as I mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.

Table 13.1 Experimental research design notations

RA, RB — random assignment groups A and B
O1, O2 — observations or measurements (O1 = pretest, O2 = post-test)
Xe — the experimental intervention
Xi — an alternative intervention, such as treatment as usual

Pretest and post-test control group design

In pretest and post-test control group design , participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test .


Figure 13.1 Pretest and post-test control group design

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pretest, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.


Figure 13.2 Pretest and post-test control group design testing CBT as an intervention

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).


Figure 13.3 Pretest and post-test control group design with treatment as usual instead of no treatment

Hopefully, these diagrams provide you with a visualization of how this type of experiment establishes time order , a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
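The pre-to-post comparison the diagrams describe can also be sketched numerically. The sketch below computes each group’s mean change and the difference between those changes; all scores are invented for illustration, and a real analysis would add a significance test rather than comparing raw means.

```python
from statistics import mean

# Hypothetical social-anxiety scores (lower = less anxiety); numbers are invented.
experimental = {"pre": [30, 28, 32, 27, 31], "post": [21, 20, 24, 19, 22]}
control      = {"pre": [29, 31, 28, 30, 32], "post": [27, 30, 26, 29, 31]}

def mean_change(group: dict) -> float:
    """Average post-minus-pre difference; negative means symptoms dropped."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

# A crude effect estimate: how much more did the experimental group change?
effect = mean_change(experimental) - mean_change(control)
print(f"experimental change: {mean_change(experimental):.1f}")
print(f"control change:      {mean_change(control):.1f}")
print(f"difference in change: {effect:.1f}")
```

Because both groups were measured before and after, any shared influence (like time passing) affects both changes, and the difference between the changes isolates the intervention’s contribution.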

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).


Figure 13.4 Post-test only control group design

But why would you use this design instead of a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.


Figure 13.5 Solomon four-group design

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design for minimizing the impact of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?


Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari . Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs .

Random assignment is a key part of experimental design . It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.


Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

You use three groups of participants that are each given a different level of the independent variable:

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment helps you make sure that the treatment groups don’t differ in systematic or biased ways at the start of the experiment.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results. For example, suppose participants are grouped according to where they were recruited:

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences .

You use a simple random sample to collect data. Because you have access to the whole population (all employees), you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable .

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Roll a die: When you have three groups, for each number on the list, roll a die to decide which group they will be in. For example, rolling 1 or 2 could place them in the control group, 3 or 4 in the first experimental group, and 5 or 6 in the second experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power .

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design , you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study . In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.

Cite this Scribbr article


Bhandari, P. (2023, February 13). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/research-methods/random-assignment-experiments/


Chapter 10 Experimental Research


Basic Concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group ), while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favorably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, to test the effects of a new drug intended to treat a medical condition like dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group). Here, the first two groups are experimental groups and the third is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine whether the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the “cause” in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and assures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalizability) of findings. However, random assignment is related to design, and is therefore most closely related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
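The distinction between the two processes can be sketched in a few lines of Python; the population, sample size, and group sizes are invented for illustration.

```python
import random

rng = random.Random(0)

# Random selection: draw a sample from the population (bears on external validity).
population = [f"employee_{i}" for i in range(1, 101)]  # hypothetical sampling frame
sample = rng.sample(population, k=20)                  # each unit has an equal chance

# Random assignment: sort the selected sample into groups (bears on internal validity).
shuffled = sample[:]
rng.shuffle(shuffled)
control, treatment = shuffled[:10], shuffled[10:]
print(len(control), len(treatment))  # 10 10
```

Selection decides who enters the study at all; assignment decides which condition each selected person experiences, which is why the two operations support different kinds of validity.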

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

  • History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.
  • Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.
  • Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.
  • Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.
  • Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.
  • Regression threat , also called regression to the mean, refers to the statistical tendency of a group’s overall performance on a measure during a posttest to regress toward the mean of that measure rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-Group Experimental Designs

The simplest true experimental designs are two-group designs involving one treatment group and one control group, and are ideally suited for testing the effects of a single independent variable that can be manipulated as a treatment. The two basic two-group designs are the pretest-posttest control group design and the posttest-only control group design, while variations may include covariance designs. These designs are often depicted using a standardized design notation, where R represents random assignment of subjects to groups, X represents the treatment administered to the treatment group, and O represents pretest or posttest observations of the dependent variable (with different subscripts to distinguish between pretest and posttest observations of treatment and control groups).

Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups and subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.


Figure 10.1. Pretest-posttest control group design

The effect E of the experimental treatment in the pretest-posttest design is measured as the difference in the posttest and pretest scores between the treatment and control groups:

E = (O₂ – O₁) – (O₄ – O₃)

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement (especially if the pretest introduces unusual topics or content).
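As a quick numerical illustration of the formula above (the group means here are hypothetical), the effect E can be computed directly from the four observations:

```python
# Hypothetical pretest/posttest mean scores.
O1, O2 = 55.0, 72.0   # treatment group: pretest (O1), posttest (O2)
O3, O4 = 54.0, 60.0   # control group: pretest (O3), posttest (O4)

# E is the difference in gain scores between treatment and control groups.
E = (O2 - O1) - (O4 - O3)
print(E)  # prints 11.0: the treatment group gained 11 points more
```

Subtracting the control group's gain removes change that would have occurred anyway (e.g., through maturation or history), isolating the treatment's contribution.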

Posttest-only control group design . This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.


Figure 10.2. Posttest only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

E = (O₁ – O₂)

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance designs . Sometimes, measures of dependent variables may be influenced by extraneous variables called covariates . Covariates are those variables that are not of central interest to an experimental study, but should nevertheless be controlled in an experimental design in order to eliminate their potential effect on the dependent variable and therefore allow for a more accurate detection of the effects of the independent variables of interest. The experimental designs discussed earlier did not control for such covariates. A covariance design (also called a concomitant variable design) is a special type of pretest-posttest control group design where the pretest measure is essentially a measurement of the covariates of interest rather than that of the dependent variables. The design notation is shown in Figure 10.3, where C represents the covariates:


Figure 10.3. Covariance design

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups as:

E = (O₁ – O₂)
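In practice, this adjustment is usually carried out with an analysis of covariance. The following sketch (simulated data and effect sizes are assumptions of the illustration, not part of the original text) estimates the covariate-adjusted treatment effect by regressing the posttest score on a treatment indicator and the covariate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical data: a covariate (e.g., prior ability), random assignment,
# and a posttest score with a true treatment effect of 5 points.
covariate = rng.normal(50, 10, n)
treatment = rng.integers(0, 2, n)          # 0 = control, 1 = treatment
posttest = 20 + 0.8 * covariate + 5 * treatment + rng.normal(0, 3, n)

# Design matrix: intercept, treatment indicator, covariate.
X = np.column_stack([np.ones(n), treatment, covariate])
coefs, *_ = np.linalg.lstsq(X, posttest, rcond=None)

# coefs[1] is the covariate-adjusted estimate of the treatment effect.
print(round(coefs[1], 2))
```

Controlling for the covariate soaks up variance in the posttest that is unrelated to the treatment, so the effect estimate is more precise than a raw posttest comparison.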

Factorial Designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need designs with four or more groups. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each sub-division of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The most basic factorial design is a 2 x 2 factorial design, which consists of two treatments, each with two levels (such as high/low or present/absent). For instance, let’s say that you want to compare the learning outcomes of two different types of instructional techniques (in-class and online instruction), and you also want to examine whether these effects vary with the time of instruction (1.5 or 3 hours per week). In this case, you have two factors: instructional type and instructional time, each with two levels (in-class and online for instructional type, and 1.5 and 3 hours/week for instructional time), as shown in Figure 10.4. If you wish to add a third level of instructional time (say 6 hours/week), then the second factor will consist of three levels and you will have a 2 x 3 factorial design. On the other hand, if you wish to add a third factor such as group work (present versus absent), you will have a 2 x 2 x 2 factorial design. In this notation, each number represents a factor, and the value of each factor represents the number of levels in that factor.


Figure 10.4. 2 x 2 factorial design

Factorial designs can also be depicted using a design notation, such as that shown on the right panel of Figure 10.4. R represents random assignment of subjects to treatment groups, X represents the treatment groups themselves (the subscripts of X represent the level of each factor), and O represents observations of the dependent variable. Notice that the 2 x 2 factorial design will have four treatment groups, corresponding to the four combinations of the two levels of each factor. Correspondingly, the 2 x 3 design will have six treatment groups, and the 2 x 2 x 2 design will have eight treatment groups. As a rule of thumb, each cell in a factorial design should have a minimum sample size of 20 (this estimate is derived from Cohen’s power calculations based on medium effect sizes). So a 2 x 2 x 2 factorial design requires a minimum total sample size of 160 subjects, with at least 20 subjects in each cell. As you can see, the cost of data collection can increase substantially with more levels or factors in your factorial design. Sometimes, due to resource constraints, some cells in such factorial designs may not receive any treatment at all; these are called incomplete factorial designs . Such incomplete designs hurt our ability to draw inferences about the incomplete factors.
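The cell-count arithmetic above can be captured in a short helper; the 20-per-cell figure is the rule of thumb just stated, not a universal constant:

```python
def min_sample_size(levels, per_cell=20):
    """Minimum total sample size for a full factorial design,
    using the rule of thumb of 20 subjects per cell."""
    cells = 1
    for k in levels:
        cells *= k          # total cells = product of levels per factor
    return cells * per_cell

print(min_sample_size([2, 2]))       # 2 x 2 design: prints 80
print(min_sample_size([2, 3]))       # 2 x 3 design: prints 120
print(min_sample_size([2, 2, 2]))    # 2 x 2 x 2 design: prints 160
```

Because the cell count is a product, adding a factor multiplies (rather than adds to) the required sample, which is why data collection costs grow so quickly.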

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for 3 hours/week of instructional time than for 1.5 hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects, when present, dominate main effects, and it is not meaningful to interpret main effects if interaction effects are significant.

Hybrid Experimental Designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomized block design, the Solomon four-group design, and the switched replications design.

Randomized block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between a treatment group (receiving the same treatment) and a control group (see Figure 10.5). The purpose of this design is to reduce the “noise” or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.


Figure 10.5. Randomized blocks design

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs but not in posttest only designs. The design notation is shown in Figure 10.6.


Figure 10.6. Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organizational contexts where organizational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.


Figure 10.7. Switched replication design

Quasi-Experimental Designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organization is used as the treatment group, while another section of the same class or a different organization in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of a certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment. For instance, the quasi-experimental version of pretest-posttest control group design is called nonequivalent groups design (NEGD), as shown in Figure 10.8, with random assignment R replaced by non-equivalent (non-random) assignment N . Likewise, the quasi-experimental version of switched replication design is called non-equivalent switched replication design (see Figure 10.9).


Figure 10.8. NEGD design


Figure 10.9. Non-equivalent switched replication design

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression-discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to treatment or control group based on a cutoff score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardized test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program. The design notation can be represented as follows, where C represents the cutoff score:


Figure 10.10. RD design

Because of the use of a cutoff score, it is possible that the observed results may be a function of the cutoff score rather than the treatment, which introduces a new threat to internal validity. However, using the cutoff score also ensures that limited or costly resources are distributed to people who need them the most rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.


Figure 10.11. Proxy pretest design

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, suppose you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the change in any specific customer’s satisfaction score before and after the implementation; you can only compare average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.


Figure 10.12. Separate pretest-posttest samples design

Nonequivalent dependent variable (NEDV) design . This is a single-group pre-post quasi-experimental design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other measure is not. For instance, if you are designing a new calculus curriculum for high school students, this curriculum is likely to influence students’ posttest calculus scores but not algebra scores. However, the posttest algebra scores may still vary due to extraneous factors such as history or maturation. Hence, the pre-post algebra scores can be used as a control measure, while the pre-post calculus scores can be treated as the treatment measure. The design notation, shown in Figure 10.13, indicates the single group by a single N , followed by pretest O 1 and posttest O 2 for calculus and algebra for the same group of students. This design is weak in internal validity, but its advantage lies in not having to use a separate control group.

An interesting variation of the NEDV design is a pattern matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns, is a powerful way of alleviating internal validity concerns in the original NEDV design.


Figure 10.13. NEDV design

Perils of Experimental Research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, many experimental studies use inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimulus across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and asked to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artifact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to verify the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simpler and more familiar to the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

  • Social Science Research: Principles, Methods, and Practices. Authored by : Anol Bhattacherjee. Provided by : University of South Florida. Located at : http://scholarcommons.usf.edu/oa_textbooks/3/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Experimental Design

Learning objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment , each participant is tested in only one condition. For example, a researcher with a sample of 100 university students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This matching is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called  random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
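The coin-flip and random-integer procedures above can be sketched in a few lines of Python; the participant count and condition labels here are hypothetical:

```python
import random

random.seed(1)  # for a reproducible illustration

def assign(conditions, n_participants):
    """Strict random assignment: each participant gets a condition
    independently, with each condition equally likely."""
    return [random.choice(conditions) for _ in range(n_participants)]

assignments = assign(["A", "B", "C"], 30)
```

Because each draw is independent, both criteria of strict random assignment are met, but the resulting group sizes will generally be unequal.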

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence.  Table 6.2  shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.
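Block randomization as described above can be sketched as follows; each block is a random permutation of all the conditions, so group sizes stay balanced:

```python
import random

random.seed(1)  # for a reproducible illustration

def block_randomize(conditions, n_blocks):
    """Each condition occurs exactly once, in random order,
    within every block of the sequence."""
    sequence = []
    for _ in range(n_blocks):
        block = conditions[:]      # copy, then shuffle within the block
        random.shuffle(block)
        sequence.extend(block)
    return sequence

# Nine participants, three conditions -> three blocks of three.
sequence = block_randomize(["A", "B", "C"], 3)
```

The sequence is generated before any participants are tested, and each new participant simply takes the next condition in it.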

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this possibility is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a  treatment  is any intervention meant to change people’s behaviour for the better. This  intervention  includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a  treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition, participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work, such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps, are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people's expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008)[1].

Placebo effects are interesting in their own right (see the note "The Powerful Placebo" below), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment's effectiveness. When participants in a treatment condition take a pill, for example, those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a "sugar pill"). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one's problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants' expectations. This difference is what is shown by a comparison of the two outer bars in Figure 6.2.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition, even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This disclosure allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply "Does it work?" but "Does it work better than what is already available?"

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological, including asthma, ulcers, and warts (Shapiro & Shapiro, 1999)[2]. There is even evidence that placebo surgery (also called "sham surgery") can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002)[3]. The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, "This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function" (p. 85).

Within-Subjects Experiments

In a within-subjects experiment, each participant is tested under all conditions. Consider an experiment on the effect of a defendant's physical attractiveness on judgments of his guilt. In a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on, because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable, and therefore make the data less "noisy" and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book. However, not all experiments can use a within-subjects design, nor would it always be desirable to do so.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants' behaviour in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This type of effect is called a context effect. For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This knowledge could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be "fair."

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
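The procedure above can be sketched in code: the possible orders are simply the permutations of the conditions, and each participant is randomly assigned to one of them. This is an illustrative sketch, not a procedure from the text; the names are our own.

```python
import itertools
import random

def counterbalance(participants, conditions, seed=None):
    """Randomly assign each participant to one possible order of the conditions."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(conditions))  # 3 conditions -> 6 orders
    return {p: rng.choice(orders) for p in participants}

schedule = counterbalance(["P1", "P2", "P3"], ["A", "B", "C"], seed=42)
for participant, order in schedule.items():
    print(participant, "->", "".join(order))
```

With three conditions, `orders` contains all six sequences listed in the text (ABC, ACB, BAC, BCA, CAB, CBA), and each participant receives one of them at random.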

An efficient way of counterbalancing is through a Latin square design, in which the number of orders equals the number of treatments and, as in a Sudoku puzzle, no treatment repeats within a row or a column. For example, if you have four treatments, you must have four orders. A four-by-four Latin square design could look like:

A B C D
B C D A
C D A B
D A B C
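One simple way to construct such a square is to rotate the first row one position for each subsequent row, so that no treatment repeats within any row or column. The helper below is a hypothetical illustration, not part of the original text.

```python
def latin_square(treatments):
    """Build a cyclic Latin square: each row shifts the previous row left by one."""
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))
```

Note that a cyclic square only guarantees the no-repeats property; balancing which treatment immediately follows which (as in a Williams design) requires a different construction.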

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.
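Analyzing the data separately for each order can be sketched as follows. The guilt ratings here are invented purely for illustration; the point is only the mechanics of grouping by order and comparing means.

```python
# Hypothetical guilt ratings keyed by the order in which the conditions were run:
# "AB" = attractive defendant judged first, "BA" = unattractive judged first.
data = {
    "AB": [7, 6, 8, 7],
    "BA": [5, 6, 5, 6],
}

# Mean rating per order; a large gap between orders suggests a carryover effect.
means = {order: sum(ratings) / len(ratings) for order, ratings in data.items()}
print(means)  # {'AB': 7.0, 'BA': 5.5}
```

If the order made no difference, the two means should be close; a substantial gap like the one in this toy data would prompt a closer look at practice, fatigue, or context effects.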

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this problem, he asked participants to rate how large a number was on a scale of 1 to 10, where 1 was "very very small" and 10 was "very very large". One group of participants was asked to rate the number 9 and another group was asked to rate the number 221 (Birnbaum, 1999)[4]. Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this difference occurs because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
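Generating a different random presentation order for each participant can be sketched as below. The stimulus labels and function name are our own, invented for illustration.

```python
import random

def mixed_sequence(set_a, set_b, seed=None):
    """Interleave two stimulus sets in a fresh random order for one participant."""
    rng = random.Random(seed)
    sequence = list(set_a) + list(set_b)
    rng.shuffle(sequence)
    return sequence

attractive = [f"attr_{i}" for i in range(10)]
unattractive = [f"unattr_{i}" for i in range(10)]
print(mixed_sequence(attractive, unattractive, seed=7))
```

Calling the function once per participant (with a different seed or no seed) gives each participant their own randomized mix of the 20 defendants, after which the mean rating for each stimulus type can be computed separately.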

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This possibility means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this design is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This difficulty is true for many designs that involve a treatment meant to produce long-term change in participants’ behaviour (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often take exactly this type of mixed methods approach.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.
Exercises

  • Practice: For each of the following topics, decide whether the research question would be better studied with a between-subjects design or a within-subjects design, and explain why.
  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g.,  dog ) are recalled better than abstract nouns (e.g.,  truth ).
  • Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.
  • Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59, 565–590.
  • Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician. Baltimore, MD: Johns Hopkins University Press.
  • Moseley, J. B., O'Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347, 81–88.
  • Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4(3), 243–249.

Research Methods in Psychology Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Counteracting Methodological Errors in Behavioral Research, pp. 39–54

Random Assignment

Gideon J. Mellenbergh

First Online: 17 May 2019

A substantial part of behavioral research is aimed at the testing of substantive hypotheses. In general, a hypothesis-testing study investigates the causal influence of an independent variable (IV) on a dependent variable (DV). The discussion is restricted to IVs that can be manipulated by the researcher, such as experimental (E-) and control (C-) conditions. Association between IV and DV does not imply that the IV has a causal influence on the DV. The association can be spurious because it is caused by another variable (OV). OVs that cause spurious associations come from (1) the participant, (2) the research situation, and (3) the reactions of the participants to the research situation. If participants select their own (E- or C-) condition, or others select a condition for them, the assignment to conditions is usually biased (e.g., males prefer the E-condition and females the C-condition), and participant variables (e.g., participants' sex) may cause a spurious association between the IV and DV. This selection bias is a systematic error of a design. It is counteracted by random assignment of participants to conditions. Random assignment guarantees that all participant variables are related to the IV only by chance, and turns systematic error into random error. Random errors decrease the precision of parameter estimates. Random error variance is reduced by including auxiliary variables in the randomized design. A randomized block design uses an auxiliary variable to divide the participants into relatively homogeneous blocks, and randomly assigns participants to the conditions within each block. A covariate is an auxiliary variable that is used in the statistical analysis of the data to reduce the error variance. Cluster randomization randomly assigns clusters (e.g., classes of students) to conditions, which yields specific problems. Random assignment should not be confused with random selection: random assignment controls for selection bias, whereas random selection makes it possible to generalize study results from a sample to the population.
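A randomized block design can be sketched in code: participants are shuffled and assigned to conditions separately within each block, so every block contributes roughly equally to each condition. This is an illustrative sketch; the function name and data are our own.

```python
import random

def block_randomize(blocks, conditions=("E", "C"), seed=None):
    """Randomly assign participants to conditions separately within each block."""
    rng = random.Random(seed)
    assignment = {}
    for block, members in blocks.items():
        shuffled = members[:]
        rng.shuffle(shuffled)
        for i, participant in enumerate(shuffled):
            assignment[participant] = conditions[i % len(conditions)]
    return assignment

blocks = {"female": ["F1", "F2", "F3", "F4"], "male": ["M1", "M2", "M3", "M4"]}
print(block_randomize(blocks, seed=3))
```

Because assignment happens within blocks, the blocking variable (here, participants' sex) is guaranteed to be balanced across conditions rather than merely balanced in expectation.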


Mellenbergh, G. J. (2019). Random assignment. In Counteracting methodological errors in behavioral research (pp. 39–54). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-12272-0_4


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Ever since their school days, students have performed scientific experiments whose results illustrate and test the laws and theories of science. These experiments rest on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach, using two sets of variables. The first set of variables is held constant and used to measure the differences in the second set. The best example of an experimental research method is quantitative research.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which the research study is built. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when a group, or several groups, are observed after factors of cause and effect have been applied. The pre-experimental design helps researchers understand whether further investigation of the groups under observation is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher's hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. In a true experiment, a researcher must satisfy three requirements —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution, i.e., random assignment of participants to groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word "quasi" means "resembling". A quasi-experimental design is similar to a true experimental design, but the two differ in the assignment of the control group: an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not feasible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The method is not tied to a particular subject area; researchers in any field can implement it.
  • The results are specific.
  • After the results are analyzed, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design does not rest on basic assumptions or postulates, it is fundamentally flawed, and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are one of the most trusted scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has limitations of some kind. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

The most important yet less talked about topic is the ethical issue. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage the ethical norms along with your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In one experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. It is nevertheless widely used in research institutes and commercial industries because of the conclusiveness of its results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also supports valid measurement of cause-and-effect relationships in the group of interest.

Experimental research design lays the foundation of a study and structures the research to support a sound decision-making process.

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental.

The differences between an experimental and a quasi-experimental design are: 1. assignment to groups in quasi-experimental research is non-random, unlike in a true experimental design, where it is random; and 2. a true experimental design always has a control group, which may not always be present in quasi-experimental research.

Experimental research establishes a cause-and-effect relationship by testing a theory or hypothesis using experimental and control groups. In contrast, descriptive research describes a study or topic by defining its variables and answering questions about them.


Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas. Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.


Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Frequently asked questions about quasi-experimental designs

There are several common differences between true and quasi-experimental designs.

Example of a true experiment vs a quasi-experiment

Suppose you want to test whether a new therapy improves outcomes for patients at a mental health clinic; in a true experiment, you would randomly assign patients to the new therapy or the standard treatment. However, for ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design: if some patients at the clinic already receive the new therapy while others receive the standard course of treatment, you can use these pre-existing groups to study the symptom progression of the two sets of patients.


Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In nonequivalent group design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment, the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways: they are nonequivalent groups.

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.
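One way researchers "control for" a confounding variable in the analysis is to compare treatment and control outcomes within strata of that variable. The sketch below uses made-up outcome scores and an assumed age-band confounder; it is illustrative, not a full statistical adjustment:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (group, age_band, outcome_score). The groups are
# pre-existing (nonequivalent), so we compare outcomes within each age band.
records = [
    ("treatment", "young", 7.1), ("treatment", "young", 6.8),
    ("treatment", "old", 5.0), ("treatment", "old", 5.4),
    ("control", "young", 6.0), ("control", "young", 6.2),
    ("control", "old", 4.1), ("control", "old", 4.5),
]

def stratified_differences(rows):
    """Mean treatment-minus-control outcome difference within each stratum."""
    by_cell = defaultdict(list)
    for group, stratum, outcome in rows:
        by_cell[(group, stratum)].append(outcome)
    strata = {stratum for _, stratum, _ in rows}
    return {
        s: mean(by_cell[("treatment", s)]) - mean(by_cell[("control", s)])
        for s in strata
    }

diffs = stratified_differences(records)
```

If the within-stratum differences are similar across strata, the age band is less likely to explain the observed treatment effect.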

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, suppose a selective school admits students who score above a certain cutoff on an entrance exam. Since the exact cutoff score is arbitrary, the students near the threshold (those who just barely pass the exam and those who fail by a very small margin) tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any differences in outcomes between these two groups must come from the school they attended.
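The local-comparison logic can be sketched as follows. The cutoff, bandwidth, and student scores are invented for illustration, and a real regression discontinuity analysis would fit regressions on each side of the cutoff rather than compare raw means:

```python
from statistics import mean

CUTOFF = 50       # assumed passing score on the entrance exam
BANDWIDTH = 3     # only compare students within +/- 3 points of the cutoff

# Hypothetical (exam_score, later_outcome) pairs.
students = [(44, 61), (48, 63), (49, 64), (50, 70), (52, 71), (57, 78)]

# Keep only students near the threshold, where groups are most comparable.
near = [(s, y) for s, y in students if abs(s - CUTOFF) <= BANDWIDTH]
treated = [y for s, y in near if s >= CUTOFF]   # just above: received treatment
control = [y for s, y in near if s < CUTOFF]    # just below: did not

effect_estimate = mean(treated) - mean(control)
```

Units just below the cutoff serve as the control group and those just above as the treatment group, mirroring the comparison described above.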

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though the assignment may effectively be random, natural experiments are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable , they can exploit this event after the fact to study the effect of the treatment.

In the Oregon Health Study, for example, the state could not afford to cover everyone deemed eligible for its health insurance program, so it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity , you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

Advantages:

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.

Disadvantages:

  • Lower internal validity than true experiments: without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete, or difficult to access.


A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity, as they can use real-world interventions instead of artificial laboratory settings.



Completely Randomized Design: The One-Factor Approach

David Costello

Completely Randomized Design (CRD) is a research methodology in which experimental units are randomly assigned to treatments without any systematic bias. CRD gained prominence in the early 20th century, largely attributed to the pioneering work of statistician Ronald A. Fisher. His method addressed the inherent variability in experimental units by randomly assigning treatments, thus countering potential biases. Today, CRD serves as an indispensable tool in various domains, including agriculture, medicine, industrial engineering, and quality control analysis.

CRD is particularly favored in situations with limited control over external variables. By leveraging its inherent randomness, CRD neutralizes potentially confounding factors. As a result, each experimental unit has an equal likelihood of receiving any specific treatment, ensuring a level playing field. Such random allocation is pivotal in eliminating systematic bias and bolstering the validity of experimental conclusions.

While CRD may sometimes necessitate larger sample sizes, the improved accuracy and consistency it introduces to results often justify this requirement.
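A balanced completely randomized assignment can be generated with nothing more than a shuffle. The unit names, treatment labels, and seed below are assumptions for illustration:

```python
import random

def crd_assign(units, treatments, seed=7):
    """Assign units to treatments completely at random, as evenly as possible."""
    rng = random.Random(seed)     # seeded only so the illustration is reproducible
    # Build a balanced pool of treatment labels, then shuffle it: randomness
    # is the only rule deciding which unit gets which treatment.
    labels = [treatments[i % len(treatments)] for i in range(len(units))]
    rng.shuffle(labels)
    return dict(zip(units, labels))

plots = [f"unit-{i}" for i in range(12)]
plan = crd_assign(plots, ["A", "B", "C"])   # e.g. three fertilizer treatments
```

Because every unit has the same chance of receiving each treatment, systematic bias in the allocation is ruled out by construction.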

Understanding CRD

At its core, CRD is centered on harnessing randomness to achieve objective experimental outcomes. This approach effectively addresses unanticipated extraneous variables —those not included in the study design but that can still influence the response variable. In the context of CRD, these extraneous variables are expected to be uniformly distributed across treatments, thereby mitigating their potential influence.

A key aspect of CRD is the single-factor experiment. This means that the experiment revolves around changing or manipulating one primary independent variable (or factor) to ascertain its effect on the dependent variable. Consider these examples across different fields:

  • Medical: An experiment might be designed where the independent variable is the dosage of a new drug, and the dependent variable is the speed of patient recovery. Researchers would vary the drug dosage and observe its effect on recovery rates.
  • Agriculture: An agricultural study could alter the amount of water irrigation (independent variable) given to crops and measure the resulting crop yield (dependent variable) to determine the optimal irrigation level.
  • Psychology: A psychologist might introduce different intensities of a visual cue (independent variable) to participants and then measure their reaction times (dependent variable) to understand the cue's influence.
  • Environmental Science: Scientists might introduce different concentrations of a pollutant (independent variable) to a freshwater pond and measure the health and survival rate of aquatic life (dependent variable) in response.
  • Education: In an educational setting, researchers could change the duration of digital learning (independent variable) students receive daily and then observe its effect on test scores (dependent variable) at the end of the term.
  • Engineering: In material science, an experiment might adjust the temperature (independent variable) during the curing process of a polymer and then measure its resultant tensile strength (dependent variable).

For each of these scenarios, only one key factor or independent variable is intentionally varied, while any changes or outcomes in another variable (the dependent variable) are observed and recorded. This distinct focus on a single variable, while keeping all others constant or controlled, underscores the essence of the single-factor experiment in CRD.
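A single-factor CRD of this kind is commonly analyzed with a one-way ANOVA, which compares between-treatment variance to within-treatment variance. Below is a hand-rolled sketch on made-up recovery-time data (in practice one would use a statistics package):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group vs. within-group variance."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    k = len(groups)                      # number of treatment levels
    n = len(all_obs)                     # total number of observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical recovery times (days) under three drug dosages.
low, medium, high = [12, 11, 13], [9, 10, 8], [6, 7, 5]
f_stat = one_way_anova_f([low, medium, high])
```

A large F statistic indicates that differences between the dosage groups are big relative to the random variation within each group, which is the evidence CRD is designed to isolate.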

Advantages of CRD

Understanding the strengths of Completely Randomized Design is pivotal for effectively applying this research tool and interpreting results accurately. Below is an exploration of the benefits of employing CRD in research studies.

  • Simplicity: One of the most appealing features of CRD is its straightforwardness. Focusing on a single primary factor, CRD is easier to understand and implement compared to more complex research designs.
  • Flexibility: CRD enhances versatility by allowing the inclusion of various experimental units and treatments through random assignment, enabling researchers to explore a range of variables.
  • Robustness: Despite its simplicity, CRD stands as a robust research tool. The consistent use of randomization minimizes biases and uniformly distributes the effects of uncontrolled variables across all groups, contributing to the reliability of the results.
  • Generalizability: Proper application of CRD enables the extension of research findings to a broader population. The minimization of selection bias , thanks to random assignment, increases the probability that the sample closely represents the larger population.

Disadvantages of CRD

While CRD is marked by simplicity, flexibility, robustness, and enhanced generalizability, it is essential to carefully consider its limitations. A thoughtful analysis of these aspects will guide researchers in making informed decisions about the applicability of CRD to their specific research context.

  • Ignoring Nuisance Variables: CRD operates primarily under the assumption that all treatments are equivalent aside from the independent variable. If strong nuisance factors vary systematically across treatments, this assumption becomes a limitation, making CRD less suitable for studies where nuisance variables significantly impact the results.
  • Need for Large Sample Size: The pooling of all experimental units into one extensive set necessitates a larger sample size, potentially leading to increased time, cost, and resource investment.
  • Inefficiency in Some Cases: CRD might demonstrate statistical inefficiency with significant within-treatment group variability . In such cases, other designs that account for this variability may offer enhanced efficiency.

Differentiating CRD from other research design methods

CRD stands out in the realm of research designs due to its foundational simplicity. While its essence lies in the random assignment of experimental units to treatments without any systematic bias, other designs introduce varying layers of complexity tailored to specific experimental needs.

For instance, consider the Randomized Block Design (RBD). Unlike the straightforward approach of CRD, RBD divides experimental units into homogeneous blocks, based on known sources of variability, before assigning treatments. This method is especially useful when there's an identifiable source of variability that researchers wish to control for. Similarly, the Latin Square Design, while also involving random assignment, operates on a grid system to simultaneously control for two lurking variables, adding another dimension of complexity not found in CRD.

Factorial Design investigates the effects and interactions of multiple independent variables. This design can reveal interactions that might be overlooked in simpler designs. Then there's the Crossover Design , often used in medical trials. Unlike CRD, where each unit experiences only one treatment, in Crossover Design, participants receive multiple treatments over different periods, allowing each participant to serve as their own control.

The choice of research design, whether it be CRD, RBD, Latin Square, or any of the other methods available, is fundamentally guided by the nature of the research question , the characteristics of the experimental units, and the specific objectives the study aims to achieve. However, it's the inherent simplicity and flexibility of CRD that often makes it the go-to choice, especially in scenarios with many units or treatments, where intricate stratification or blocking isn't necessary.

Each of these methods involves its own trade-offs between simplicity, control, and resource demands.

While CRD's simplicity and flexibility make it a popular choice for many research scenarios, the optimal design depends on the specific needs, objectives, and contexts of the study. Researchers must carefully consider these factors to select the most suitable research design method.

The role of CRD in mitigating extraneous variables

Within the framework of experimental research, extraneous variables persistently challenge the validity of findings, potentially compromising the established relationship between independent and dependent variables . CRD is a methodological safeguard that systematically addresses these extraneous variables. Below, we describe specific types of extraneous variables and how CRD counteracts their potential influence:

  • Nuisance variables. Definition: Variables that induce variance in the dependent variable, yet are not of primary academic interest. While they don't muddle the relationship between the primary variables, their presence can augment within-group variability, reducing statistical power. CRD's Countermeasure: Through the mechanism of random assignment, CRD ensures an equitably distributed influence of nuisance variables across all experimental conditions. This distribution, theoretically, leads to mutual nullification of their effects when assessing the efficacy of treatments.
  • Lurking variables. Definition: Variables not explicitly incorporated within the study design but that can influence its outcomes. Their impact often manifests post hoc, rendering them alternative explanations for observed phenomena. CRD's Countermeasure: Random assignment intrinsic to CRD assures a uniform distribution of these lurking variables across experimental conditions. This diminishes the probability of them systematically influencing one group, thus safeguarding the experiment's conclusions.
  • Confounding variables. Definition: Variables that not only influence the dependent variable but also correlate with the independent variable. Their simultaneous influence can mislead interpretations of causality. CRD's Countermeasure: The tenet of random assignment inherent in CRD ensures an equitable distribution of potential confounders among groups. This bolsters confidence in attributing observed effects predominantly to the experimental treatments.
  • Controlled variables. Definition: Variables deliberately held constant to preserve experimental integrity and to ensure they do not introduce variability into the experiment. CRD's Countermeasure: While CRD focuses on randomization, the design inherently assumes that controlled variables remain constant across all experimental units. By maintaining these constants, CRD ensures that the focus remains solely on the treatment effects, further validating the experiment's findings.

The foundational principle underpinning the Completely Randomized Design—randomization—serves as a bulwark against the influences of extraneous variables. By uniformly distributing these variables across experimental conditions, CRD enhances the validity and reliability of experimental outcomes. However, researchers should exercise caution and continuously evaluate potential extraneous influences, even in randomized designs.

Selecting the independent variable

The selection of the independent variable is crucial for research design. This pivotal step not only shapes the direction and quality of the research but also underpins the understanding of causal relationships within the studied system, influencing the dependent variable or response. When choosing this essential component of experimental design, several critical considerations emerge:

  • Relevance: Paramount to the success of the experiment is the variable's direct relevance to the research query. For instance, in a botanical study of phototropism, the light's intensity or duration would naturally serve as the independent variable.
  • Measurability: The chosen variable should be quantifiable or categorizable, enabling distinctions between its varying levels or types.
  • Controllability: The research environment must allow for steadfast control over the variable, ensuring extraneous influences are kept at bay.
  • Ethical Considerations: In disciplines like social sciences or medical research, it's vital to consider the ethical implications . The chosen variable should withstand ethical scrutiny, safeguarding the well-being and rights of participants.

Identifying the independent variable necessitates a methodical and structured approach where each step aligns with the overarching research objective:

  • Review Literature: Thoroughly review existing literature to provide invaluable insights into past research and highlight unexplored areas.
  • Define the Scope: Clearly delineating research boundaries is crucial. For example, when studying dietary impacts on metabolic health, the variable could span from diet types (like keto, vegan, Mediterranean) to specific nutrients.
  • Determine Levels of the Variable: This involves understanding the various levels or categories the independent variable might have. In educational research, one might look beyond simply "innovative vs. conventional methods" to a broader range of teaching techniques.
  • Consider Potential Outcomes: Anticipating possible outcomes based on variations in the independent variable is beneficial. If potential outcomes seem too vast, the variable might need further refinement.

In academic discourse, while CRD is praised for its rigor and clarity, the effectiveness of the design relies heavily on the meticulous selection of the independent variable. Making this choice with thorough consideration ensures the research offers valuable insights with both academic and wider societal implications.

Applications of CRD

CRD has found wide and varied applications in several areas of research. Its versatility and fundamental simplicity make it an attractive option for scientists and researchers across a multitude of disciplines.

CRD in agricultural research

Agricultural research was among the earliest fields to adopt the use of Completely Randomized Design. The broad application of CRD within agriculture not only encompasses crop improvement but also the systematic analysis of various fertilizers, pesticides, and cropping techniques. Agricultural scientists leverage the CRD framework to scrutinize the effects on yield enhancement and bolstered disease resistance. The fundamental randomization in CRD effectively mitigates the influence of nuisance variables such as soil variations and microclimate differences, ensuring more reliable and valid experimental outcomes.

Additionally, CRD in agricultural research paves the way for robust testing of new agricultural products and methods. The unbiased allocation of treatments serves as a solid foundation for accurately determining the efficacy and potential downsides of innovative fertilizers, genetically modified seeds, and novel pest control methods, contributing to informed decision-making and policy formulation in agricultural development.

However, the limitations of CRD within the agricultural context warrant acknowledgment. While it offers an efficient and straightforward approach for experimental design, CRD may not always capture spatial variability within large agricultural fields adequately. Such unaccounted variations can potentially skew results, underscoring the necessity for employing more intricate experimental designs, such as the Randomized Complete Block Design (RCBD), where necessary. This adaptation enhances the reliability and generalizability of the research findings, ensuring their applicability to real-world agricultural challenges.

CRD in medical research

The fields of medical and health research substantially benefit from the application of Completely Randomized Design, especially in executing randomized control trials. Within this context, participants, whether patients or others, are randomly assigned to either the treatment or control groups. This structured random allocation minimizes the impact of extraneous variables, ensuring that the groups are comparable. It fortifies the assertion that any discernible differences in outcomes are genuinely attributable to the treatment being analyzed, enhancing the robustness and reliability of the research findings.

CRD's randomized nature in medical research allows for a more objective assessment of varied medical treatments and interventions. By mitigating the influence of extraneous variables, researchers can more accurately gauge the effectiveness and potential side effects of novel medical approaches, including pharmaceuticals and surgical techniques. This precision is crucial for the continual advancement of medical science, offering a solid empirical foundation for the refinement of treatments that improve health outcomes and patient quality of life.

However, like other fields, the application of CRD in medical research has its limitations. Despite its effectiveness in controlling various factors, CRD may not always consider the complexity of human health conditions where multiple variables often interact in intricate ways. Hence, while CRD remains a valuable tool for medical research, it is crucial to apply it judiciously and alongside other research designs to ensure comprehensive and reliable insights into medical treatments and interventions.

CRD in industrial engineering

In industrial engineering, Completely Randomized Design plays a significant role in process and product testing, offering a reliable structure for the evaluation and improvement of industrial systems. Engineers often employ CRD in single-factor experiments to analyze the effects of a particular factor on a certain outcome, enhancing the precision and objectivity of the assessment.

For example, to discern the impact of varying temperatures on the strength of a metal alloy, engineers might utilize CRD. In this scenario, the different temperatures represent the single factor, and the alloy samples are randomly allocated to be tested at each designated temperature. This random assignment minimizes the influence of extraneous variables, ensuring that the observed effects on alloy strength are primarily attributable to the temperature variations.

CRD's implementation in industrial engineering also assists in the optimization of manufacturing processes. Through random assignment and structured testing, engineers can effectively evaluate process parameters, such as production speed, material quality, and machine settings. By accurately assessing the influence of these factors on production efficiency and product quality, engineers can implement informed adjustments and enhancements, promoting optimal operational performance and superior product standards. This systematic approach, anchored by CRD, facilitates consistent and robust industrial advancements, bolstering overall productivity and innovation in industrial engineering.

Despite these advantages, it is important to acknowledge the limitations of CRD in industrial engineering contexts. The design is efficient for single-factor experiments but may falter in experiments involving multiple factors and interactions, which are common in industrial settings. This limitation underscores the importance of combining CRD with other experimental designs, such as factorial or randomized block designs. Doing so helps navigate the complex landscape of industrial engineering research, ensuring that insights are comprehensive, accurate, and actionable for continuous innovation in industrial operations.

CRD in quality control analysis

Completely Randomized Design is also beneficial in quality control analysis, where ensuring the consistency of products is paramount.

For instance, a manufacturer keen on minimizing product defects may deploy CRD to empirically assess the effectiveness of various inspection techniques. By randomly assigning different inspection methods to identical or similar production batches, the manufacturer can gather data regarding the most effective techniques for identifying and mitigating defects, bolstering overall product quality and consumer satisfaction.
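The batch-assignment step in this quality-control scenario can be sketched as follows. The method names, batch counts, and seed are hypothetical, chosen only to illustrate how random assignment spreads batches evenly across inspection techniques.

```python
import random

# Hypothetical scenario: 15 comparable production batches, three
# inspection methods under comparison in a CRD.
methods = ["visual", "x_ray", "ultrasonic"]
batches = [f"batch_{i:02d}" for i in range(15)]

rng = random.Random(2024)        # fixed seed so the plan is reproducible
rng.shuffle(batches)
# Deal the shuffled batches round-robin into three equal groups of five
assignment = {m: sorted(batches[i::3]) for i, m in enumerate(methods)}

# Each method inspects only its randomly assigned batches, so any
# difference in defect-detection rates recorded afterwards cannot be
# explained by a systematic batch-to-method pairing.
for method, group in assignment.items():
    print(method, group)
```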

Furthermore, the utility of CRD in quality control extends to the analysis of materials, machinery settings, or operational processes that are pivotal to final product quality. This design enables organizations to rigorously test and compare assorted conditions or settings, ensuring the selection of parameters that optimize both quality and efficiency. This approach to quality analysis not only bolsters the reliability and performance of products but also significantly augments the optimization of organizational resources, curtailing wastage and improving profitability.

However, similar to other CRD applications, it is crucial to understand its limitations. While CRD can significantly aid in the analysis and optimization of various aspects of quality control, its effectiveness may be constrained when dealing with multi-factorial scenarios with complex interactions. In such situations, other experimental designs, possibly in tandem with CRD, might offer more robust and comprehensive insights, ensuring that quality control measures are not only effective but also adaptable to evolving industrial and market demands.

Future applications and emerging fields for CRD

The breadth of applications for Completely Randomized Design continues to expand. Emerging fields such as data science, business analytics, and environmental studies are increasingly recognizing the value of CRD in conducting reliable and uncomplicated experiments. In the realm of data science, CRD can be invaluable in assessing the performance of different algorithms, models, or data processing techniques. It enables researchers to randomize the assignment of datasets or test conditions to the approaches under comparison, minimizing biases and providing a clearer understanding of the real-world applicability and effectiveness of various data-centric solutions.

In the domain of business analytics, CRD is paving the way for robust analysis of business strategies and initiatives. Businesses can employ CRD to randomly assign strategies or processes across various departments or teams, allowing for a comprehensive assessment of their impact. The insights from such assessments empower organizations to make data-driven decisions, optimizing their operations and enhancing overall productivity and profitability. This approach is particularly crucial in today's business environment, characterized by rapid change, intense competition, and escalating customer expectations, where informed and timely decision-making is a key determinant of success.

Moreover, in environmental studies, CRD is increasingly being used to evaluate the impact of various factors on environmental health and sustainability. For example, researchers might use CRD to study the effects of different pollutants, conservation strategies, or land use patterns on ecosystem health. The randomized design ensures that the conclusions drawn are robust and reliable, providing a solid foundation for the development of policies and initiatives. As environmental concerns continue to mount, the role of reliable experimental designs like CRD in facilitating meaningful research and informed policy-making cannot be overstated.

Planning and conducting a CRD experiment

A CRD experiment involves meticulous planning and execution, outlined in the following structured steps. Each phase, from the preparatory steps to data collection and analysis, plays a pivotal role in bolstering the integrity and success of the experiment, ensuring that the findings stand as a valuable contribution to scientific knowledge and understanding.

  • Selecting Participants in a Random Manner: The heart of a CRD experiment is randomness. Regardless of whether the subjects are human participants, animals, plants, or objects, their selection must be truly random. This level of randomness ensures that every participant has an equal likelihood of being assigned to any treatment group, which plays a crucial role in eliminating selection bias.
  • Understanding and Selecting the Independent Variable: This is the variable of interest – the one that researchers aim to manipulate to observe its effects. Identifying and understanding this factor is pivotal. Its selection depends on the experiment's primary research question or hypothesis, and its clear definition is essential to ensuring the experiment's clarity and success.
  • The Process of Random Assignment in Experiments: Following the identification of subjects and the independent variable, researchers must randomly allocate subjects to various treatment groups. This process, known as random assignment, typically involves using random number generators or other statistical tools, ensuring that the principle of randomness is upheld.
  • Implementing the Single-factor Experiment: After random assignment, researchers can launch the main experiment. At this stage, they introduce the independent variable to the designated treatment groups, ensuring that all other conditions remain consistent across groups. The goal is to make certain that any observed effect or change is attributed only to the manipulation of the independent variable.
  • Data Cleaning and Preparation: The first step post-collection is to prepare and clean the data. This process involves rectifying errors, handling missing or inconsistent data, and eradicating duplicates. Employing tools like statistical software or languages such as Python and R can be immensely helpful. Handling outliers and maintaining consistency throughout the dataset is essential for accurate subsequent analysis.
  • Statistical Analysis Methods: The next step involves analyzing the data using appropriate statistical methodologies, dependent on the nature of the data and research questions. Analysis can range from basic descriptive statistics to complex inferential statistics or even advanced statistical modeling.
  • Interpreting the Results: Analysis culminates in the interpretation of results, wherein researchers draw conclusions based on the statistical outcomes. This stage is crucial in CRD, as it determines if observed effects can be attributed to the independent variable's manipulation or if they occurred purely by chance. Apart from statistical significance, the practical implications and relevance of the results also play a vital role in determining the experiment's success and potential real-world applications.
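For the statistical-analysis step above, a single-factor CRD is conventionally analysed with a one-way ANOVA. A minimal pure-Python sketch of the F-statistic computation follows; the alloy-strength readings are hypothetical values invented for illustration.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA over a CRD data set.

    `groups` maps each treatment level to its list of responses.
    """
    all_obs = [x for g in groups.values() for x in g]
    grand = mean(all_obs)
    k = len(groups)                  # number of treatment levels
    n = len(all_obs)                 # total number of observations
    group_means = {t: mean(g) for t, g in groups.items()}
    # Between-treatments sum of squares (variation explained by treatment)
    ss_between = sum(len(g) * (group_means[t] - grand) ** 2
                     for t, g in groups.items())
    # Within-treatments sum of squares (residual noise)
    ss_within = sum((x - group_means[t]) ** 2
                    for t, g in groups.items() for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical alloy-strength readings (MPa) at three test temperatures
data = {
    200: [512, 508, 515, 510],
    300: [495, 499, 492, 497],
    400: [470, 474, 468, 471],
}
f_stat = one_way_anova_f(data)
print(f"F = {f_stat:.1f}")
```

A large F relative to the critical value of the F-distribution with (k − 1, n − k) degrees of freedom indicates that the treatment means differ by more than chance alone would explain; in practice the p-value would come from a statistics library such as `scipy.stats.f_oneway` rather than a hand-rolled routine like this one.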

Navigating common challenges in CRD

While the Completely Randomized Design offers numerous advantages, researchers often encounter specific challenges when implementing it in real-world experiments. Recognizing these challenges early and being prepared with strategies to address them can significantly improve the integrity and success of the CRD experiment. Let's delve into some of the most common challenges and explore potential solutions:

  • Lack of Homogeneity: One foundational assumption of CRD is the homogeneity of experimental units. However, in reality, there may be inherent variability among units. To mitigate this, researchers can use stratified sampling or consider employing a randomized block design.
  • Improper Randomization: The essence of CRD is randomization. However, it's not uncommon for some researchers to inadvertently introduce biases during the assignment. Utilizing computerized random number generators or statistical software can help ensure true randomization.
  • Limited Number of Experimental Units: Sometimes, the available experimental units might be fewer than required for a robust experiment. In such cases, using a larger number of replications can help, albeit at the cost of increased resources.
  • Extraneous Variables: These external factors can influence the outcome of an experiment. They make it hard to attribute observed effects solely to the independent variable. Careful experimental design, pre-experimental testing, and post-experimental analysis can help identify and control these extraneous variables.
  • Overlooking Practical Significance: Even if a CRD experiment yields statistically significant results, these might not always be practically significant. Researchers need to assess the real-world implications of their findings, considering factors like cost, feasibility, and the magnitude of observed effects.
  • Data-related Challenges: From missing data to outliers, data-related issues may skew results. Regular data cleaning, rigorous validation, and employing robust statistical methods can help address these challenges.
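The first mitigation above — moving from a completely randomized design to a randomized block design when units are heterogeneous — can be sketched as follows. The block labels, unit names, and treatment codes are hypothetical placeholders.

```python
import random

def randomized_block_assign(blocks, treatments, seed=None):
    """Assign treatments within each block: every block receives every
    treatment exactly once, with the pairing randomized per block.

    `blocks` maps a block label to its units, one unit per treatment.
    """
    rng = random.Random(seed)
    assignment = {}
    for block, units in blocks.items():
        order = treatments[:]        # copy, then shuffle independently
        rng.shuffle(order)
        assignment[block] = dict(zip(order, units))
    return assignment

# Hypothetical: units grouped into blocks by a known nuisance factor
# (e.g. raw-material lot), with three treatments applied in each block
blocks = {
    "lot_A": ["u1", "u2", "u3"],
    "lot_B": ["u4", "u5", "u6"],
}
treatments = ["T1", "T2", "T3"]
plan = randomized_block_assign(blocks, treatments, seed=1)
print(plan)
```

Because every treatment appears once in every block, lot-to-lot variability is separated from the treatment effect instead of inflating the error term, which is precisely why blocking is the standard remedy when homogeneity fails.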

While CRD is a powerful tool in experimental research, its successful implementation hinges on the researcher's ability to anticipate, recognize, and navigate challenges that might arise. By being proactive and employing strategies to mitigate potential pitfalls, researchers can maximize the reliability and validity of their CRD experiments, ensuring meaningful and impactful results.

In summary, the Completely Randomized Design holds a pivotal place in the field of research owing to its simplicity and straightforward approach. Its essence lies in the unbiased random assignment of experimental units to various treatments, ensuring the reliability and validity of the results. Although it does not control for known nuisance variation the way blocked designs do, and often requires larger sample sizes, its ease of implementation frequently outweighs these drawbacks, solidifying it as a preferred choice for researchers across many fields.

Looking ahead, the future of CRD remains bright. As research continues to evolve, we anticipate the integration of CRD with more sophisticated design techniques and advanced analytical tools. This synergy will likely enhance the efficiency and applicability of CRD in varied research contexts, perpetuating its legacy as a fundamental research design method. While other designs might offer more control and complexity, the fundamental simplicity of CRD will continue to hold significant value in the rapidly evolving research landscape.

Moving forward, it is imperative to champion continuous learning and exploration in the field of CRD. Engaging in educational opportunities, staying abreast of the latest research and advancements, and actively participating in pertinent discussions and forums can markedly enrich understanding and expertise in CRD. Embracing this ongoing learning journey will not only bolster individual research skills but also make a significant contribution to the broader scientific community, fueling innovation and discovery in numerous fields of study.

Header image by Alex Shuper.
