Data Manipulation and Analysis in Stata

16 Hypothesis tests

16.1 For two categorical variables

A common procedure is to test for an association between two categorical variables. We can illustrate this procedure using a tabulation with a \(\chi{}^2\) statistic. (Of course, the variable sex is not necessarily binary valued.)
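A minimal sketch, assuming the two variables are named sex and class as in the exercise below (the expected option adds the expected cell frequencies):

tabulate sex class, chi2 expected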

16.2 Exercise

Create the table with \(\chi{}^2\) statistic and expected values as above. Should you reject the H0 that sex and class are not associated?

16.3 For one continuous and one categorical variable of two levels

If we have one continuous numeric variable and one two-level categorical variable (such as employed vs unemployed) that divides our data into two groups, we can ask whether the mean of the continuous variable differs between the groups (with H0 being that it does not).

If our two groups are independent, then we must first ask if the variance in the data is more or less equal between groups. The null hypothesis is that the variances are equal. This is tested by a comparison of variances using Stata's robvar command. We can test the maths scores by sex in our data:
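A sketch, assuming the variables are named maths and sex:

robvar maths, by(sex)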

Knowing whether or not we are dealing with groups displaying (more or less) equal variance in the variable of interest, we can go on to conduct an independent samples t-test. The code is:
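A sketch, continuing with maths scores by sex:

ttest maths, by(sex)

If robvar had instead led us to reject equal variances, we would add the unequal option, i.e. ttest maths, by(sex) unequal.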

(Assuming that we have interpreted the results of robvar to mean the variance in maths for the two groups is equal).

16.4 Exercise

Run the robvar procedure above but for the history and sex variables. What are the three W statistics produced? Which of them tests that the variances are equal for a comparison of means? Is there strong enough evidence in this case to reject the null hypothesis?

Use the ttest command to test the null hypothesis that

\[\mu_{\text{english, female students}} = \mu_{\text{english, male students}}\]

What conclusion do you draw?

16.5 The paired samples t-test

We can also compare the same group of subjects on two measures to see if the means differ. In this case there is no need to check the variances before conducting the test. For example, we could test whether or not mean scores in English and History differ (with the null hypothesis that they do not):
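Assuming the score variables are named english and history, the paired test is:

ttest english == history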

Using this procedure, how do English scores compare to History scores and how do English scores compare to Mathematics scores?

16.6 One continuous and one categorical variable of more than two levels

We can compare the mean of avxm by teacher, which is to say test the null hypothesis

\[\mu_{\text{avxm, teacher one}} = \mu_{\text{avxm, teacher two}} = \mu_{\text{avxm, teacher three}}\]

16.6.1 One way ANOVA and post-hoc testing

The Stata command to test the null hypothesis above is:
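A sketch, assuming the grouping variable is named teacher; the tabulate option requests the summary statistics and bonferroni the pairwise comparisons described below:

oneway avxm teacher, bonferroni tabulate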

This command produces summary statistics, the ANOVA F statistic with its associated probability, and other quantities calculated as part of the ANOVA. In the version given above, we have included a tabulation of pairwise comparisons using the Bonferroni correction. We can separately examine the pairwise comparisons if we wish with:
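For example, using pwmean (the effects option adds tests and confidence intervals for each pairwise difference):

pwmean avxm, over(teacher) mcompare(bonferroni) effects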

This method does not display the ANOVA table itself and the mcompare() option gives us access to a slightly different range of correction options.

16.7 Two continuous variables

16.8 Correlation

Analysis of two continuous variables begins with calculating the Pearson correlation coefficient, R. This statistic ranges from:

  • -1 indicating an inverse or negative correlation
  • 0 indicating no correlation
  • +1 indicating a positive correlation

We should take note that a correlation has not only magnitude and direction, but also an associated hypothesis test: that the true correlation is 0. This test gives a p-value associated with R.

The code to compute R in Stata is:
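A minimal sketch; pwcorr with the sig option prints the p-value beneath each coefficient:

pwcorr var1 var2, sig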

This computes R for var1 and var2 . If you do not specify a variable list, Stata computes correlations between all non-string variables in your data set.

16.9 Exercise

Compute Pearson correlations with significance values for the pairs

  • english-maths
  • english-history

Explain to your learning partner what the results mean to you.

16.9.1 Simple visualisation of correlation

The simplest way to visualise a correlation is with a scatter plot. You may wish to consider, based on your plans for further analysis, which variable you wish to assign to which axis. To create a scatter plot you can start with:
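For example, with the english and maths variables from the exercises above:

scatter english maths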

To add the trend line:
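One way is to overlay a linear fit on the scatter:

twoway (scatter english maths) (lfit english maths)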

And add a confidence interval:
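Replacing lfit with lfitci draws the fit with a confidence band; listing it first keeps the band behind the points:

twoway (lfitci english maths) (scatter english maths)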

Now you can add labels, titles and so on:
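For instance:

twoway (lfitci english maths) (scatter english maths), title("English and Maths scores") xtitle("Maths score") ytitle("English score")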

Stata has a very large range of graphing commands and options. While they are reasonably complicated, a good way to explore them is through this gallery .

16.10 Exercise

Using any resources you can find, try to find more Stata graph schemes and try at least three on the code above.


An updated overview of multiple hypothesis testing commands in Stata

David McKenzie

Just over a year ago, I wrote a blog post comparing different user-written Stata packages for conducting multiple hypothesis test corrections in Stata. Several of the authors of those packages have generously upgraded the commands to introduce more flexibility and cover more use cases, and so I thought I would provide an updated post that discusses the current versions. I’m also providing a sample dataset and do file that shows how I implemented the different commands. I again acknowledge my gratitude to the authors of these commands who have provided these public goods.

What is the problem we are trying to deal with here?

Suppose we have run an experiment with four treatments ( treat1, treat2, treat3, treat4 ), and are interested in examining impacts on a range of outcomes ( y1, y2, y3, y4, y5 ). In my empirical example here, the outcomes are firm survival (y1), and four different types of business practice indices (y2, y3, y4, and y5) and the treatments are different ways of helping firms learn new practices.

We run the following treatment regressions for outcome j:

Y(j) = a + b1*treat1 + b2*treat2 + b3*treat3 + b4*treat4 + c1*y(j,0) + d'X + e(j)

Here we have an ANCOVA specification, so we control for the baseline value of the outcome variable in each regression, y(j,0) – except for y1 (firm survival), since all firms were alive at time 0 and so there is no variation in the baseline value of this outcome – and we control for randomization strata (the X's here).

With 5 outcomes and 4 treatments, we have 20 hypothesis tests and thus 20 p-values, as shown in the table below:

MHT Figure 1

Suppose that none of the treatments have any effect on any outcome (all null hypotheses are true), and that the outcomes are independent. If we just test the hypotheses one by one, the probability of one or more false rejections when using a critical value of 0.05 is 1-0.95^20 = 64% (and using a critical value of 0.10 it is 88%). As a result, in order to reduce the likelihood of these false rejections, we want some way of adjusting for the fact that we are testing multiple hypotheses. That is what these different methods do.
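You can verify these figures in Stata:

display 1 - 0.95^20
display 1 - 0.90^20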

Approaches for Controlling the False Discovery Rate (FDR): Anderson’s sharpened q-values

One of the most popular ways to deal with this issue is to use Michael Anderson’s code to compute sharpened False Discovery Rate (FDR) q-values. The FDR is the expected proportion of rejections that are type I errors (false rejections). Anderson discusses this procedure here .

This code is very easy to use. You just need to save the p-values, read them as data into Stata, and run his code to get the sharpened q-values. For my little example, they are shown in the table below.

MHT Figure 2
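As a rough illustration of the idea only (this is not Anderson's code), the following sketch reads p-values from an assumed file pvals.dta and computes plain Benjamini-Hochberg q-values; Anderson's sharpened q-values are a more powerful two-stage refinement of this calculation:

* assumed: pvals.dta holds the 20 p-values in a variable named pval
use pvals, clear
sort pval
gen rank = _n
gen q = pval * _N / rank
* enforce monotonicity, working down from the largest p-value
gsort -rank
replace q = min(q, q[_n-1]) if _n > 1
sort pval
list pval q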

A few points to note:

  • A key reason for the popularity of this approach (in addition to its simplicity) is seen in the comparison of the original p-values to sharpened q-values. If we were to apply a Bonferroni adjustment, we would multiply the p-values by the number of tests (20) and then cap at 1.000. Then, for example, the p-value of 0.031 for outcome Y3 and treatment 1 would be adjusted to 0.62. Instead the sharpened q-value is 0.091. That is, this approach has a lot greater power than many other methods.

  • Because this takes the p-values as inputs, you have a lot of flexibility in what goes into each treatment regression: for example, you can have some regressions with clustered errors and some without, some with controls and some without, etc.

  • As Anderson notes in his code, sharpened q-values can actually be LESS than unadjusted p-values in some cases when many hypotheses are rejected, because if there are many true rejections, you can tolerate several false rejections too and still keep the false discovery rate low. We see an example of this for outcome Y1 and treatment 1 here.

  • A drawback of this method is that it does not account for any correlations among the p-values. For example, in my application, if a treatment improves business practices Y2, then we might think it is likely to have improved business practices Y3, Y4, and Y5. Anderson notes that in simulations the method seems to also work well with positively dependent p-values, but if the p-values have negative correlations, a more conservative approach is needed.

Approaches for Controlling the Familywise Error Rate (FWER)

An alternative to controlling the FDR is to control the familywise error rate (FWER), which is the probability of making any type I error. Many readers will be familiar with the Bonferroni correction noted above, which controls the FWER by adjusting the p-values by the number of tests. So if your p-value is 0.043, and you have done 20 tests, your adjusted p-value will be 20*0.043 = 0.86. As you can see, when the number of tests gets large, this correction can massively increase your p-values. Moreover, since it does not take account of any dependence among the outcomes, it can be way too conservative. All the methods I discuss here instead use a bootstrapping or re-sampling approach to incorporate information about the joint dependence structure of the different tests, thereby allowing p-values to be correlated and the adjusted p-values to be less conservative.

The table below adds FWER-adjusted p-values from four of these commands (mhtexp, mhtreg, wyoung, and rwolf2), all based on 3,000 bootstrap replications. I discuss each of these commands in turn, but you will see that:

  • All four methods give pretty similar adjusted p-values to one another here, with the small differences partly reflecting whether the method allows for different control variables to be used in each equation (discussed below).

  • While not as large as Bonferroni p-values would be, you can see the adjusted p-values are much larger than the original p-values and also typically quite a bit larger than the sharpened q-values. This reflects the issue of using FWER with many comparisons – in order to avoid making any type I error, the adjustments become increasingly severe as you add more and more outcomes or treatments. That is, power becomes low. In contrast, the FDR approach is willing to accept some type I error in exchange for more power. Which is more suitable depends on how costly false rejections are versus power to examine particular effects.

MHT Figure 3

Let me then discuss the specifics of each of these four commands:

mhtexp (and mhtexp2)

This code implements a procedure set out by John List, Azeem Shaikh and Yang Xu (2016) , and can be obtained by typing ssc install mhtexp

List, Shaikh and Atom Vayalinkal (2021) have an updated paper and code for mhtexp2 that builds in adjustment for baseline covariates using the Lin approach. It builds on work by Romano and Wolf, and uses a bootstrapping approach to incorporate information about the joint dependence structure of the different tests – that is, it allows for the p-values to be correlated.

  • This command allows for multiple outcomes and multiple treatments, but mhtexp does not allow for the inclusion of control variables (so no controlling for baseline values of the outcome of interest, or for randomization strata fixed effects), and does not allow for clustering of standard errors. mhtexp2 does allow for the inclusion of control variables, but currently requires them to be the same for every equation, and still does not allow for clustering of standard errors.

  • It assumes you are running simple treatment regressions, so does not allow for multiple testing adjustments coming from using e.g. ivreg, reghdfe, rdrobust, etc.

  • The command is then straightforward. Here I have created a variable treatment which takes value 0 for the control group, 1 for treat 1, 2 for treat 2, 3 for treat 3, and 4 for treat 4. Then the command is:

mhtexp Y1 Y2 Y3 Y4 Y5, treatment(treatment) bootstrap(3000)

So note that my mhtexp p-values above are from treatment regressions that don't incorporate randomization strata fixed effects or the baseline values of the outcomes.

mhtreg

This code was written by Andreas Steinmayr to extend the mhtexp command to allow for the inclusion of different controls in different regressions, and for clustered randomization.

To get this: type

ssc install mhtreg

A couple of limitations to be aware of:

  • The syntax is a bit awkward with multiple treatments – it only does corrections for the first regressor in each equation, so if you want to test for multiple treatments, you have to repeat the regression and change the order in which treatments are listed. E.g. to test the 20 different outcomes in my example, the code is:

MHT Figure 4

  • It requires each treatment regression to actually be a regression – that is, estimated with the reg command, not commands like areg or reghdfe, nor ivreg or probit or something else – so you will need to add any fixed effects as regressors, and you had better be doing ITT and not TOT or other non-regression estimation.

wyoung

This is one of the two commands that have improved the most since my first post, and it now does pretty much everything you would like it to do. This command, programmed by Julian Reif, calculates Westfall-Young stepdown adjusted p-values, which also control the FWER and allow for dependence amongst p-values. Documentation and latest updates are in the GitHub repository. It implements the Westfall-Young method, which uses bootstrap resampling to allow for dependence across outcomes. This method is a precursor to the Romano-Wolf procedure that the other three commands I note here are based on. Romano and Wolf note that the Westfall-Young procedure requires an additional assumption of subset pivotality, which can be violated in certain settings, and so the Romano-Wolf procedure is more general. For multiple test correction with OLS and experimental analysis, this assumption should hold fine, and as we see in my example, both methods give similar results here.

To get this command, type

net install wyoung, from(" https://raw.githubusercontent.com/reifjulian/wyoung/master ") replace

  • A nice feature of this command is that it allows you to have different control variables in different equations. So you can control for the baseline variable of each outcome, for example. It also allows for clustered randomization and can do bootstrap re-sampling that accounts for both randomization strata and clustered assignment.

  • The command allows for different Stata commands in different equations – so you could have one equation be estimated using IV, another using reghdfe, another with a regression discontinuity, etc.

  • The updates made now allow for multiple treatments and for a much cleaner syntax for including different control variables in different regressions. One point to note is that if there is an equation where you do not want to include a control, but you do want to include controls in others, the current command seems to require a hack where you just create a constant as the control variable for the equations where you do not want controls.

gen constant = 1

#delimit ;
wyoung Y1 Y2 Y3 Y4 Y5,
    cmd(areg OUTCOMEVAR treat1 treat2 treat3 treat4 CONTROLVARS, r a(strata))
    familyp(treat1 treat2 treat3 treat4)
    controls("constant" "b_Y2" "b_Y3" "b_Y4" "b_Y5")
    bootstraps(1000) seed(123);
#delimit cr

rwolf2

This command calculates Romano-Wolf stepdown adjusted p-values, which control the FWER and allow for dependence among p-values by bootstrap resampling. The original rwolf command was developed by Romano and Wolf along with Damian Clarke. It is the command that I think has improved the most with the recent update by Damian Clarke to rwolf2, described here. The updates expand the capabilities of the command to deal with what I saw as the main limitations of the original command, and it now allows for multiple treatments, different commands, different controls in different regressions, and also allows for clustered standard errors. Here is an example of the syntax:

MHT Figure 5

Since this now does everything that the Westfall-Young command does, while also being able to handle cases where subset pivotality does not hold, it currently seems the theoretically best option for FWER correction at the moment.

Testing the null of complete irrelevance: randcmd

This is a Stata command written by Alwyn Young that calculates randomization inference p-values, based on his recent QJE paper. It is doing something different to the above approaches. Rather than adjusting each individual p-value for multiple testing, it conducts a joint test of the sharp hypothesis that no treatment has any effect, and then uses the Westfall-Young approach to test this across equations. So in my example, it tests that there is no effect of any treatment on outcome Y1 (p=0.403), Y2 (p=0.045), etc., and then also tests the null of complete irrelevance – that no treatment had any effect on any outcome (p=0.022 here). The command is very flexible in allowing for each equation to have different controls, different samples, having clustered standard errors, etc. But it is testing a different hypothesis than the other approaches above.

MHT Figure 6

Putting it all together

The table below summarizes the different multiple hypothesis testing commands. With a small number of hypothesis tests, controlling for the FWER is useful, and then both rwolf2 and wyoung do everything you need. With lots of outcomes and treatments, controlling for the FDR is likely to be preferred in many economics applications, and so the Anderson q-value approach is my stand-by. The Young omnibus test of overall significance is a useful complement, but answers a different question.

MHT Figure 7

Lead Economist, Development Research Group, World Bank


Stata for Students: t-tests

This article is part of the Stata for Students series. If you are new to Stata we strongly recommend reading all the articles in the Stata Basics section.

t-tests are frequently used to test hypotheses about the population mean of a variable. The command to run one is simply ttest , but the syntax will depend on the hypothesis you want to test. In this section we'll discuss the following types of tests:

The Population Mean is Equal to Some Specified Value

One type of hypothesis simply asks whether the population mean of a variable is equal to some particular value of interest. This is called a single-sample t-test, because you look at the entire sample at once.

The Population Means for Two Variables are the Same

Another type of hypothesis looks at whether two variables have the same population mean. This is called a paired-sample t-test, because the test assumes that the values of the two variables for the same observation go together (i.e. the value of X for observation 1 has a relationship to the value of Y for observation 1 that does not exist between the value of X for observation 1 and the value of Y for observation 2).

The Population Means for Two Subsamples are the Same

The final type of hypothesis we'll consider is whether two groups have the same population mean for a single variable. This is called a two-sample t-test, and is the most common.

For all these tests we've described the null hypothesis. Usually the null hypothesis is the opposite of what you're really interested in. For example, if you're investigating differences between men and women in the mean education level, your null hypothesis will usually be that they are the same. Your alternative hypothesis could then be one of the following: that the mean education level of women is higher than the mean education level of men, that the mean education level of men is higher than the mean education level of women, or that the mean levels of education are different regardless of which is higher.

Stata will report results for all three alternative hypotheses, but you should choose which one you're interested in ahead of time. Looking at the results and then picking the alternative hypothesis that matches what you'd like to see will increase the probability of drawing the wrong conclusion from the test.

We will discuss the interpretation of the t-test in detail for the first type of hypothesis (that the mean is equal to a specified value) but the discussion applies to all the hypotheses a t-test can test.

If you plan to carry out the examples in this article, make sure you've downloaded the GSS sample to your U:\SFS folder as described in Managing Stata Files . Then create a do file called ttests.do in that folder that loads the GSS sample as described in Doing Your Work Using Do Files . If you plan on applying what you learn directly to your homework, create a similar do file but have it load the data set used for your assignment.

Hypothesis: The Population Mean is Equal to Some Specified Value

Suppose you want to test the hypothesis that the population mean of educ is 14 years. The syntax is simply:

ttest educ=14

This gives the output:

The mean of educ in the sample, which is also the best estimate of the population mean, is 13.38. But in order to evaluate the hypothesis that the mean is really 14, you have to consider the uncertainty about that estimate. The 95% confidence interval ranges from 12.97 to 13.80, which does not include 14, so it's not looking good for our null hypothesis.

Formal evaluation compares the null hypothesis ( Ho ), that the mean is 14, with one of three alternative hypotheses ( Ha ): that the mean is less than 14, that the mean is not equal to 14 but could be bigger or smaller, and that the mean is greater than 14. You must pick the alternative hypothesis you're interested in testing before running the test.

First consider Ha: mean < 14 . If the population mean is 14, then the probability of drawing a sample with a mean of 13.38 or less, given the number of observations we have and the standard deviation we observe, is 0.0018 (i.e. it's extremely unlikely). This is less than .05, so we reject the null hypothesis that the mean is 14 in favor of the alternative that the mean is less than 14.

Next consider Ha: mean != 14 . If the population mean is 14, then the probability of drawing a sample that is at least 14 - 13.38 = 0.62 away from that mean in either direction is 0.0037 (again, given the number of observations we have and the standard deviation we observe). This is exactly twice the probability of the previous hypothesis, though this is obscured by rounding. The previous hypothesis was a one-tail test (i.e. looking at the probability that the outcome is out in one of the "tails" of the probability distribution) while this is a two-tail test (i.e. looking at the probability that the outcome is in either tail of the distribution). Again the probability is less than 0.05, so we reject the null hypothesis that the mean is 14 in favor of the alternative hypothesis that the mean is something other than 14.

Finally consider Ha: mean > 14 . If the population mean is 14, then the probability of drawing a sample with a mean that is 13.38 or greater is 0.9982 (i.e. it's almost certain). This probability is nowhere near less than 0.05, so in this case we accept the null hypothesis that the mean is 14 rather than the alternative that the mean is greater than 14.

Changing the Confidence Level

If you want to consider a different confidence level, use the level() option with the desired confidence level in the parentheses:

ttest educ=14, level(90)

This produces:

The only change is that you are given a 90% confidence interval rather than a 95% confidence interval. The true mean will fall into this interval 90% of the time rather than 95% of the time like in the prior results, so this interval is slightly smaller.

Hypothesis: The Population Means for Two Variables are the Same

Suppose you wanted to test the hypothesis that the population mean for the respondent's father's education ( paeduc ) is the same as the population mean for the respondent's mother's education ( maeduc ). This is a paired sample test because the mother and father of the same respondent are related. To do this, run:

ttest paeduc=maeduc

Stata calculated the difference ( diff ) between the two means as paeduc - maeduc , so the alternative hypothesis mean(diff) < 0 is also the hypothesis that maeduc is greater than paeduc . In this case the probabilities associated with all three alternative hypotheses are well above 0.05, so no matter which alternative hypothesis you chose to test you would accept the null hypothesis that the means are the same. More precisely, we do not have sufficient evidence to reject the hypothesis that they are the same. It's possible we could reject that hypothesis if we had more observations, for example.

Hypothesis: The Population Means for Two Subsamples are the Same

Suppose you wanted to test the hypothesis that the population mean of educ is the same for men and women. To do this, run:

ttest educ, by(sex)

diff is defined as mean(male) - mean(female) , so the alternative hypothesis diff < 0 is also the hypothesis that the mean of educ for females is greater than the mean of educ for males. All the probabilities are well above 0.05, so once again no matter which alternative hypothesis you chose to test you will not reject the null hypothesis that the mean level of education for males and females is the same.

Note that this test assumed that the population variance of educ was the same for males and females. We can see from the output that the standard deviation (which is the square root of the variance) is slightly higher for males in the sample. If we think that difference is real, we can tell the ttest command to take it into account by adding the unequal option:

ttest educ, by(sex) unequal

In this case it makes very little difference.

Complete Do File

The following is a complete do file for this section.

capture log close
log using ttests.log, replace
clear all
set more off
use gss_sample
ttest educ=14
ttest educ=14, level(90)
ttest paeduc=maeduc
ttest educ, by(sex)
ttest educ, by(sex) unequal
log close

Last Revised: 9/2/2016


Institute for Digital Research and Education

What statistical analysis should I use? Statistical analyses using Stata

Version info: Code for this page was tested in Stata 12.

Introduction

This page shows how to perform a number of statistical tests using Stata. Each section gives a brief description of the aim of the statistical test, when it is used, an example showing the Stata commands and Stata output with a brief interpretation of the output. You can see the page Choosing the Correct Statistical Test for a table that shows an overview of when each test is appropriate to use.  In deciding which test is appropriate to use, it is important to consider the type of variables that you have (i.e., whether your variables are categorical, ordinal or interval and whether they are normally distributed), see What is the difference between categorical, ordinal and interval variables? for more information on this.

About the hsb data file

Most of the examples in this page will use a data file called hsb2, high school and beyond.  This data file contains 200 observations from a sample of high school students with demographic information about the students, such as their gender ( female ), socio-economic status ( ses ) and ethnic background ( race ). It also contains a number of scores on standardized tests, including tests of reading ( read ), writing ( write ), mathematics ( math ) and social studies ( socst ).  You can get the hsb2 data file from within Stata by typing:
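Assuming the file is still hosted at the usual IDRE address:

use https://stats.idre.ucla.edu/stat/stata/notes/hsb2, clear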

One sample t-test

A one sample t-test allows us to test whether a sample mean (of a normally distributed interval variable) significantly differs from a hypothesized value.  For example, using the hsb2 data file , say we wish to test whether the average writing score ( write ) differs significantly from 50.  We can do this as shown below.
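That is:

ttest write == 50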

The mean of the variable write for this particular sample of students is 52.775, which is statistically significantly different from the test value of 50.  We would conclude that this group of students has a significantly higher mean on the writing test than 50.

  • Stata Code Fragment: Descriptives, ttests, Anova and Regression
  • Stata Class Notes: Analyzing Data

One sample median test

A one sample median test allows us to test whether a sample median differs significantly from a hypothesized value.  We will use the same variable, write , as we did in the one sample t-test example above, but we do not need to assume that it is interval and normally distributed (we only need to assume that write is an ordinal variable and that its distribution is symmetric).  We will test whether the median writing score ( write ) differs significantly from 50.
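Using the Wilcoxon signed-rank test against the hypothesized median:

signrank write = 50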

The results indicate that the median of the variable write for this group is statistically significantly different from 50.

Binomial test

A one sample binomial test allows us to test whether the proportion of successes on a two-level categorical dependent variable significantly differs from a hypothesized value.  For example, using the hsb2 data file , say we wish to test whether the proportion of females ( female ) differs significantly from 50%, i.e., from .5.  We can do this as shown below.
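That is:

bitest female == .5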

The results indicate that there is no statistically significant difference (p = .2292).  In other words, the proportion of females does not significantly differ from the hypothesized value of 50%.

Chi-square goodness of fit

A chi-square goodness of fit test allows us to test whether the observed proportions for a categorical variable differ from hypothesized proportions.  For example, let’s suppose that we believe that the general population consists of 10% Hispanic, 10% Asian, 10% African American and 70% White folks.  We want to test whether the observed proportions from our sample differ significantly from these hypothesized proportions. To conduct the chi-square goodness of fit test, you need to first download the csgof program that performs this test.  You can download csgof from within Stata by typing search csgof (see How can I use the search command to search for programs and get additional help? for more information about using search ).

Now that the csgof program is installed, we can use it by typing:
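A sketch, assuming race is coded in the order Hispanic, Asian, African American, White; expperc() lists the hypothesized percentages:

csgof race, expperc(10 10 10 70)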

These results show that racial composition in our sample does not differ significantly from the hypothesized values that we supplied (chi-square with three degrees of freedom = 5.03, p = .1697).

  • Useful Stata Programs

Two independent samples t-test

An independent samples t-test is used when you want to compare the means of a normally distributed interval dependent variable for two independent groups.  For example, using the hsb2 data file , say we wish to test whether the mean for write is the same for males and females.
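The command is:

ttest write, by(female)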

The results indicate that there is a statistically significant difference between the mean writing score for males and females (t = -3.7341, p = .0002).  In other words, females have a statistically significantly higher mean score on writing (54.99) than males (50.12).

  • Stata Learning Module: A Statistical Sampler in Stata

Wilcoxon-Mann-Whitney test

The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and can be used when you do not assume that the dependent variable is a normally distributed interval variable (you only assume that the variable is at least ordinal).  You will notice that the Stata syntax for the Wilcoxon-Mann-Whitney test is almost identical to that of the independent samples t-test.  We will use the same data file (the hsb2 data file ) and the same variables in this example as we did in the independent t-test example above and will not assume that write , our dependent variable, is normally distributed.
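The command is:

ranksum write, by(female)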

The results suggest that there is a statistically significant difference between the underlying distributions of the write scores of males and the write scores of females (z = -3.329, p = 0.0009).  You can determine which group has the higher rank by looking at how the actual rank sums compare to the expected rank sums under the null hypothesis. The sum of the female ranks was higher while the sum of the male ranks was lower.  Thus the female group had higher rank.

  • FAQ: Why is the Mann-Whitney significant when the medians are equal?

Chi-square test

A chi-square test is used when you want to see if there is a relationship between two categorical variables.  In Stata, the chi2 option is used with the tabulate command to obtain the test statistic and its associated p-value.  Using the hsb2 data file , let’s see if there is a relationship between the type of school attended ( schtyp ) and students’ gender ( female ). Remember that the chi-square test assumes the expected value of each cell is five or higher.  This assumption is easily met in the examples below. However, if this assumption is not met in your data, please see the section on Fisher’s exact test below.
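For example:

tabulate schtyp female, chi2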

These results indicate that there is no statistically significant relationship between the type of school attended and gender (chi-square with one degree of freedom = 0.0470, p = 0.828).

Let’s look at another example, this time looking at the relationship between gender ( female ) and socio-economic status ( ses ).  The point of this example is that one (or both) variables may have more than two levels, and that the variables do not have to have the same number of levels.  In this example, female has two levels (male and female) and ses has three levels (low, medium and high).
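For example:

tabulate female ses, chi2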

Again we find that there is no statistically significant relationship between the variables (chi-square with two degrees of freedom = 4.5765, p = 0.101).

  • Stata Teaching Tools: Probability Tables
  • Stata Teaching Tools: Chi-squared distribution
  • Stata Textbook Examples: An Introduction to Categorical Analysis, Chapter 2

Fisher’s exact test

The Fisher’s exact test is used when you want to conduct a chi-square test, but one or more of your cells has an expected frequency of five or less.  Remember that the chi-square test assumes that each cell has an expected frequency of five or more, but the Fisher’s exact test has no such assumption and can be used regardless of how small the expected frequency is. In the example below, we have cells with observed frequencies of two and one, which may indicate expected frequencies that could be below five, so we will use Fisher’s exact test with the exact option on the tabulate command.
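For example:

tabulate schtyp race, exact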

These results suggest that there is not a statistically significant relationship between race and type of school (p = 0.597). Note that the Fisher’s exact test does not have a “test statistic”, but computes the p-value directly.

  • Stata Textbook Examples: Statistical Methods for the Social Sciences, Chapter 7

One-way ANOVA

A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable.  For example, using the hsb2 data file , say we wish to test whether the mean of write differs between the three program types ( prog ).  The command for this test would be:
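That is:

anova write prog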

The mean of the dependent variable differs significantly among the levels of program type.  However, we do not know if the difference is between only two of the levels or all three of the levels.  (The F test for the Model is the same as the F test for prog because prog was the only variable entered into the model.  If other variables had also been entered, the F test for the Model would have been different from prog .) To see the mean of write for each level of program type, you can use the tabulate command with the summarize option, as illustrated below.
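For example:

tabulate prog, summarize(write)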

From this we can see that the students in the academic program have the highest mean writing score, while students in the vocational program have the lowest.

  • Design and Analysis: A Researchers Handbook Third Edition by Geoffrey Keppel
  • Stata Frequently Asked Questions
  • Stata Programs for Data Analysis

Kruskal Wallis test

The Kruskal Wallis test is used when you have one independent variable with two or more levels and an ordinal dependent variable. In other words, it is the non-parametric version of ANOVA and a generalized form of the Mann-Whitney test, since it permits two or more groups.  We will use the same data file as the one way ANOVA example above (the hsb2 data file ) and the same variables as in the example above, but we will not assume that write is a normally distributed interval variable.
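The command is:

kwallis write, by(prog)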

If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly different value of chi-squared.  With or without ties, the results indicate that there is a statistically significant difference among the three type of programs.

Paired t-test

A paired (samples) t-test is used when you have two related observations (i.e. two observations per subject) and you want to see if the means on these two normally distributed interval variables differ from one another. For example, using the hsb2 data file we will test whether the mean of read is equal to the mean of write .
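The command is:

ttest read == write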

These results indicate that the mean of read is not statistically significantly different from the mean of write (t = -0.8673, p = 0.3868).

  • Stata Learning Module: Comparing Stata and SAS Side by Side

Wilcoxon signed rank sum test

The Wilcoxon signed rank sum test is the non-parametric version of a paired samples t-test.  You use the Wilcoxon signed rank sum test when you do not wish to assume that the difference between the two variables is interval and normally distributed (but you do assume the difference is ordinal). We will use the same example as above, but we will not assume that the difference between read and write is interval and normally distributed.
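The command is:

signrank read = write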

The results suggest that there is not a statistically significant difference between read and write .

If you believe the differences between read and write were not ordinal but could merely be classified as positive and negative, then you may want to consider a sign test in lieu of the sign rank test.  Again, we will use the same variables in this example and assume that this difference is not ordinal.
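The command is:

signtest read = write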

This output gives both of the one-sided tests as well as the two-sided test.  Assuming that we were looking for any difference, we would use the two-sided test and conclude that no statistically significant difference was found (p=.5565).

McNemar test

You would perform McNemar’s test if you were interested in the marginal frequencies of two binary outcomes. These binary outcomes may be the same outcome variable on matched pairs (like a case-control study) or two outcome variables from a single group.  For example, let us consider two questions, Q1 and Q2, from a test taken by 200 students. Suppose 172 students answered both questions correctly, 15 students answered both questions incorrectly, 7 answered Q1 correctly and Q2 incorrectly, and 6 answered Q2 correctly and Q1 incorrectly. These counts can be considered in a two-way contingency table.  The null hypothesis is that the two questions are answered correctly or incorrectly at the same rate (or that the contingency table is symmetric). We can enter these counts into Stata using mcci , a command from Stata’s epidemiology tables. The outcome is labeled according to case-control study conventions.
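A sketch with the counts above; the ordering of the four cell counts (both correct, Q1-only correct, Q2-only correct, both incorrect) is an assumption about how the table was laid out:

mcci 172 7 6 15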

McNemar’s chi-square statistic suggests that there is not a statistically significant difference in the proportions of correct/incorrect answers to these two questions.

One-way repeated measures ANOVA

You would perform a one-way repeated measures analysis of variance if you had one categorical independent variable and a normally distributed interval dependent variable that was repeated at least twice for each subject.  This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. This tests whether the mean of the dependent variable differs by the categorical variable.  We have an example data set called rb4 , which is used in Kirk’s book Experimental Design.  In this data set, y is the dependent variable, a is the repeated measure and s is the variable that indicates the subject number.
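A sketch of the command, with y, a and s as described; repeated(a) requests the repeated-measures corrections discussed below:

anova y a s, repeated(a)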

You will notice that this output gives four different p-values.  The “regular” (0.0001) is the p-value that you would get if you assumed compound symmetry in the variance-covariance matrix.  Because that assumption is often not valid, the three other p-values offer various corrections (the Huynh-Feldt (H-F), Greenhouse-Geisser (G-G) and Box’s conservative (Box) corrections).  No matter which p-value you use, our results indicate that we have a statistically significant effect of a at the .05 level.

  • Stata FAQ: How can I test for nonadditivity in a randomized block ANOVA in Stata?
  • Stata Textbook Examples, Experimental Design, Chapter 7
  • Stata Code Fragment: ANOVA

Repeated measures logistic regression

If you have a binary outcome measured repeatedly for each subject and you wish to run a logistic regression that accounts for the effect of these multiple measures from each subject, you can perform a repeated measures logistic regression.  In Stata, this can be done using the xtgee command and indicating binomial as the probability distribution and logit as the link function to be used in the model. The exercise data file contains 3 pulse measurements of 30 people assigned to 2 different diet regimens and 3 different exercise regimens. If we define a “high” pulse as being over 100, we can then predict the probability of a high pulse using diet regimen.

First, we use xtset to define which variable defines the repetitions.  In this dataset, there are three measurements taken for each id , so we will use id as our panel variable. Then we can use i. before diet so that we can create indicator variables as needed.
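A sketch, assuming the raw pulse variable is named pulse and defining highpulse as in the text:

gen highpulse = (pulse > 100)
xtset id
xtgee highpulse i.diet, family(binomial) link(logit)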

These results indicate that diet is not statistically significant (Z = 1.24, p = 0.216).

Factorial ANOVA

A factorial ANOVA has two or more categorical independent variables (either with or without the interactions) and a single normally distributed interval dependent variable.  For example, using the hsb2 data file we will look at writing scores ( write ) as the dependent variable and gender ( female ) and socio-economic status ( ses ) as independent variables, and we will include an interaction of female by ses .  Note that in Stata, you do not need to have the interaction term(s) in your data set.  Rather, you can have Stata create it/them temporarily by placing an asterisk between the variables that will make up the interaction term(s).
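In current Stata the factor-variable operator ## plays the role of the asterisk described above, expanding to the main effects plus their interaction:

anova write female##ses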

These results indicate that the overall model is statistically significant (F = 5.67, p = 0.001).  The variables female and ses are also statistically significant (F = 16.59, p = 0.0001 and F = 6.61, p = 0.0017, respectively).  However, the interaction between female and ses is not statistically significant (F = 0.13, p = 0.8753).

  • Stata Textbook Examples, Experimental Design, Chapter 9

Friedman test

You perform a Friedman test when you have one within-subjects independent variable with two or more levels and a dependent variable that is not interval and normally distributed (but at least ordinal).  We will use this test to determine if there is a difference in the reading, writing and math scores.  The null hypothesis in this test is that the distribution of the ranks of each type of score (i.e., reading, writing and math) are the same. To conduct the Friedman test in Stata, you need to first download the friedman program that performs this test.  You can download friedman from within Stata by typing search friedman (see How can I use the search command to search for programs and get additional help? for more information about using search ).  Also, your data will need to be transposed such that subjects are the columns and the variables are the rows.  We will use the xpose command to arrange our data this way.
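A sketch of the steps; the variable range v1-v200 assumes the 200 observations become variables after xpose:

keep read write math
xpose, clear
friedman v1-v200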

Friedman’s chi-square has a value of 0.6175 and a p-value of 0.7344 and is not statistically significant.  Hence, there is no evidence that the distributions of the three types of scores are different.

Ordered logistic regression

Ordered logistic regression is used when the dependent variable is ordered, but not continuous.  For example, using the hsb2 data file we will create an ordered variable called write3 .  This variable will have the values 1, 2 and 3, indicating a low, medium or high writing score.  We do not generally recommend categorizing a continuous variable in this way; we are simply creating a variable to use for this example.  We will use gender ( female ), reading score ( read ) and social studies score ( socst ) as predictor variables in this model.
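A sketch; the original cut points for write3 are not reproduced here, so terciles created with xtile stand in for them:

xtile write3 = write, nq(3)
ologit write3 female read socst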

The results indicate that the overall model is statistically significant (p < .0001), as are each of the predictor variables (p < .001).  There are two cutpoints for this model because there are three levels of the outcome variable.

One of the assumptions underlying ordinal logistic (and ordinal probit) regression is that the relationship between each pair of outcome groups is the same.  In other words, ordinal logistic regression assumes that the coefficients that describe the relationship between, say, the lowest versus all higher categories of the response variable are the same as those that describe the relationship between the next lowest category and all higher categories, etc.  This is called the proportional odds assumption or the parallel regression assumption.  Because the relationship between all pairs of groups is the same, there is only one set of coefficients (only one model).  If this was not the case, we would need different models (such as a generalized ordered logit model) to describe the relationship between each pair of outcome groups.  To test this assumption, we can use either the omodel command ( search omodel , see How can I use the search command to search for programs and get additional help? for more information about using search ) or the brant command. We will show both below.
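Sketches of the two tests; brant (from the user-written spost package) is run after the ologit fit above:

omodel logit write3 female read socst
brant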

Both of these tests indicate that the proportional odds assumption has not been violated.

  • Stata FAQ: In ordered probit and logit, what are the cut points?
  • Stata Annotated Output: Ordered logistic regression

Factorial logistic regression

A factorial logistic regression is used when you have two or more categorical independent variables but a dichotomous dependent variable.  For example, using the hsb2 data file we will use female as our dependent variable, because it is the only dichotomous (0/1) variable in our data set; certainly not because it is common practice to use gender as an outcome variable.  We will use type of program ( prog ) and school type ( schtyp ) as our predictor variables.  Because prog is a categorical variable (it has three levels), we need to create dummy codes for it.  The use of i.prog does this.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.
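A sketch using factor-variable notation, where ## includes the main effects and the prog-by-schtyp interaction:

logit female i.prog##i.schtyp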

The results indicate that the overall model is not statistically significant (LR chi2 = 3.15, p = 0.6774).  Furthermore, none of the coefficients are statistically significant either.  We can use the test command to get the test of the overall effect of prog as shown below.  This shows that the overall effect of prog is not statistically significant.
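For example (2.prog and 3.prog name the indicators for the second and third program levels):

test 2.prog 3.prog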

Likewise, we can use the testparm command to get the test of the overall effect of the prog by schtyp interaction, as shown below.  This shows that the overall effect of this interaction is not statistically significant.
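For example:

testparm i.prog#i.schtyp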

If you prefer, you could use the logistic command to see the results as odds ratios, as shown below.
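That is:

logistic female i.prog##i.schtyp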

Correlation

A correlation is useful when you want to see the linear relationship between two (or more) normally distributed interval variables.  For example, using the hsb2 data file we can run a correlation between two continuous variables, read and write .
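The command is:

correlate read write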

In the second example, we will run a correlation between a dichotomous variable, female , and a continuous variable, write . Although it is assumed that the variables are interval and normally distributed, we can include dummy variables when performing correlations.
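The command is:

correlate female write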

In the first example above, we see that the correlation between read and write is 0.5968.  By squaring the correlation and then multiplying by 100, you can determine what percentage of the variability is shared.  Let’s round 0.5968 to be 0.6, which when squared would be .36, multiplied by 100 would be 36%.  Hence read shares about 36% of its variability with write .  In the output for the second example, we can see the correlation between write and female is 0.2565. Squaring this number yields .06579225, meaning that female shares approximately 6.5% of its variability with write .

  • Annotated Stata Output: Correlation
  • Stata Teaching Tools
  • Stata Class Notes: Exploring Data

Simple linear regression

Simple linear regression allows us to look at the linear relationship between one normally distributed interval predictor and one normally distributed interval outcome variable.  For example, using the hsb2 data file , say we wish to look at the relationship between writing scores ( write ) and reading scores ( read ); in other words, predicting write from read .
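The command is:

regress write read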

We see that the relationship between write and read is positive (.5517051) and based on the t-value (10.47) and p-value (0.000), we would conclude this relationship is statistically significant.  Hence, we would say there is a statistically significant positive linear relationship between reading and writing.

  • Regression With Stata: Chapter 1 – Simple and Multiple Regression
  • Stata Annotated Output: Regression
  • Stata Textbook Examples: Regression with Graphics, Chapter 2
  • Stata Textbook Examples: Applied Regression Analysis, Chapter 5

Non-parametric correlation

A Spearman correlation is used when one or both of the variables are not assumed to be normally distributed and interval (but are assumed to be ordinal). The values of the variables are converted into ranks and then correlated.  In our example, we will look for a relationship between read and write .  We will not assume that both of these variables are normal and interval.
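The command is:

spearman read write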

The results suggest that the relationship between read and write (rho = 0.6167, p = 0.000) is statistically significant.

Simple logistic regression

Logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1).  We have only one variable in the hsb2 data file that is coded 0 and 1, and that is female .  We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output.  The first variable listed after the logistic (or logit ) command is the outcome (or dependent) variable, and all of the rest of the variables are predictor (or independent) variables.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.  In our example, female will be the outcome variable, and read will be the predictor variable.  As with OLS regression, the predictor variables must be either dichotomous or continuous; they cannot be categorical.
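The command is:

logistic female read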

The results indicate that reading score ( read ) is not a statistically significant predictor of gender (i.e., being female), z = -0.75, p = 0.453.  Likewise, the test of the overall model is not statistically significant, LR chi-squared 0.56, p = 0.4527.

  • Stata Textbook Examples: Applied Logistic Regression (2nd Ed) Chapter 1
  • Stata Web Books: Logistic Regression in Stata
  • Stata Data Analysis Example: Logistic Regression
  • Annotated Stata Output: Logistic Regression Analysis
  • Stata FAQ: How do I interpret odds ratios in logistic regression?
  • Stata Library
  • Teaching Tools: Graph Logistic Regression Curve

Multiple regression

Multiple regression is very similar to simple regression, except that in multiple regression you have more than one predictor variable in the equation.  For example, using the hsb2 data file we will predict writing score from gender ( female ), reading, math, science and social studies ( socst ) scores.
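The command is:

regress write female read math science socst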

The results indicate that the overall model is statistically significant (F = 58.60, p = 0.0000).  Furthermore, all of the predictor variables are statistically significant except for read .

  • Regression with Stata: Lesson 1 – Simple and Multiple Regression
  • Annotated Output: Multiple Linear Regression
  • Stata Textbook Examples: Applied Linear Statistical Models
  • Stata Textbook Examples: Regression Analysis by Example, Chapter 3

Analysis of covariance

Analysis of covariance is like ANOVA, except in addition to the categorical predictors you also have continuous predictors as well.  For example, the one way ANOVA example used write as the dependent variable and prog as the independent variable.  Let’s add read as a continuous variable to this model, as shown below.
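A sketch, using c. to mark read as continuous:

anova write prog c.read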

The results indicate that even after adjusting for reading score ( read ), writing scores still significantly differ by program type ( prog ) F = 5.87, p = 0.0034.

  • Stata Textbook Examples: Design and Analysis, Chapter 14

Multiple logistic regression

Multiple logistic regression is like simple logistic regression, except that there are two or more predictors.  The predictors can be interval variables or dummy variables, but cannot be categorical variables.  If you have categorical predictors, they should be coded into one or more dummy variables. We have only one variable in our data set that is coded 0 and 1, and that is female .  We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output.  The first variable listed after the logistic (or logit ) command is the outcome (or dependent) variable, and all of the rest of the variables are predictor (or independent) variables.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.  In our example, female will be the outcome variable, and read and write will be the predictor variables.
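The command is:

logistic female read write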

These results show that both read and write are significant predictors of female .

  • Stata Annotated Output: Logistic Regression
  • Stata Web Books: Logistic Regression with Stata
  • Stata Textbook Examples: Applied Logistic Regression, Chapter 2
  • Stata Textbook Examples: Applied Regression Analysis, Chapter 8
  • Stata Textbook Examples: Introduction to Categorical Analysis, Chapter 5
  • Stata Textbook Examples: Regression Analysis by Example, Chapter 12

Discriminant analysis

Discriminant analysis is used when you have one or more normally distributed interval independent variables and a categorical dependent variable.  It is a multivariate technique that considers the latent dimensions in the independent variables for predicting group membership in the categorical dependent variable.  For example, using the hsb2 data file , say we wish to use read , write and math scores to predict the type of program a student belongs to ( prog ). For this analysis, you need to first download the daoneway program that performs this test. You can download daoneway from within Stata by typing search daoneway (see How can I use the search command to search for programs and get additional help? for more information about using search ).

You can then perform the discriminant function analysis like this.
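A sketch of the call, assuming daoneway follows the by() grouping syntax common to similar Stata commands (as a user-written program, its exact syntax is described in its own help file):

. daoneway read write math, by(prog)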

Clearly, the Stata output for this procedure is lengthy, and it is beyond the scope of this page to explain all of it.  However, the main point is that two canonical variables are identified by the analysis, the first of which seems to be more related to program type than the second.

  • Stata Data Analysis Examples: Discriminant Function Analysis

One-way MANOVA

MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or more dependent variables. In a one-way MANOVA, there is one categorical independent variable and two or more dependent variables. For example, using the hsb2 data file , say we wish to examine the differences in read , write and math broken down by program type ( prog ). For this analysis, you can use the manova command and then perform the analysis like this.
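With the manova command, the dependent variables appear to the left of the equals sign and the categorical predictor to the right:

. manova read write math = prog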

This command produces three different test statistics that are used to evaluate the statistical significance of the relationship between the independent variable and the outcome variables.  According to all three criteria, the students in the different programs differ in their joint distribution of read , write and math . See also

  • Stata Data Analysis Examples: One-way MANOVA
  • Stata Annotated Output: One-way MANOVA
  • Stata FAQ: How can I do multivariate repeated measures in Stata?

Multivariate multiple regression

Multivariate multiple regression is used when you have two or more dependent variables that are to be predicted from two or more predictor variables.  In our example, we will predict write and read from female , math , science and social studies ( socst ) scores.
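The mvreg command uses the same left/right layout, with the two outcomes placed before the equals sign:

. mvreg write read = female math science socst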

Many researchers familiar with traditional multivariate analysis may not recognize the tests above, which do not include Wilks' Lambda, Pillai's Trace or the Hotelling-Lawley Trace, the statistics with which they are familiar. It is possible to obtain these statistics using the mvtest command written by David E. Moore of the University of Cincinnati. UCLA updated this command to work with Stata 6 and above.  You can download mvtest from within Stata by typing search mvtest (see How can I use the search command to search for programs and get additional help? for more information about using search).

Now that we have downloaded it, we can use the command shown below.
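A sketch of the call, assuming the user-written mvtest takes the predictor to be tested as its argument once the mvreg model has been fit:

. mvtest female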

These results show that female has a significant relationship with the joint distribution of write and read .  The mvtest command could then be repeated for each of the other predictor variables.

  • Regression with Stata: Chapter 4, Beyond OLS
  • Stata Data Analysis Examples: Multivariate Multiple Regression
  • Stata Textbook Examples: Econometric Analysis, Chapter 16

Canonical correlation

Canonical correlation is a multivariate technique used to examine the relationship between two groups of variables.  For each set of variables, it creates latent variables and looks at the relationships among the latent variables. It assumes that all variables in the model are interval and normally distributed.  Stata requires that each of the two groups of variables be enclosed in parentheses.  There need not be an equal number of variables in the two groups.
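For example, pairing the reading and writing scores against the math and science scores (this particular grouping of hsb2 variables is illustrative):

. canon (read write) (math science)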

The output above shows the linear combinations corresponding to the first canonical correlation.  At the bottom of the output are the two canonical correlations.  These results indicate that the first canonical correlation is .7728.  You will note that Stata is brief and may not provide you with all of the information that you may want.  Several programs have been developed to provide more information regarding the analysis.  You can download this family of programs by typing search cancor (see How can I use the search command to search for programs and get additional help? for more information about using search).

Because the output from the cancor command is lengthy, we will use the cantest command to obtain the eigenvalues, F-tests and associated p-values that we want.  Note that you do not have to specify a model with either the cancor or the cantest commands if they are issued after the canon command.
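Since the model carries over from the preceding canon command, cantest can be issued with no arguments (assuming the user-written program has been installed via search cancor):

. cantest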

The F-test in this output tests the hypothesis that the first canonical correlation is equal to zero.  Clearly, F = 56.4706 is statistically significant.  However, the second canonical correlation of .0235 is not statistically significantly different from zero (F = 0.1087, p = 0.7420).

  • Stata Data Analysis Examples: Canonical Correlation Analysis
  • Stata Annotated Output: Canonical Correlation Analysis
  • Stata Textbook Examples: Computer-Aided Multivariate Analysis, Chapter 10

Factor analysis

Factor analysis is a form of exploratory multivariate analysis that is used either to reduce the number of variables in a model or to detect relationships among variables.  All variables involved in the factor analysis need to be continuous and are assumed to be normally distributed. The goal of the analysis is to identify factors which underlie the variables.  There may be fewer factors than variables, but there may not be more factors than variables.  For our example, let's suppose that we think that there are some common factors underlying the various test scores.  We will first use the principal components method of extraction (by using the pc option) and then the principal components factor method of extraction (by using the pcf option).  This parallels the output produced by SAS and SPSS.
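Using the five hsb2 test scores, the two extractions would be requested as follows (note that in current Stata releases the pc option has been superseded by the separate pca command):

. factor read write math science socst, pc
. factor read write math science socst, pcf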

Now let’s rerun the factor analysis with a principal component factors extraction method and retain factors with eigenvalues of .5 or greater. Then we will use a varimax rotation on the solution.
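A sketch of that sequence, with mineigen(.5) setting the eigenvalue cutoff and rotate applying the (default) varimax rotation afterwards:

. factor read write math science socst, pcf mineigen(.5)
. rotate, varimax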

Note that by default, Stata will retain all factors with positive eigenvalues; hence the use of the mineigen option or the factors(#) option.  The factors(#) option does not specify the number of factors to retain, but rather the largest number of factors to retain.  From the table of factor loadings, we can see that all five of the test scores load onto the first factor, while all five load only weakly on the second factor.  Uniqueness (which is the opposite of communality) is the proportion of variance of the variable (e.g., read) that is not accounted for by all of the factors taken together, and a very high uniqueness can indicate that a variable may not belong with any of the factors.  Factor loadings are often rotated in an attempt to make them more interpretable.  Stata performs both varimax and promax rotations.

The purpose of rotating the factors is to get the variables to load either very high or very low on each factor.  In this example, because all of the variables loaded onto factor 1 and not on factor 2, the rotation did not aid in the interpretation.  Instead, it made the results even more difficult to interpret.

To obtain a scree plot of the eigenvalues, you can use the greigen command.  We have included a reference line on the y-axis at one to aid in determining how many factors should be retained.
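A sketch, assuming the user-written greigen passes yline() through as a standard graph option for the reference line (in current Stata releases, the official screeplot command serves the same purpose):

. greigen, yline(1)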

  • Stata Annotated Output: Factor Analysis
  • Stata Textbook Examples: Regression with Graphics, Chapter 8


How to Perform a Two Sample t-test in Stata

A  two sample t-test  is used to test whether or not the means of two populations are equal.

This tutorial explains how to conduct a two sample t-test in Stata.

Example: Two Sample t-test in Stata

Researchers want to know if a new fuel treatment leads to a change in the average mpg of a certain car. To test this, they conduct an experiment in which 12 cars receive the new fuel treatment and 12 cars do not.

Perform the following steps to conduct a two sample t-test to determine if there is a difference in average mpg between these two groups.

Step 1: Load the data.

First, load the data by typing use http://www.stata-press.com/data/r13/fuel3 in the Command window and pressing Enter.


Step 2: View the raw data.

Before we perform a two sample t-test, let’s first view the raw data. Along the top menu bar, go to  Data > Data Editor > Data Editor (Browse) . The first column,  mpg , shows the mpg for a given car. The second column,  treated , indicates whether or not the car received the fuel treatment (0 = no, 1 = yes).


Step 3: Visualize the data.

Next, let’s visualize the data. We’ll create   boxplots   to view the distribution of mpg values for each group.

Along the top menu bar, go to Graphics > Box plot. Under Variables, choose mpg. Then, in the Categories section, choose treated as the grouping variable.

Click  OK . A chart with two boxplots will automatically be displayed:

[Figure: side-by-side boxplots of mpg for the treated (1) and untreated (0) groups]
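The same plot can also be produced directly from the command line:

. graph box mpg, over(treated)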

Right away we can see that the mpg appears to be higher for the treated group (1) compared to the non-treated group (0), but we need to conduct a two-sample t-test to see if these differences are statistically significant.

Step 4: Perform a two sample t-test.

Along the top menu bar, go to Statistics > Summaries, tables, and tests > Classical tests of hypotheses > t test (mean-comparison test) .

Choose  Two-sample using groups . For Variable name, choose  mpg . For Group variable name, choose  treated . For Confidence level, choose any level you’d like. A value of 95 corresponds to a significance level of 0.05. We will leave this at 95. Lastly, click OK .
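Equivalently, the whole dialog can be replaced by a single command typed into the Command window:

. ttest mpg, by(treated)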


The results of the two sample t-test will be displayed:

[Figure: Stata two-sample t-test output]

We are given the following information for each group:

Obs:  The number of observations. There are 12 observations in each group.

Mean:  The mean mpg. In group 0, the mean is 21. In group 1, the mean is 22.75.

Std. Err: The standard error of the mean, calculated as s / √n, where s is the sample standard deviation and n the group size.

Std. Dev:  The standard deviation of mpg.

95% Conf. Interval:  The 95% confidence interval for the true population mean of mpg.

t:  The test statistic of the two-sample t-test.

degrees of freedom:  The degrees of freedom used for the test, calculated as n1 + n2 − 2 = 12 + 12 − 2 = 22.

The p-values for three different two sample t-tests are displayed at the bottom of the results. Since we are interested in understanding if the average mpg is simply different between the two groups, we will look at the results of the middle test (in which the alternative hypothesis is Ha: diff !=0) which has a p-value of  0.1673 .

Since this value is not smaller than our significance level of 0.05, we fail to reject the null hypothesis. We do not have sufficient evidence to say that the true mean mpg is different between the two groups.

Step 5: Report the results.

Lastly, we will report the results of our two sample t-test. Here is an example of how to do so:

A two sample t-test was conducted on 24 cars to determine if a new fuel treatment led to a difference in mean miles per gallon. Each group contained 12 cars.   Results showed that mean mpg was not significantly different between the two groups (t = -1.428, df = 22, p = 0.1673) at a significance level of 0.05.   A 95% confidence interval for the true difference in population means was (-4.29, 0.79).


A GUIDE TO APPLIED STATISTICS WITH STATA


Written by:

Ylva B Almquist

Let us return to the matter of statistical significance: what is it really?

Well, for example, if we find that cats are smarter than dogs, we want to know whether this difference is “real”. Hypothesis testing is how we may answer that question.

We start by converting the question into two hypotheses: the null hypothesis (H0), which states that there is no difference, and the alternative hypothesis (H1), which states that there is a difference.


Hypothesis Tests with Linear Regression by using Stata

Two types of hypothesis tests appear in regress output tables. As with other common hypothesis tests, they begin from the assumption that the observations in the sample at hand were drawn randomly and independently from an infinitely large population.

  • Overall F test: The F statistic at the upper right in the regression table evaluates the null hypothesis that in the population, coefficients on all the model’s x variables equal zero.
  • Individual t tests: The third and fourth columns of the regression table contain t tests for each individual regression coefficient. These evaluate the null hypotheses that in the population, the coefficient on each particular x variable equals zero.

The t test probabilities are two-sided. For one-sided tests, divide these p-values in half.

In addition to these standard F and t tests, Stata can perform F tests of user-specified hypotheses. The test command refers back to the most recently fitted model, such as anova or regress. Returning to our four-predictor regression example, suppose we wish to test the null hypothesis that both adfert and chldmort (considered jointly) have zero effect.

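Following the syntax shown in help test, one would simply list the two coefficients to be tested jointly:

. test adfert chldmort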

While the individual null hypotheses point in opposite directions (effect of chldmort significant, adfert not), the joint hypothesis that coefficients on chldmort and adfert both equal zero can reasonably be rejected (p < .00005). Such tests on subsets of coefficients are useful when we have several conceptually related predictors or when individual coefficient estimates appear unreliable due to multicollinearity.

test could duplicate the overall F test:

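A sketch, listing all four predictors (the variable names are those appearing in the surrounding examples):

. test school loggdp adfert chldmort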

test also could duplicate the individual-coefficient tests. Regarding the coefficient on school, for example, the F statistic obtained by test equals the square of the t statistic in the regression table, \(2.25 = (-1.50)^2\), and yields exactly the same p-value.

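For a single coefficient, the command is simply:

. test school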

Applications of test that are more useful in advanced work (although not meaningful for the life-expectancy example at hand) include the following.

  • Test whether a coefficient equals a specified constant. For example, to test the null hypothesis that the coefficient on school equals 1 (\(H_0: \beta_1 = 1\)), instead of testing the usual null hypothesis that it equals 0 (\(H_0: \beta_1 = 0\)), type

. test school = 1

  • Test whether two coefficients are equal. For example, the following command evaluates the null hypothesis \(H_0: \beta_2 = \beta_3\)

. test loggdp = adfert

  • Finally, test understands some algebraic expressions. We could request something like the following, which would test \(H_0: \beta_2 = (\beta_3 + \beta_4)/100\)

. test school = (loggdp + adfert)/100

Consult help test for more information and examples.

Source: Hamilton Lawrence C. (2012), Statistics with STATA: Version 12 , Cengage Learning; 8th edition.



gabors-data-analysis/da-coding-stata

Coding for Data Analysis with Stata

Introduction to Data Analysis with Stata - lecture materials by László Tõkés (CUB) with Ágoston Reguly (Georgia Tech) and Gábor Békés ( CEU , KRTK , CEPR )

This course material is a supplement to Data Analysis for Business, Economics, and Policy by Gábor Békés (CEU) and Gábor Kézdi (U. Michigan), Cambridge University Press, 2021.

Textbook information: see the textbook's website gabors-data-analysis.com or visit Cambridge University Press

To get a copy: Inspection copy for instructors or buy from Amazon or order online around the globe

Acknowledgments

We thank the CEU Department of Economics and Business for financial support.

This is version 1.0. (2022-10-03)

Comments are really welcome in email or as a GitHub issue.

About this lecture series

This series of lectures offers a brief introduction to Stata, containing 13+1 lectures, including a summary lecture. The course serves as an introduction to the Stata programming language and software environment for data exploration, data wrangling, data analysis, and visualization. The structure tries to follow the structure of the textbook, although there are of course some differences: the main organizing principle of the lectures is the logic of Stata, not necessarily the logic of the book. After going through the lectures, students will be able to reproduce the results of the first two parts of the textbook (Data Exploration, and Regression Analysis) in Stata. Moreover, they will hopefully also understand the language of Stata well enough to go on in the textbook and do the exercises in the second two parts on their own.

Note that in the lectures I use Stata 14; however, all the elements discussed here are compatible with newer versions (and in most cases older ones) as well.

Lectures 1 to 11 - complementing Part I: Data Exploration (Chapters 1-6) - focus on the logic of the Stata language, data preparation and wrangling, exploratory data analysis, and hypothesis testing. Please note that the first lecture is boring, but unfortunately unavoidable. I tried to be as brief as possible there.

Lectures 12 to 14 - complementing Part II: Regression Analysis (Chapters 7-12) - focus on the basics of regression analysis, the presentation of regression results, and visualization.

Teaching philosophy

We believe in learning by doing, so although the lectures offer a detailed introduction to the topic with many explanations and examples, the more important part is the homework assignments, which help students practice. We also recommend that students work through the data exercises at the end of the chapters of the textbook.

This is not a hardcore coding course, but a course to supplement the material of the textbook. The lectures focus on the commands that are needed to reproduce the case studies and to solve the data exercises of the textbook.

The structure of the material reflects these principles. On one hand, the lectures include pre-written codes as an introduction to the topic, while, on the other hand, homework assignments and data exercises of the textbook can help students to gain experience in coding. In most cases, pre-written codes and homework assignments reproduce case study results that can be found in the textbook.

These lectures can serve as the basis for a course on Stata programming for data wrangling and basic regression analysis. Although the series is structured and comprehensive enough to stand alone, we recommend teaching (and learning) it hand in hand with the textbook, since almost all examples are from the textbook.

This series of lectures does not require any prior knowledge of Stata programming.

The material is based on experience from years of teaching coding and empirical courses at Corvinus University of Budapest, from working as a research assistant and later as a researcher, and of course on advice from many great resources such as

  • Getting Started with Stata for Windows by Stata Press
  • Economics Lesson with Stata by Data Carpentry
  • UCLA's Stata Learning Modules
  • Kurt Schmidheiny's brief intro document
  • Fundamentals of data analysis and visualization from a group of instructors
  • A huge collection of advanced Stata stuff on the Medium site
  • A great online training by SSCC
  • A four-piece tutorial by Germán Rodríguez from Princeton University

and many others, listed in the lecture's READMEs.

Lectures, contents, and case-studies

The following table briefly summarizes the lectures: the type of each lecture, its expected learning outcome, and how it relates to the textbook's case studies and datasets.

Found an error or have a suggestion?

Awesome, we know there are errors and bugs. Or just much better ways to do a procedure.

To make a suggestion, please open a GitHub issue here with a title containing the case study name. You may also contact us directly.

