
SPSS for Beginners

This guide provides a basic introduction to SPSS. It is recommended that you have an in-depth understanding of the various statistical methods available before choosing one for your research study.

Parametric Inferential Statistics

Pearson correlation (sometimes referred to as Pearson’s r, bivariate correlation, or Pearson product-moment correlation coefficient) is used to determine the strength and direction of a relationship between two continuous variables. This test assumes both variables are approximately normally distributed.

Note that this is a parametric test; if its assumptions are not met, you should instead run a Spearman’s rank-order correlation (i.e., the non-parametric alternative to a Pearson correlation). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. Ideally, your p-value should be > .05, your histogram should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plot should fall fairly close to the line.
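If you want to cross-check the normality assumption outside SPSS, the Shapiro-Wilk test reported by SPSS’s Explore procedure is also available in Python’s scipy library. The sketch below uses simulated data; the variable names are hypothetical, not part of SPSS.

```python
import numpy as np
from scipy import stats

# Simulated, approximately normal data (made up for illustration).
rng = np.random.default_rng(42)
scores = rng.normal(loc=50, scale=10, size=40)

# Shapiro-Wilk test of normality, as in SPSS's Explore output.
stat, p = stats.shapiro(scores)

# As in SPSS: p > .05 suggests no significant departure from normality.
print(f"W = {stat:.3f}, p = {p:.3f}")
```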

HOW TO RUN A PEARSON CORRELATION

  1. Click on Analyze. Select Correlate. Select Bivariate.
  2. Place two or more variables in the “Variables” box.
  3. In the Correlation Coefficients section, ensure “Pearson” is checked.
  4. Click OK to run the test (results will appear in the output window).
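The same analysis can be cross-checked outside SPSS. Below is a minimal sketch in Python with scipy, using made-up data and hypothetical variable names; it is not part of the SPSS procedure above.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations on two continuous variables.
hours_studied = np.array([2, 4, 5, 7, 8, 10, 11, 13])
exam_score    = np.array([55, 60, 58, 70, 72, 80, 79, 88])

# Pearson's r: strength and direction of the linear relationship.
r, p = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.3f}, p = {p:.4f}")

# Non-parametric alternative (Spearman's rho) if normality fails:
rho, p_s = stats.spearmanr(hours_studied, exam_score)
```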

The one sample t-test is used to determine whether the mean of a single continuous variable differs from a specified constant. This test assumes that the observations are independent and that the data are normally distributed.

Note that this is a parametric test; if its assumptions are not met, you should instead run a Wilcoxon signed-rank test (i.e., the non-parametric alternative to the one sample t-test). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. Ideally, your p-value should be > .05, your histogram should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plot should fall fairly close to the line.

HOW TO RUN A ONE-SAMPLE T-TEST

  1. Click on Analyze. Select Compare Means. Select One Sample t-test. 
  2. Place one or more variables in “Test Variable(s)”, and indicate your desired constant comparison in “Test Value”.
  3. Ensure the “Estimate effect sizes” box is checked.
  4. Click OK to run the test (results will appear in the output window).
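The steps above can be cross-checked outside SPSS; here is a minimal sketch in Python with scipy, using made-up reaction-time data and a hypothetical test value of 300.

```python
import numpy as np
from scipy import stats

# Hypothetical reaction times (ms) for one sample of participants.
reaction_ms = np.array([310, 295, 330, 305, 290, 315, 312, 320])

# One-sample t-test: does the mean differ from the constant 300?
t, p = stats.ttest_1samp(reaction_ms, popmean=300)
print(f"t = {t:.3f}, p = {p:.4f}")

# Wilcoxon signed-rank alternative, testing against the same constant:
w, p_w = stats.wilcoxon(reaction_ms - 300)
```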

The independent samples t-test (also referred to as a between-subjects t-test, Student’s t-test, unpaired t-test, two-sample t-test…) is used to determine whether two groups’ means on the same continuous variable differ. This test assumes that the observations are independent and randomly sampled from normal distributions with the same population variance.

Note that this is a parametric test; if its assumptions are not met, you should instead run a Mann-Whitney U test (i.e., the non-parametric alternative to the independent samples t-test). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. Ideally, your p-value should be > .05, your histogram should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plot should fall fairly close to the line.

HOW TO RUN AN INDEPENDENT SAMPLES T-TEST

  1. Click on Analyze. Select Compare Means. Select Independent Samples T-test. 
  2. Place one or more variables in “Test Variable(s)”, and one variable in “Grouping Variable”. Be sure to click “Define Groups” to ensure the groups you wish to compare are set up properly (e.g., maybe 0 and 1 or 1 and 2, or something else).
  3. Click OK to run the test (results will appear in the output window).
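The same comparison can be sketched outside SPSS in Python with scipy, using two made-up groups (think of them as the two codes you set in “Define Groups”).

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups (e.g., coded 0 and 1).
group_a = np.array([23, 25, 28, 30, 27, 26])
group_b = np.array([31, 33, 29, 35, 34, 32])

# Student's independent samples t-test (assumes equal variances).
t, p = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t:.3f}, p = {p:.4f}")

# Welch's version (equal_var=False) if variances differ;
# Mann-Whitney U if normality fails:
u, p_u = stats.mannwhitneyu(group_a, group_b)
```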

 

The paired samples t-test (also referred to as a dependent t-test, repeated samples t-test, matched pairs t-test…) is used to determine whether the means of two continuous variables (e.g., before and after treatment) from the same group of participants differ. This test assumes that the participants are independent from one another, while the two variables (e.g., measurements) are from the same participant. This test also assumes that the distribution of differences is normally distributed and there are no extreme outliers in the differences. 

Note that this is a parametric test; if its assumptions are not met, you should instead run a Wilcoxon signed-rank test (i.e., the non-parametric alternative to the paired-samples t-test). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. For this test, normality is often assessed on the DIFFERENCE SCORES between your two related groups. Ideally, your p-value should be > .05, your histogram should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plot should fall fairly close to the line.

HOW TO RUN A PAIRED SAMPLES T-TEST

  1. Click on Analyze. Select Compare Means. Select Paired Samples T-test. 
  2. Place one or more variables in the “Variable 1” slot and one or more variables in the “Variable 2” slot in the Paired Variables window.
  3. Click OK to run the test (results will appear in the output window).
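As a cross-check outside SPSS, the same paired comparison looks like this in Python with scipy; the before/after scores below are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical scores from the SAME participants, measured twice.
before = np.array([85, 90, 78, 92, 88, 76, 81, 89])
after  = np.array([80, 86, 75, 90, 84, 74, 79, 85])

# Paired samples t-test on the difference scores.
t, p = stats.ttest_rel(before, after)
print(f"t = {t:.3f}, p = {p:.4f}")

# Wilcoxon signed-rank alternative if the difference scores aren't normal:
w, p_w = stats.wilcoxon(before, after)
```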

The one-way Analysis of Variance (ANOVA) is used to determine whether the means from three or more groups differ. This test assumes that each group is an independent random sample from normally distributed populations, and that the populations have equal variances. 

Note that this is a parametric test; if its assumptions are not met, you should instead run a Kruskal-Wallis H test (i.e., the non-parametric alternative to the one-way ANOVA). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. You will need to place your dependent variable in the “Dependent List” box, and your independent variable in the “Factor List” box. Ideally, your p-values should be > .05, your histograms should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plots should fall fairly close to the line.

HOW TO RUN A ONE-WAY ANOVA

  1. Click on Analyze. Select Compare Means. Select One-Way ANOVA.
  2. Place your continuous dependent variable in the “Dependent List” slot, place your categorical independent variable in the “Factor” slot.
  3. If you wish to conduct pairwise multiple comparisons between means, click on Post Hoc and select the desired test (Bonferroni and Tukey’s are common). Click Continue to save your choices. 

  4. Click on the Options button. Check off Descriptives, Homogeneity of variance test, and Means plot. Click Continue to save your choices.
  5. Click OK to run the test (results will appear in the output window).
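The steps above can be cross-checked outside SPSS; here is a minimal sketch in Python with scipy, using three made-up groups.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three independent groups.
low    = np.array([12, 14, 11, 13, 15])
medium = np.array([16, 18, 17, 19, 15])
high   = np.array([22, 20, 23, 21, 24])

# One-way ANOVA: do the three group means differ?
f, p = stats.f_oneway(low, medium, high)
print(f"F = {f:.3f}, p = {p:.4f}")

# Kruskal-Wallis H if normality fails:
h, p_k = stats.kruskal(low, medium, high)
```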

The one-way repeated measures Analysis of Variance (ANOVA) is used to determine whether the means of three or more continuous variables OR measurements at three or more timepoints on the same continuous variable differ for one group. There are several assumptions of this test; two important ones you should consider are normality and sphericity.

Note that this is a parametric test; if its assumptions are not met, you should instead run a Friedman test (i.e., the non-parametric alternative to the one-way repeated measures ANOVA). For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. You will need to place each of your groups in the “Dependent List” box. Ideally, your p-values should be > .05, your histograms should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plots should fall fairly close to the line.

HOW TO RUN ONE-WAY REPEATED MEASURES ANOVA

  1. Click on Analyze. Select General Linear Model. Select Repeated Measures.
  2. In the “Repeated Measures Define Factor(s)” pop-up, input a name for your repeated measures variable in the “Within-Subject Factor Name” box. Enter how many levels of this factor (3+) you have in the “Number of Levels:” box. Click Add. Click Define.
  3. In the “Repeated Measures” pop-up, move the different levels of your repeated measures variable to the “Within-Subjects Variables:” box. NOTE: you need one column for each level of the factor (e.g., pre, during, and post measurements would each need to be in a separate column).
  4. If you want a plot, click Plots and move the within-subject factor from “Factors:” to “Horizontal Axis:”, then click Add. Select whether you would like a line chart or a bar chart. Click Continue to save your choices.
  5. If you require post-hoc tests or estimated marginal means, make those selections in the “Post Hoc” tab or the “EM Means” tab. For example, to get the pairwise comparisons between the different groups, in the EM Means tab move the repeated measures variable to the “Display Means for:” section and check the “Compare main effects” box.
  6. Click OK to run the test (results will appear in the output window).
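scipy has no built-in one-way repeated measures ANOVA, but the computation is short enough to sketch by hand in Python (a rough cross-check, not a replacement for SPSS’s output, which also reports sphericity corrections). The data below are made up; rows are participants, columns are timepoints.

```python
import numpy as np
from scipy import stats

# Rows = participants, columns = three timepoints (pre, during, post).
data = np.array([
    [30, 27, 22],
    [35, 30, 26],
    [28, 26, 21],
    [40, 34, 30],
    [32, 29, 25],
], dtype=float)

n, k = data.shape
grand = data.mean()

# Partition the total sum of squares into condition, subject, and error parts.
ss_cond  = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_subj  = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_total = ((data - grand) ** 2).sum()
ss_error = ss_total - ss_cond - ss_subj

df_cond, df_error = k - 1, (n - 1) * (k - 1)
f_val = (ss_cond / df_cond) / (ss_error / df_error)
p = stats.f.sf(f_val, df_cond, df_error)
print(f"F({df_cond}, {df_error}) = {f_val:.3f}, p = {p:.4f}")

# Friedman test (non-parametric alternative) if the assumptions fail:
chi2, p_f = stats.friedmanchisquare(*data.T)
```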

A factorial Analysis of Variance (factorial ANOVA) is used to determine whether the means from two or more variables / factors with two or more levels each differ (i.e., main effects), and whether any factors interact (i.e., interactions). If there are two factors, this is sometimes called a “two-way” ANOVA; if there are three factors, this is sometimes called a “three-way” ANOVA.

There are three separate kinds of factorial ANOVA: fully between factorial ANOVA (where both or all factors have levels that are entirely between-group), fully within factorial ANOVA (where both or all factors have levels that are entirely within-group), and mixed factorial ANOVA (where one or more factors are entirely between-group AND one or more factors are entirely within-group). 

Note that all three kinds of factorial ANOVA are parametric tests; if the assumptions of the test are not met, there is no non-parametric equivalent. You could consider transforming your dependent variable in some way and then running the parametric test (and all assumptions) again, though this does not guarantee normality. For a refresher on how to check normality, read the “Explore” procedure on the Descriptive Statistics SPSS LibGuide page. You will need to place your within / repeated variable(s) in the “Dependent List” box, and your between / independent variable(s) in the “Factor List” box. Ideally, your p-values should be > .05, your histograms should approximate a normal distribution (i.e., a standard “bell-shaped curve”), and the points on your Q-Q plots should fall fairly close to the line. Additionally, your boxplots should indicate no outliers in any group.

FULLY BETWEEN FACTORIAL ANOVA

This is an extension of a one-way ANOVA, including two or more factors with two or more levels each that are fully “between” (i.e., each participant / sample is in only one condition). This test assumes that each participant / sample is in only one condition (independence of observations), that there are no outliers, that your dependent variable is approximately normally distributed in each condition (normality), and that each condition has approximately equal variances (homogeneity of variance).

HOW TO RUN A FULLY BETWEEN FACTORIAL ANOVA

  1. Click on Analyze. Select General Linear Model. Select Univariate.
  2. Place your continuous dependent variable in the “Dependent Variable” slot, and place your categorical independent variables in the “Fixed Factor(s)” slot.
  3. If you want a plot, click Plots and move the independent variables from “Factors:” to “Horizontal Axis:”, and/or “Separate Lines”, and/or “Separate Plots”, then click Add. (NOTE: SPSS will only allow you to plot up to 3 variables at once.) Select whether you would like a line chart (best for time series data) or a bar chart (generally the best choice for ANOVA). Click Continue to save your choices.
  4. If you require post-hoc tests or estimated marginal means, make those selections in the “Post Hoc” tab (only for between variables with >2 levels) or the “EM Means” tab. For example, to get the pairwise comparisons between the different groups, in the EM Means tab move the main effect (single independent variable) or interaction (independent variables with a “ * ” between them) variables to the “Display Means for:” section. Check the “Compare main effects” box and the “Compare simple main effects” box. Select “Bonferroni” from the drop-down menu. Click Continue to save your choices.
  5. In Options, you can select multiple other helpful pieces of information, such as descriptive statistics, estimates of effect size, homogeneity tests, and heteroskedasticity tests. Click Continue to save your choices.
  6. Click OK to run the test (results will appear in the output window).
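For a balanced design (equal numbers per cell), the two-way between-subjects ANOVA can also be sketched by hand in Python as a rough cross-check of SPSS’s Univariate output. The 2×2 dataset below is made up, and this sketch assumes equal cell sizes; SPSS’s default Type III sums of squares handle unbalanced designs that this simple decomposition does not.

```python
import numpy as np
from scipy import stats

# cells[i][j] holds scores for level i of factor A and level j of factor B
# (balanced 2x2 design, 4 hypothetical participants per cell).
cells = np.array([
    [[10, 12, 11, 13], [14, 15, 13, 16]],   # A1B1, A1B2
    [[20, 22, 21, 19], [30, 31, 29, 32]],   # A2B1, A2B2
], dtype=float)

a, b, n = cells.shape
grand = cells.mean()
mean_a = cells.mean(axis=(1, 2))      # marginal means of factor A
mean_b = cells.mean(axis=(0, 2))      # marginal means of factor B
mean_cell = cells.mean(axis=2)        # cell means

# Sums of squares for the two main effects, the interaction, and error.
ss_a  = b * n * ((mean_a - grand) ** 2).sum()
ss_b  = a * n * ((mean_b - grand) ** 2).sum()
ss_ab = n * ((mean_cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
ss_error = ((cells - mean_cell[:, :, None]) ** 2).sum()

df_a, df_b = a - 1, b - 1
df_ab, df_err = df_a * df_b, a * b * (n - 1)
mse = ss_error / df_err
f_a, f_b, f_ab = (ss_a / df_a) / mse, (ss_b / df_b) / mse, (ss_ab / df_ab) / mse
p_a  = stats.f.sf(f_a, df_a, df_err)
p_b  = stats.f.sf(f_b, df_b, df_err)
p_ab = stats.f.sf(f_ab, df_ab, df_err)
print(f"A: F = {f_a:.2f}, p = {p_a:.4f}")
print(f"B: F = {f_b:.2f}, p = {p_b:.4f}")
print(f"A*B: F = {f_ab:.2f}, p = {p_ab:.4f}")
```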