
Take the Psychology Statistics Quiz

Boost Your Data Analysis and Research Skills

Difficulty: Moderate
Questions: 20

Welcome to this Psychology Statistics Quiz, where data meets the fascinating world of human behaviour. Designed by Joanna Weib, it's perfect for psychology students or educators seeking an interactive statistics quiz covering data analysis, p-values, and experimental design. Every one of the 20 multiple-choice questions is fully editable in our quiz editor, so you can tailor it to your curriculum or study needs. Once you're done, you might also enjoy the Psychology Knowledge Assessment Quiz or the Statistics Research Methods Quiz. And don't forget to explore more quizzes to deepen your understanding and boost your research confidence!

Which measure of central tendency is most affected by extreme scores in a dataset?
Mode
Mean
Median
Range
The mean includes all values and is therefore influenced by very high or low scores. Median and mode are less sensitive to outliers and the range is a measure of spread, not central tendency.
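This sensitivity is easy to demonstrate with Python's standard library; the scores below are a made-up example with one extreme value added:

```python
import statistics

scores = [4, 5, 5, 6, 7]          # hypothetical dataset
with_outlier = scores + [40]      # same data plus one extreme score

print(statistics.mean(scores))         # 5.4
print(statistics.mean(with_outlier))   # jumps to about 11.17
print(statistics.median(scores))       # 5
print(statistics.median(with_outlier)) # 5.5 - barely moves
```

A single outlier more than doubles the mean while shifting the median by only half a point.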
What does a p-value less than .05 typically indicate in hypothesis testing?
Probability that the null hypothesis is true is less than 5%
Proof that the alternative hypothesis is true
Evidence against the null hypothesis at the 5% significance level
That 95% of the data fall within the confidence interval
A p-value below .05 means the observed data are unlikely under the null hypothesis, so you reject the null at the 5% level. It does not provide the probability that the null hypothesis is true.
Which of the following statistics describes the average distance of scores from the mean?
Mode
Standard deviation
Range
Median
Standard deviation measures how spread out values are around the mean by computing the average distance of each score from the mean. Range is the difference between highest and lowest, and median and mode are measures of central tendency.
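As a quick check, the standard library can compute this directly; the scores are a hypothetical dataset chosen so the population standard deviation comes out to a round number:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores, mean = 5
print(statistics.pstdev(scores))   # population standard deviation: 2.0
print(statistics.stdev(scores))    # sample version (divides by n - 1)
```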
Which statistical test is most appropriate for comparing the means of two independent groups?
Paired samples t-test
One-way ANOVA
Independent samples t-test
Chi-square test of independence
An independent samples t-test compares the means of two distinct groups on a continuous outcome. A paired t-test is for dependent or matched samples, ANOVA for three or more groups, and chi-square for categorical variables.
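A minimal sketch using SciPy (assumed available here); the two groups are made-up scores for two distinct sets of participants:

```python
from scipy import stats

group_a = [21, 24, 23, 27, 25, 22]  # hypothetical scores, group A
group_b = [30, 28, 33, 29, 31, 32]  # hypothetical scores, group B

# Independent samples t-test: two distinct groups, continuous outcome
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```

For matched or repeated measurements you would call `stats.ttest_rel` instead.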
A Pearson correlation coefficient of -0.6 between stress and performance indicates what kind of relationship?
No relationship
A strong negative linear relationship
A weak negative relationship
A moderate positive relationship
The negative sign shows that as stress increases, performance tends to decrease, and a magnitude of 0.6 is generally interpreted as a strong correlation in psychological research. Positive or weak labels do not match this value.
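The pattern can be simulated; this sketch (NumPy and SciPy assumed available) generates hypothetical stress and performance scores whose true correlation is about -0.6:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
stress = rng.normal(50, 10, 100)
# Performance declines as stress rises, plus noise (hypothetical data)
performance = 80 - 0.6 * stress + rng.normal(0, 8, 100)

r, p = stats.pearsonr(stress, performance)
print(f"r = {r:.2f}")  # negative, in the neighbourhood of -0.6
```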
Which statistical test would be most appropriate for comparing the means across three independent groups?
Chi-square test of independence
One-way ANOVA
Repeated-measures ANOVA
Independent samples t-test
One-way ANOVA tests for mean differences across three or more independent groups on a continuous outcome. The independent t-test only compares two groups, and repeated-measures ANOVA is for within-subject designs.
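A short sketch with SciPy (assumed available); the three groups are invented scores for three independent conditions:

```python
from scipy import stats

low = [3, 4, 5, 4, 3]       # hypothetical scores, condition 1
medium = [6, 7, 6, 8, 7]    # condition 2
high = [9, 10, 9, 11, 10]   # condition 3

# One-way ANOVA: mean differences across three independent groups
f, p = stats.f_oneway(low, medium, high)
print(f"F = {f:.1f}, p = {p:.5f}")
```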
In the context of statistical significance, which statement best describes a p-value of .02?
There is a 2% chance the alternative hypothesis is true
The probability of observing data as extreme or more, assuming the null hypothesis is true, is 2%
The null hypothesis is false with 98% certainty
2% of the data fall below the mean
A p-value of .02 means that if the null hypothesis were true, there is a 2% chance of obtaining the observed result or something more extreme. It does not directly give the probability that the hypothesis itself is true.
When analyzing ordinal data, which correlation coefficient is most suitable?
Spearman's rho
Point-biserial correlation
Chi-square correlation
Pearson's r
Spearman's rho is designed for ordinal or rank-ordered data. Pearson's r assumes interval-level data and linear relationships, while point-biserial is for one continuous and one dichotomous variable.
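Because Spearman's rho works on ranks, it suits Likert-style ratings; the two rating vectors below are hypothetical, and SciPy is assumed available:

```python
from scipy import stats

# Hypothetical ordinal ratings on a 1-5 Likert scale
satisfaction = [1, 2, 2, 3, 4, 5, 5]
loyalty = [1, 1, 2, 3, 3, 4, 5]

# Spearman's rho: Pearson correlation computed on the ranks
rho, p = stats.spearmanr(satisfaction, loyalty)
print(f"rho = {rho:.2f}")
```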
In simple linear regression, which variable is considered the predictor?
Residual
Error term
Independent variable
Dependent variable
The independent variable, also known as the predictor, is used to predict or explain variation in the dependent variable. The residual is the difference between observed and predicted values.
If a regression model has an R-squared value of 0.64, how much variance in the dependent variable is explained by the model?
6.4%
64%
36%
16%
R-squared represents the proportion of variance in the dependent variable explained by the predictor(s). An R-squared of 0.64 means 64% of the variance is accounted for by the model.
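R-squared can be computed by hand from the residuals; in this sketch (NumPy assumed available) the hypothetical data are generated so the theoretical R-squared is 0.64, matching the question:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)                 # hypothetical predictor
y = 2.0 * x + rng.normal(0, 1.5, 200)     # signal var 4, noise var 2.25

slope, intercept = np.polyfit(x, y, 1)    # simple linear regression
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)         # unexplained variance
ss_tot = np.sum((y - y.mean()) ** 2)      # total variance
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")           # close to 4 / 6.25 = 0.64
```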
Which type of error occurs when a true null hypothesis is incorrectly rejected?
Type II error
Type III error
Type IV error
Type I error
A Type I error, or false positive, happens when the null hypothesis is true but is rejected in error. A Type II error fails to reject a false null hypothesis.
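The 5% false-positive rate can be seen directly by simulation (NumPy and SciPy assumed available): both groups are drawn from the same population, so every "significant" result is by definition a Type I error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_sims = 2000
false_positives = 0

for _ in range(n_sims):
    # Same distribution for both groups: the null hypothesis is TRUE
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

rate = false_positives / n_sims
print(rate)  # hovers around alpha, i.e. roughly 0.05
```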
Which inferential test would you use to examine the relationship between two categorical variables?
Chi-square test of independence
Independent samples t-test
One-way ANOVA
Pearson correlation
The chi-square test of independence evaluates whether two categorical variables are related. T-tests and ANOVAs compare means of continuous outcomes, and correlations measure linear relationships between continuous variables.
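A sketch with SciPy (assumed available); the 2x2 contingency table below is a hypothetical count of treatment condition against improvement:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = treatment/control, cols = improved/not
table = np.array([[30, 10],
                  [15, 25]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

For a 2x2 table, `dof` is (rows - 1) x (cols - 1) = 1, and SciPy applies Yates' continuity correction by default.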
What is the null hypothesis in a Pearson correlation analysis?
The sample correlation coefficient is positive
The variables have equal variances
The population correlation coefficient equals zero
There is a perfect linear relationship
The null hypothesis for correlation states that there is no linear relationship between two variables in the population (ρ = 0). It does not assume any direction or equality of variances.
When should you use a repeated-measures ANOVA instead of an independent samples ANOVA?
When the sample size is very large
When the dependent variable is categorical
When comparing two different groups
When the same participants are measured under different conditions
Repeated-measures ANOVA is used when the same subjects are observed under multiple conditions or time points. Independent samples ANOVA is for different participants in each group.
In regression analysis, what does a standardized beta coefficient represent?
The proportion of variance explained by the model
The significance level of the predictor
The unstandardized slope of the regression line
The change in the dependent variable, in standard deviations, for a one-standard-deviation change in the predictor
A standardized beta coefficient shows how many standard deviations the dependent variable will change per standard deviation increase in the predictor. This allows comparison across predictors with different units.
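One useful fact: in simple regression the standardized beta equals Pearson's r. The sketch below (NumPy assumed available) z-scores hypothetical data and checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(100, 15, 150)            # hypothetical raw-scale predictor
y = 0.5 * x + rng.normal(0, 10, 150)    # hypothetical outcome

# Standardize both variables, then refit: the slope is now "beta"
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
beta_std = np.polyfit(zx, zy, 1)[0]

r = np.corrcoef(x, y)[0, 1]
print(round(beta_std, 3), round(r, 3))  # identical in simple regression
```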
In multiple regression analysis, what does a variance inflation factor (VIF) above 10 typically indicate?
Multicollinearity among predictors
Heteroscedasticity in residuals
Non-linearity of relationships
Autocorrelation of errors
A VIF above 10 suggests that one predictor is highly linearly correlated with other predictors, indicating multicollinearity. High multicollinearity inflates standard errors and can distort coefficient estimates.
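The definition is VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A sketch with NumPy (assumed available) and hypothetical predictors, one of which is nearly a copy of another:

```python
import numpy as np

def vif(X, j):
    """VIF for column j: regress X[:, j] on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(3)
x1 = rng.normal(0, 1, 100)
x2 = x1 + rng.normal(0, 0.1, 100)  # nearly a copy of x1: collinear
x3 = rng.normal(0, 1, 100)         # independent predictor
X = np.column_stack([x1, x2, x3])

print(vif(X, 0) > 10, vif(X, 2) < 2)  # x1 inflated, x3 near 1
```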
Which test is commonly used to assess the normality of residuals in a regression model?
Levene's test
Bartlett's test
Shapiro-Wilk test
Durbin-Watson test
The Shapiro-Wilk test evaluates whether residuals deviate from a normal distribution. Durbin-Watson tests autocorrelation, Levene's tests equality of variances, and Bartlett's also tests homogeneity of variance.
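A quick sketch with SciPy (assumed available), contrasting residual-like samples drawn from a normal and a clearly skewed distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
normal_resid = rng.normal(0, 1, 200)     # residuals from a well-behaved fit
skewed_resid = rng.exponential(1, 200)   # clearly non-normal "residuals"

_, p_normal = stats.shapiro(normal_resid)
_, p_skewed = stats.shapiro(skewed_resid)
print(p_normal, p_skewed)  # skewed sample yields a tiny p-value
```

A small p-value rejects normality; a large one is consistent with it (though it never proves it).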
When dealing with three related samples that violate the assumption of normality, which non-parametric test should be applied?
Mann-Whitney U test
Wilcoxon signed-rank test
Friedman test
Kruskal-Wallis test
The Friedman test is a non-parametric alternative to repeated-measures ANOVA for three or more related samples. Kruskal-Wallis is for independent groups, and Wilcoxon and Mann-Whitney compare two samples.
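A sketch with SciPy (assumed available); the three lists are hypothetical scores from the same six participants measured at three time points:

```python
from scipy import stats

# Same 6 participants measured under three related conditions
baseline = [5, 6, 4, 5, 6, 5]
week_four = [6, 7, 6, 7, 7, 6]
week_eight = [8, 8, 7, 9, 8, 8]

# Friedman test ranks scores within each participant, then compares ranks
stat, p = stats.friedmanchisquare(baseline, week_four, week_eight)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```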
What does a partial correlation control for in an analysis?
Variance of the control variable removed from the criterion only
Variance of the control variable removed from both the predictor and criterion variables
Variance of the control variable removed from the predictor only
Total variance explained by all predictors
Partial correlation removes the influence of a control variable from both the predictor and outcome before assessing their relationship. Semi-partial (or part) correlation removes it from only one variable.
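One way to compute a partial correlation is to residualize both variables on the control variable and correlate the residuals. A sketch (NumPy and SciPy assumed available) with hypothetical data where x and y are related only through a confound z:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y after removing z from BOTH."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return stats.pearsonr(rx, ry)[0]

rng = np.random.default_rng(5)
z = rng.normal(0, 1, 300)          # confounding variable
x = z + rng.normal(0, 1, 300)
y = z + rng.normal(0, 1, 300)      # x and y related only through z

raw = stats.pearsonr(x, y)[0]
partial = partial_corr(x, y, z)
print(round(raw, 2), round(partial, 2))  # sizeable raw r, near-zero partial
```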
Why are post hoc tests conducted after obtaining a significant F-value in ANOVA?
To test for normality of residuals
To identify which specific group means differ from each other
To calculate p-values for each dependent variable
To increase the overall sample size
A significant omnibus F-value indicates that at least one group mean differs but does not specify which. Post hoc tests allow pairwise comparisons to determine exactly where the differences lie.
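A simple follow-up sketch using Bonferroni-corrected pairwise t-tests (Tukey's HSD is the more common choice, but this keeps the example self-contained); SciPy is assumed available and the three groups are hypothetical:

```python
from itertools import combinations
from scipy import stats

# Hypothetical groups; assume the omnibus ANOVA came back significant
groups = {
    "control": [3, 4, 5, 4, 3, 4],
    "drug_a": [4, 5, 4, 5, 4, 5],
    "drug_b": [8, 9, 8, 10, 9, 8],
}

alpha = 0.05
n_comparisons = 3  # Bonferroni: divide alpha by the number of pairs
for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
    p = stats.ttest_ind(g1, g2).pvalue
    verdict = "differ" if p < alpha / n_comparisons else "no difference"
    print(f"{n1} vs {n2}: p = {p:.4f} -> {verdict}")
```

Only the comparisons involving `drug_b` survive the corrected threshold, pinpointing where the omnibus difference came from.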

Learning Outcomes

  1. Analyse correlations and regressions in psychological data
  2. Interpret p-values and statistical significance accurately
  3. Apply descriptive statistics to behavioural research findings
  4. Identify appropriate statistical tests for psychological studies
  5. Evaluate experimental designs using inferential statistics

Cheat Sheet

  1. Understanding Correlation Coefficients - Ever wondered how two variables dance together? Pearson's r tells you the strength and direction of their dance, from a perfect negative tango at -1 to a joyful positive waltz at +1. An r of -0.65, for instance, means a strong backward step between the variables. Correlation & Regression in Psychology
  2. Interpreting P-Values Correctly - P-values reveal the surprise factor: the probability of seeing data as extreme as yours if the null hypothesis were true. Hitting p < 0.05 is like crossing the finish line for "statistical significance," but it doesn't tell you how big or meaningful the effect really is. Always pair p-values with context, confidence intervals, and effect sizes for a full story. Statistics Organization Speaks Out on P-Values
  3. Applying Descriptive Statistics - Descriptive stats are your best friends when summarizing data: mean shows the average, median the midpoint, mode the most popular, and standard deviation the spread. Together, they turn mountains of numbers into easy-to-grasp nuggets, helping you spot patterns and outliers in a snap. Mastering these tools means turning raw data into clear, compelling stories. Descriptive Statistics Explained
  4. Selecting Appropriate Statistical Tests - Not every question calls for the same statistical detective. T-tests compare two groups, ANOVAs juggle more than two, and chi-square checks categorical puzzles. Pick the right test based on your data type and hypothesis to avoid misleading conclusions - and don't forget to check assumptions like normality and equal variances! Statistical Hypothesis Testing
  5. Evaluating Experimental Designs - Good experiments are like well-built bridges: they stand strong and reliable. Use inferential stats to assess validity (are you measuring what you think you are?) and reliability (would you get the same result again?). Randomization, control groups, and blinding help keep your results honest and chance at bay. Experimental Design Basics
  6. Recognizing the Limitations of P-Values - P-values won't tell you if your hypothesis is true, nor do they gauge effect size or practical importance. They simply flag whether results are unlikely under the null hypothesis. Always interpret p-values with caution, and pair them with confidence intervals and real-world context. Statistics Organization Speaks Out on P-Values
  7. Avoiding Dichotomous Thinking - Science isn't a binary "yes/no" switch; it's more like a dimmer. Labeling results simply as "significant" or "not significant" can hide the shades of gray in your data. Instead, consider effect size, confidence intervals, and the broader research context for a nuanced interpretation. Dichotomous Thinking
  8. Understanding Effect Size - Effect size measures how big the party really is, beyond whether it even happened. Small effects can be statistically significant with huge samples, while dramatic effects in tiny samples might go unnoticed. Knowing the magnitude of your findings helps you judge their real-world impact. Effect Size Explained
  9. Considering Sample Size Impact - With more data points, even tiny effects can become "statistically significant," like finding a needle in a haystack by piling on more hay. Always weigh significance against sample size and practical relevance. Bigger isn't always better - sometimes it's just more of the same. Understanding P-Values & Significance Testing
  10. Ensuring Reproducibility - A one-off result is exciting but may be a fluke. True scientific power lies in reproducibility: repeating experiments and finding the same result. Each failed replication raises the odds that the original was a false positive, so document methods meticulously and share your data! Statistical Reproducibility
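Cheat sheet item 8's point about effect size can be made concrete with Cohen's d, the standardized mean difference. A minimal sketch using only the standard library; the two groups are hypothetical:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation (simple sketch)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

treatment = [14, 15, 16, 15, 17, 16]  # hypothetical scores
control = [12, 13, 12, 14, 13, 12]
print(round(cohens_d(treatment, control), 2))  # a very large effect
```

By common convention, d around 0.2 is small, 0.5 medium, and 0.8 or above large; unlike a p-value, d does not shrink or grow just because the sample gets bigger.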
Powered by: Quiz Maker