
2019 International AP Statistics Practice Quiz

Sharpen your AP stats skills with MCQs

Difficulty: Moderate
Grade: Grade 12

What is the mean of a dataset?
It is the range between the smallest and largest values.
It is the most frequently occurring value.
It is the arithmetic average of all data values.
It is the middle value when data are arranged in order.
The mean is defined as the arithmetic average of the data, calculated by summing all values and dividing by the number of observations. This measure provides a central value for the dataset.
Which measure of center is least affected by extreme values?
Range
Median
Mean
Mode
The median is the middle value of a dataset and is not easily influenced by outliers or extreme values. In contrast, the mean can be skewed by unusually high or low numbers.
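To see both ideas side by side, here is a minimal Python sketch using the standard library's statistics module (the numbers are made up for illustration):

```python
from statistics import mean, median

data = [10, 12, 13, 14, 15]
with_outlier = data + [90]  # one extreme value

# The mean shifts noticeably once the outlier joins the data...
print(mean(data), mean(with_outlier))      # 12.8 vs about 25.7
# ...while the median barely moves.
print(median(data), median(with_outlier))  # 13 vs 13.5
```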
In a stem-and-leaf plot, what represents the 'stem'?
The trailing digit(s) of the data values.
The leading digit(s) of the data values.
The median value of the dataset.
The frequency of each data value.
In a stem-and-leaf plot, the stem represents the leading digit(s) of the numbers, while the leaf provides the final digit. This method allows quick visualization of the distribution of the data.
A histogram is best used for which type of data?
Quantitative data
Ordinal data
Qualitative data
Categorical data
Histograms are designed to display the distribution of quantitative, continuous data. They help in visualizing the shape, central tendency, and spread of the dataset.
What does a box plot display?
The frequency distribution of the data
The median, quartiles, and potential outliers
Only the mean and standard deviation
The range and mode of the data
A box plot provides a summary of the data by displaying the median, the first and third quartiles, and potential outliers. It is a useful tool for quickly assessing the distribution and spread of the data.
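The pieces behind a box plot can be computed directly with the standard library's quantiles function (hypothetical data; requires Python 3.8+):

```python
from statistics import median, quantiles

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
q1, q2, q3 = quantiles(data, n=4)  # quartiles; q2 equals the median
iqr = q3 - q1
# the usual 1.5 * IQR fences flag potential outliers for the box plot
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
```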
Which of the following best describes a simple random sample?
Only individuals with certain characteristics are chosen.
Every individual is guaranteed to be selected.
Every possible sample of a given size is equally likely to be selected.
The sample is selected from convenient locations.
A simple random sample ensures that each possible sample of a specified size from the population has an equal chance of being chosen. This process minimizes selection bias.
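Python's random.sample is one way to draw such a sample; a quick sketch (the population and seed are made up for reproducibility):

```python
import random

random.seed(42)                   # fixed seed, reproducible illustration
population = list(range(1, 101))  # 100 labeled individuals
sample = random.sample(population, k=10)
# every one of the C(100, 10) possible size-10 subsets is equally likely
```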
In hypothesis testing, what does a p-value represent?
The probability that the sample data occurred by chance.
The probability that the alternative hypothesis is correct.
The probability of making a Type II error.
The probability of observing the given test statistic, or one more extreme, assuming the null hypothesis is true.
The p-value is the probability of obtaining a test statistic at least as extreme as the one calculated, given that the null hypothesis is true. It helps determine the strength of the evidence against the null hypothesis.
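For a z-statistic, that definition translates directly into code; a sketch using NormalDist from the standard library (the observed statistic is a made-up example):

```python
from statistics import NormalDist

z_observed = 1.96          # hypothetical test statistic
std_normal = NormalDist()  # mean 0, sd 1
# two-sided p-value: P(|Z| >= |z_observed|) assuming H0 is true
p_value = 2 * (1 - std_normal.cdf(abs(z_observed)))
# p_value comes out at roughly 0.05, right at the conventional cutoff
```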
What is a Type I error in statistical inference?
Using an incorrect test statistic.
Rejecting a true null hypothesis.
Obtaining a large p-value.
Failing to reject a false null hypothesis.
A Type I error occurs when a true null hypothesis is rejected, often described as a false positive. This error signifies detecting an effect that does not actually exist.
Which test is most appropriate for assessing the relationship between two categorical variables in a two-way table?
t-test
Correlation coefficient
Chi-square test for independence
ANOVA
The chi-square test for independence is designed to evaluate whether there is a significant association between two categorical variables using a contingency table. It compares the observed frequencies to the expected frequencies under the assumption of independence.
In a normal distribution, approximately what percentage of data falls within one standard deviation of the mean?
50%
95%
99%
68%
According to the empirical rule for normal distributions, about 68% of the data lies within one standard deviation of the mean. This rule is a quick reference to understand data dispersion.
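The 68-95-99.7 figures can be checked against the exact normal CDF with a short standard-library sketch:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

def within(k):
    """Probability of landing within k standard deviations of the mean."""
    return std_normal.cdf(k) - std_normal.cdf(-k)

# within(1) is about 0.6827, within(2) about 0.9545, within(3) about 0.9973
```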
Why is random assignment important in an experiment?
To maximize the overall sample size.
To ensure that the treatment groups are equivalent on all factors except for the treatment.
To guarantee that the sample is representative of the population.
To eliminate the need for control groups.
Random assignment helps to distribute any confounding variables equally between the treatment and control groups. This ensures that any observed effect is likely due to the treatment rather than extraneous factors.
What happens to the standard error of the sample mean as the sample size increases?
It decreases.
It remains the same.
It increases.
It becomes zero.
As the sample size increases, the standard error, which is the standard deviation of the sampling distribution of the sample mean, decreases. This indicates that larger samples provide estimates that are closer to the true population mean.
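A small simulation sketch makes the shrinkage visible (the population mean 50 and sd 10 are assumed values, not from the quiz):

```python
import random
from statistics import mean, stdev

random.seed(0)

def se_of_sample_means(n, reps=2000):
    """Empirical standard error: sd of many sample means of size n."""
    means = [mean(random.gauss(50, 10) for _ in range(n))
             for _ in range(reps)]
    return stdev(means)

se_small = se_of_sample_means(25)   # theory predicts 10/sqrt(25)  = 2.0
se_large = se_of_sample_means(100)  # theory predicts 10/sqrt(100) = 1.0
# se_large comes out smaller, matching the 1/sqrt(n) shrinkage
```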
What does the correlation coefficient measure?
The causality between two variables.
The strength and direction of the linear relationship between two variables.
The proportion of variability explained by an independent variable.
The degree of bias in the data sample.
The correlation coefficient measures both the strength and the direction of the linear relationship between two quantitative variables. It does not imply a causal relationship between the variables.
What is the primary purpose of a scatterplot in data analysis?
To compare proportions of categorical data.
To display the frequency distribution of a single variable.
To show the relationship between two quantitative variables.
To illustrate measures of central tendency.
A scatterplot is used to visualize the relationship between two quantitative variables. It helps in identifying patterns, trends, and potential outliers in the data.
Which of the following is an assumption necessary for using the t-distribution to construct confidence intervals for a mean?
The sampling distribution of the sample mean is heavily skewed.
The population standard deviation is known.
The sample is drawn from a nearly normally distributed population.
The sample size must be less than 30.
Using the t-distribution for constructing confidence intervals is appropriate when the sample is from a nearly normal population, particularly with small sample sizes. The population standard deviation is estimated by the sample standard deviation.
A data scientist fits a least squares regression line to predict exam scores from student study hours. What does the slope of the line represent?
The average exam score of all students.
The predicted change in exam score for a one-unit increase in study hours.
The correlation between study hours and exam scores.
The total accumulated study hours.
The slope of a regression line quantifies the expected change in the dependent variable for each one-unit increase in the independent variable. In this case, it represents how much the exam score is predicted to change as study hours increase.
In a double-blind experiment, which method best minimizes both subject and experimenter bias?
Random sampling alone.
Using a placebo control with neither the subject nor the experimenter knowing the treatment assignment.
Conducting an unblinded study with expert oversight.

Increasing the sample size significantly.
Double-blind experiments ensure that neither the participants nor the experimenters are aware of the treatment assignments. This procedure effectively minimizes bias stemming from either party.
A researcher reports a p-value of 0.03 when testing an association between two variables. Which interpretation is most accurate?
The data confirms that the alternative hypothesis is true.
There is a 97% probability that the alternative hypothesis is correct.
There is a 3% chance that the null hypothesis is true.
There is a 3% chance of obtaining the observed association if the null hypothesis were true.
A p-value of 0.03 indicates that there is a 3% probability of observing a result as extreme as, or more extreme than, the one obtained if the null hypothesis were true. It does not directly indicate the probability that the null hypothesis is true or false.
Which scenario best illustrates a confounding variable in an observational study?
Random assignment of participants to different groups.
Using a placebo to control for psychological effects.
A third variable that influences both the independent and dependent variables.
A study with a large and diverse sample.
A confounding variable is an external factor that influences both the independent and dependent variables, potentially skewing the observed relationship. Its presence can lead to incorrect conclusions about causality if not properly controlled.
In a study, if the confidence interval for a population mean does not include the hypothesized value under the null hypothesis, what does this imply about the hypothesis test at the same confidence level?
The sample size was likely too small.
The null hypothesis would be rejected.
The test would have low statistical power.
The null hypothesis would be accepted.
When the hypothesized value is not contained within the confidence interval for the population mean, it suggests that the sample data provides sufficient evidence against the null hypothesis. Therefore, at the corresponding significance level, the null hypothesis would be rejected.
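The duality between a confidence interval and a two-sided test can be sketched with assumed sample numbers (all values hypothetical; z-based interval with sigma treated as known):

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n = 52.0, 10.0, 100      # hypothetical sample summary
mu0 = 49.0                            # hypothesized mean under H0

z_star = NormalDist().inv_cdf(0.975)  # about 1.96 for 95% confidence
half_width = z_star * sigma / sqrt(n)
ci = (xbar - half_width, xbar + half_width)  # roughly (50.04, 53.96)

# mu0 lies outside the interval, so a two-sided test at alpha = 0.05
# would reject H0 -- the two procedures agree
reject_h0 = not (ci[0] <= mu0 <= ci[1])
```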

Study Outcomes

  1. Understand key statistical concepts and terminology.
  2. Apply probability theory to real-world data scenarios.
  3. Analyze data sets using sampling techniques and experimental design.
  4. Interpret graphical representations and summarize data findings.
  5. Evaluate statistical significance and test validity of conclusions.

2019 Int'l AP Stats MCQ Practice Exam Cheat Sheet

  1. Mean, Median, and Mode - These three measures are your go‑to squad for finding the center of any data set. The mean gives an overall average (but can be thrown off by outliers), the median locks down the middle value, and the mode points to the most frequent guest. Need a formula refresher? AP Statistics Formulas - Statistics How To
  2. Standard Deviation and Variance - Here's how you quantify data spread: the sample variance squares each deviation from the sample mean and averages them (s² = Σ(x - x̄)² / (n - 1)), while the standard deviation takes the square root to bring it back to the original units. The bigger the result, the more scattered your data. Crunch the numbers with this handy guide: AP Statistics Formulas - Statistics How To
  3. Empirical Rule - If your data plays nice with a normal curve, about 68% falls within 1 standard deviation, 95% within 2, and a whopping 99.7% within 3. This quick rule of thumb helps you eyeball where most of your values live. Get more deets here: Statistics Final Exam Study Guide: Key Concepts & Formulas - Student Notes
  4. Z-Scores - Z-scores turn raw values into standardized scores by calculating (x - μ) / σ, so you can compare apples and oranges. A positive z-score means you're above the mean, and a negative z-score shows you're below. Find all the essential formulas here: AP Statistics Formula Sheet: Essential Equations for the Exam
  5. Sampling Distributions & Central Limit Theorem - Sampling distributions show how a statistic (like the mean) varies from sample to sample. The Central Limit Theorem then swoops in to tell us that, as sample sizes grow, the distribution of sample means looks more and more like a normal curve. Trust the magic of numbers with this overview: AP Statistics Formulas - Statistics How To
  6. Confidence Intervals - Want a range that likely captures the true population parameter? Use: Sample statistic ± Critical value × Standard error, and voila! A 95% confidence interval means you'd capture that parameter in 95 out of 100 samples. Practice constructing them here: AP Statistics Formulas - Stat Trek
  7. Hypothesis Testing - This is your statistical showdown: state H₀ and H₁, pick a significance level (like α = 0.05), calculate your test statistic, then decide to reject or fail to reject H₀. It's basically a scientific trial by numbers that keeps your conclusions legit. Brush up on each step: AP Statistics Formulas - Statistics How To
  8. Linear Regression - Model the relationship between two variables using ŷ = b₀ + b₁x, where b₁ is your slope and b₀ is your intercept. This equation predicts outcomes and shows how the response is expected to change as the explanatory variable increases. Get your regression gears turning: AP Statistics Formulas - Statistics How To
  9. Chi-Square Tests - Perfect for categorical data, the chi-square test (χ² = Σ[(O - E)² / E]) checks if your observed frequencies match expected ones. It's like a detective making sure your data's categories behave as predicted. Solve those frequency mysteries here: AP Statistics Formulas - Stat Trek
  10. Probability Rules - The Addition Rule (P(A ∪ B) = P(A) + P(B) - P(A ∩ B)) and Multiplication Rule (P(A ∩ B) = P(A) P(B|A)) form your probability power duo. Whether you're combining events or finding joint chances, these formulas are your toolkit for risk assessment. Level up your probability prowess here: AP Statistics Formulas - Statistics How To
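The chi-square recipe from the cheat sheet can be worked through by hand in a few lines; a sketch for a hypothetical 2x2 contingency table (counts invented for illustration):

```python
# chi-square statistic for a hypothetical 2x2 contingency table
table = [[30, 20],
         [10, 40]]
row_totals = [sum(row) for row in table]        # [50, 50]
col_totals = [sum(col) for col in zip(*table)]  # [40, 60]
grand = sum(row_totals)                         # 100

def expected(i, j):
    """Expected count under independence: row total * col total / grand."""
    return row_totals[i] * col_totals[j] / grand

chi_sq = sum((table[i][j] - expected(i, j)) ** 2 / expected(i, j)
             for i in range(2) for j in range(2))
# chi_sq is about 16.67; compare against a chi-square table with 1 df
```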