Your p-values Are Bogus
People often use a Gaussian to approximate distributions of sample means. This is generally justified by the central limit theorem, which states that the sample mean of an independent and identically distributed sequence of random variables, suitably shifted and scaled, converges in distribution to a normal random variable.[1] In hypothesis testing, we might use this to calculate a p-value, which is then used to drive decision making.
I’m going to show that calculating p-values in this way is actually incorrect, and leads to results that get less accurate as you collect more data! This has substantial implications for those who care about the statistical rigor of their A/B tests, which are often based on Gaussian (normal) approximations.
A Simple Example
Let’s take a very simple example. Let’s say that the prevailing wisdom is that no more than 20% of people like rollerskating. You suspect that the number is in fact much larger, and so you decide to run a statistical test. In this test, you model each person as a Bernoulli random variable with parameter $p$. The null hypothesis is that $p = p_0 = 0.2$. You decide to go out and ask 100 people their opinions on rollerskating.[2]
You begin gathering data. Unbeknownst to you, it is in fact the case that a full 80% of the population enjoys rollerskating. So, as you randomly ask people if they enjoy rollerskating, you end up getting a lot of “yes” responses. Once you’ve gotten 100 responses, you start analyzing the data.
It turns out that you got 74 “yes” responses, and 26 “no” responses. Since you’re a practiced statistician, you know that you can calculate a p-value by finding the probability that a binomial random variable $X$ with parameters $n = 100$ and $p_0 = 0.2$ would generate a value with $X \ge 74$. This probability is just

$$P(X \ge 74) = \sum_{k=74}^{100} \binom{100}{k} (0.2)^k (0.8)^{100-k}.$$
However, you know that you can approximate a binomial distribution with a Gaussian of mean $n p_0$ and variance $n p_0 (1 - p_0)$, so you decide to calculate an approximate p-value,

$$1 - \Phi\!\left(\frac{74 - n p_0}{\sqrt{n p_0 (1 - p_0)}}\right),$$

where $\Phi$ is the standard normal cumulative distribution function.
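For concreteness, both of these numbers can be computed directly with scipy.stats - here is a quick sketch of the calculation just described (the variable names are my own):

from scipy.stats import binom, norm
import numpy as np

n, k, p0 = 100, 74, 0.2
# exact p-value: P(X >= k) for X ~ Binomial(n, p0); sf(k - 1) gives P(X > k - 1) = P(X >= k)
exact_pval = binom.sf(k - 1, n, p0)
# normal approximation with the binomial's mean n*p0 and variance n*p0*(1 - p0)
approx_pval = norm.sf(k, loc=n * p0, scale=np.sqrt(n * p0 * (1 - p0)))
print(exact_pval, approx_pval)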
However, this approximation is actually incorrect, and will give you progressively worse estimates of the true p-value. Let’s observe this in action.
Python Simulation of Data
We simulate data for sample sizes $n$ from $1$ through $1000$, and compute the corresponding exact and approximate p-value at each $n$. We plot the log of the values, since they get very small very quickly.
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import norm, binom

plt.style.use(['classic', 'ggplot'])

p_true = 0.8
n = 1000
data = binom.rvs(1, p_true, size=n)
p0 = 0.2

p_vals = pd.DataFrame(
    index=range(1, n),
    columns=['true p-value', 'normal approx. p-value']
)

for n0 in range(1, n):
    normal_dev = np.sqrt(n0 * p0 * (1 - p0))
    normal_mean = n0 * p0
    k = sum(data[:n0])
    # the "survival function" is 1 - cdf, which is the p-value in our case
    normal_logpval = norm.logsf(k, loc=normal_mean, scale=normal_dev)
    true_logpval = binom.logsf(k=k, n=n0, p=p0)
    p_vals.loc[n0, 'true p-value'] = true_logpval
    p_vals.loc[n0, 'normal approx. p-value'] = normal_logpval

p_vals.replace([-np.inf, np.inf], np.nan).dropna().plot(figsize=(8, 6));
plt.xlabel("Number of Samples")
plt.ylabel("Log-p Value");
We have to drop the infs because after a certain number of samples, the p-value actually gets too small for scipy.stats to calculate; it just returns -np.inf.
The resulting plot tells a shocking tale:
The approximation diverges from the exact value! Seeing this, you begin to weep bitterly. Is the Central Limit Theorem invalid? Has your whole life been a lie? It turns out that the answer to the first is a resounding no, and the second… probably also no. But then what is going on here?
Convergence Is Not Enough
The first thing to note is that, mathematically speaking, the two p-values - call them $p_n$ for the exact p-value after $n$ samples and $\hat{p}_n$ for its normal approximation - do, in fact, converge. That is to say, as we increase the number of samples, their difference approaches zero:

$$\lim_{n \to \infty} \left| p_n - \hat{p}_n \right| = 0.$$
What I’m arguing, then, is that convergence is not enough.
If it were, then we could just approximate the true p-value with 0. That is, we could report a p-value of $0$, and claim that since our approximation is converging to the actual value, it should be taken seriously. Obviously, this should not be taken seriously as an approximation.
Our intuitive sense of “convergence”, the sense that $\hat{p}_n$ is becoming “a better and better approximation of” $p_n$ as we take more samples, corresponds to the percent error converging to zero:

$$\lim_{n \to \infty} \frac{\left| p_n - \hat{p}_n \right|}{p_n} = 0.$$
In terms of asymptotic decay, this is a stronger claim than convergence. Rather than their difference converging to zero, which means it is $o(1)$, we demand that their difference converge to zero faster than $p_n$ itself:

$$\left| p_n - \hat{p}_n \right| = o(p_n).$$
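One way to see the distinction on the simulated data is to look at the ratio of the approximate p-value to the true p-value, which is easiest to compute in log space. Here is a rough sketch, reusing the p_vals frame from the simulation above:

# ratio of the approximate p-value to the true p-value, computed in log space to avoid underflow
finite = p_vals.replace([-np.inf, np.inf], np.nan).dropna().astype(float)
log_ratio = finite['normal approx. p-value'] - finite['true p-value']
ratio = np.exp(log_ratio)
# if the relative error were shrinking to zero, this ratio would approach 1;
# instead it drifts away from 1 as the sample size grows
ratio.plot(figsize=(8, 6))
plt.xlabel("Number of Samples")
plt.ylabel("Approx. p-value / true p-value");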
It would also suffice to have an upper bound on the p-value; that is, if we could say that $p_n \le \hat{p}_n$, so that the true p-value $p_n$ is at worst our approximate value $\hat{p}_n$, and we knew that this held regardless of sample size, then we could report our approximate result knowing that it was at worst a bit conservative. However, as far as I can see, the central limit theorem and other similar convergence results give us no such guarantee.
Implications
What I’ve shown is that for the simple case above, Gaussian approximation is not a strategy that will get you good estimates of the true p-value, especially for large amounts of data. You will underestimate your p-value, and therefore overestimate the strength of the evidence you have against the null hypothesis.
Although A/B testing is a slightly more complex scenario, I suspect that the same problem exists in that realm. A refresher on a typical A/B test scenario: you, as the administrator of the test, care about the difference between two sample means. If the samples are from Bernoulli random variables (a good model of click-through rates), then the true distribution of this difference is the distribution of the difference of (scaled) binomial random variables, which is more difficult to write down and work with. Of course, the Gaussian approximation is simple, since the difference of two Gaussians is again a Gaussian.[3]
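One could, in principle, tabulate the exact null distribution of the difference by convolving the two binomial pmfs. Here is a rough sketch, assuming equal sample sizes, a shared null parameter p0, and a hypothetical observed difference in counts d; it would run into exactly the numerical issues with tiny tail probabilities mentioned in the footnote:

import numpy as np
from scipy.stats import binom

n, p0 = 100, 0.2
support = np.arange(n + 1)
pmf_x = binom.pmf(support, n, p0)  # counts in arm A under the shared null
pmf_y = binom.pmf(support, n, p0)  # counts in arm B under the shared null
# pmf of the difference X - Y on the support -n, ..., n, via convolution with the reversed pmf
pmf_diff = np.convolve(pmf_x, pmf_y[::-1])
diff_support = np.arange(-n, n + 1)
# exact one-sided p-value for a hypothetical observed difference in counts, d
d = 10
exact_pval = pmf_diff[diff_support >= d].sum()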
Most statistical tests are approximate in this way. For example, the $\chi^2$ test for goodness of fit is an approximate test. So what are we to make of the fact that this approximation does not guarantee increasingly valid p-values? Honestly, I don’t know. I’m sure that others have considered this issue, but I’m not familiar with the thinking of the statistical community on it. (As always, please comment if you know something that would help me understand this better.) All I know is that when doing tests like this in the future, I’ll be much more careful about how I report my results.
Afterword: Technical Details
As I said above, the two p-values do, in fact, converge. However, there is an interesting mathematical twist in that the convergence is not guaranteed by the central limit theorem. It’s a bit beside the point, and quite technical, but I found it so interesting that I thought I should write it up.
As I said, this section isn’t essential to my central argument about the insufficiency of simple convergence; it’s more of an interesting aside.
Limitations of the Central Limit Theorem
To understand the problem, we have to do a deep dive into the details of the central limit theorem. This will get technical. The TL;DR is that since our p-values are getting smaller, the CLT doesn’t actually guarantee that they will converge.
Suppose we have a sequence of random variables $X_1, X_2, \ldots$. These would be, in the example above, the Bernoulli random variables that represent individual people’s responses to your question about rollerskates. Suppose that these random variables are independent and identically distributed, with mean $\mu$ and finite variance $\sigma^2$.[4]
Let $\bar{X}_n$ be the sample mean of all the $X_i$ up through $i = n$:

$$\bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i.$$
We want to say what distribution the sample mean converges to. First, we know it’ll converge to something close to the mean, so let’s subtract that off so that it converges to something close to zero. So now we’re considering $\bar{X}_n - \mu$. But we also know that the standard deviation goes down like $1/\sqrt{n}$, so to get it to converge to something stable, we have to multiply by $\sqrt{n}$. So now we’re considering the shifted and scaled sample mean $\sqrt{n}\left(\bar{X}_n - \mu\right)$.
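The scaling by $\sqrt{n}$ is chosen precisely so that the variance stabilizes:

$$\operatorname{Var}\!\left(\sqrt{n}\,\left(\bar{X}_n - \mu\right)\right) = n \operatorname{Var}\!\left(\bar{X}_n\right) = n \cdot \frac{\sigma^2}{n} = \sigma^2.$$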
The central limit theorem states that this converges in distribution to a normal random variable with distribution $\mathcal{N}(0, \sigma^2)$. Notationally, you might see mathematicians write

$$\sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{d} \mathcal{N}(0, \sigma^2).$$
What does it mean that they converge in distribution? It means that, for a fixed region, the areas under the respective curves over that region converge. Note that we have to fix the region to get convergence. Let’s look at some pictures. First, note that we can plot the exact distribution of the variable $\sqrt{n}\left(\bar{X}_n - \mu\right)$; it’s just a binomial random variable, appropriately shifted and scaled. We’ll plot this alongside the normal approximation $\mathcal{N}(0, \sigma^2)$.
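A plot along these lines can be generated with a few lines of code - here is a sketch under the null $p_0 = 0.2$ (the sample size is my choice, and I’ve left out the shading of any particular region):

from scipy.stats import binom, norm
import numpy as np
from matplotlib import pyplot as plt

n0, p0 = 100, 0.2
sigma = np.sqrt(p0 * (1 - p0))
k = np.arange(n0 + 1)
z = np.sqrt(n0) * (k / n0 - p0)   # values taken by sqrt(n) * (sample mean - p0) under the null
spacing = 1 / np.sqrt(n0)         # gap between neighbouring values of z
# draw the exact pmf as a density (probability per unit length) so it is comparable to the pdf
plt.bar(z, binom.pmf(k, n0, p0) / spacing, width=spacing, alpha=0.5, label="exact (scaled binomial)")
x = np.linspace(-4 * sigma, 4 * sigma, 400)
plt.plot(x, norm.pdf(x, scale=sigma), label="normal approximation")
plt.legend();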
The area under the shaded part of the normal converges to the area of the bars in that same shaded region. This is what convergence in distribution means.
Now for the crux. As we gather data, it becomes more and more obvious that our null hypothesis is incorrect - that is, we move further and further out into the tail of the null hypothesis’ distribution for $\sqrt{n}\left(\bar{X}_n - \mu\right)$. This is very intuitive - as we gather more data, we expect our p-value to go down. The p-value is a tail integral of the distribution, so we expect to be moving further and further into the tail of the distribution.
Here’s a gif, where the shaded region represents the p-value that we’re calculating:
As we increase $n$, the area we’re integrating over changes. So we don’t get convergence guarantees from the CLT.
The Berry-Esseen Theorem
It’s worth noting that there is a stronger statement of convergence that covers, in particular, the convergence of the binomial distribution to the corresponding Gaussian. It is called the Berry-Esseen theorem, and it states that the maximum distance between the cumulative distribution functions of the binomial and the corresponding Gaussian is $O(1/\sqrt{n})$. This claim, which is akin to uniform convergence of functions (compare to the pointwise convergence of the CLT), does, in fact, guarantee that our p-values will converge.
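For reference, the usual statement of the bound in the i.i.d. setting is that there is an absolute constant $C$ such that

$$\sup_{x} \left| P\!\left( \frac{\sqrt{n}\,\left(\bar{X}_n - \mu\right)}{\sigma} \le x \right) - \Phi(x) \right| \le \frac{C\,\rho}{\sigma^3 \sqrt{n}},$$

where $\rho = \mathbb{E}\,|X_1 - \mu|^3$ and $\Phi$ is the standard normal cumulative distribution function; the right-hand side is the $O(1/\sqrt{n})$ rate quoted above.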
But, as I’ve said above, this is immaterial, albeit interesting; we know already that the p-values converge, and we also know that this is not enough for us to be reporting one as an approximation of the other.
1. So long as the variance of the distribution being sampled is finite.
2. You should decide this number based on some alternative hypothesis and a power analysis. Also, you should ensure that you are sampling people evenly - going to a park, for example, might bias your sample towards those that enjoy rollerskating.
3. I haven’t done a numerical test on this scenario because the true distribution (the difference between two scaled binomials) is nontrivial to calculate, and numerical issues arise as we calculate such small p-values, which SciPy takes care of for us in the above example. But as I said, I would be unsurprised if our Gaussian-approximated p-values are increasingly poor approximations of the true p-value as we gather more samples.
4. In our case, for a single Bernoulli random variable with parameter $p$, we have $\mu = p$ and $\sigma^2 = p(1 - p)$.