FSSD¶
- class hyppo.kgof.FSSD(p, k, V, null_sim=<hyppo.kgof.fssd.FSSDH0SimCovObs object>, alpha=0.01)¶
Goodness-of-fit test using the Finite-Set Stein Discrepancy (FSSD) statistic and a set of paired test locations. The test statistic is n*FSSD^2. The statistic can be negative because the estimator is unbiased.
\[\begin{split}H_0 &: \text{ the sample follows } p \\ H_A &: \text{ the sample does not follow } p\end{split}\]

\(p\) is specified to the constructor in the form of an UnnormalizedDensity.
Notes
Given a known probability density \(p\) (model) and a sample \(\{ \mathbf{x}_i \}_{i=1}^n \sim q\) where \(q\) is an unknown density, the GoF test tests whether or not the sample \(\{ \mathbf{x}_i \}_{i=1}^n\) is distributed according to a known \(p\).
The implemented test relies on a new test statistic called the Finite-Set Stein Discrepancy (FSSD) [1], which is a discrepancy measure between a density and a sample. Unique features of the new goodness-of-fit test are:
- It makes only a few mild assumptions on the distributions \(p\) and \(q\). The model \(p\) can take almost any form. The normalizer of \(p\) is not assumed known. The test assesses the goodness of \(p\) only through \(\nabla_{\mathbf{x}} \log p(\mathbf{x})\), i.e., the first derivative of the log density.
- The runtime complexity of the full test (both parameter tuning and the actual test) is \(\mathcal{O}(n)\), i.e., linear in the sample size.
- It returns a set of points (features) which indicate where \(p\) fails to fit the data.
The FSSD test requires that the derivative of \(\log p\) exists. For the test to be consistent, a technical condition called the "vanishing boundary" condition is also required: \(\lim_{\|\mathbf{x} \|\to \infty} p(\mathbf{x}) \mathbf{g}(\mathbf{x}) = \mathbf{0}\), where \(\mathbf{g}\) is the so-called Stein witness function, which depends on the kernel and on \(\nabla_{\mathbf{x}} \log p(\mathbf{x})\). For a density \(p\) with support everywhere, e.g., a Gaussian, there is no problem. For a density defined on a domain with a boundary, however, one has to be careful. For example, if \(p\) is a Gamma density defined on the positive orthant of \(\mathbb{R}\), the density itself can still be evaluated at negative points. From the way the Gamma density is written, nothing tells the test that it cannot be evaluated on the negative orthant. Therefore, if \(p\) is Gamma and the observed sample also follows \(p\) (i.e., \(H_0\) is true), the test will still reject \(H_0\): the data do not match the left tail (in the negative region!) of the Gamma. It is necessary to encode the fact that the negative region has zero density into the density itself.
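To make concrete why the unknown normalizer of \(p\) is harmless, here is a minimal NumPy sketch (function names are illustrative, not part of hyppo's API): any additive constant in \(\log p\), including the log-normalizer, differentiates to zero, so the score \(\nabla_{\mathbf{x}} \log p(\mathbf{x})\) is identical for the normalized and the unnormalized density.

```python
import numpy as np

# Minimal sketch (function names are illustrative, not hyppo's API).
# The FSSD test touches p only through its score function grad_x log p(x),
# so the normalizer of p never needs to be known.

def log_unnormalized_normal(x):
    # log p(x) for a standard normal, up to an additive constant:
    # the log-normalizer -d/2 * log(2*pi) is simply dropped.
    return -0.5 * np.sum(x ** 2, axis=-1)

def score_normal(x):
    # grad_x log p(x) = -x; the dropped constant differentiates to zero,
    # so the unnormalized density yields exactly the same score.
    return -x

x = np.array([[1.0, -2.0]])
print(score_normal(x))  # [[-1.  2.]]
```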
Methods Summary

- feature_tensor(X): Compute the feature tensor, which is n x d x J.
- get_H1_mean_variance(X): Calculate the mean and variance under H1 of the test statistic (divided by n).
- statistic(X, return_feature_tensor=False): Compute the test statistic.
- test(X, return_simulated_stats=False): Perform the goodness-of-fit test using an FSSD test statistic and return values computed in a dictionary.
- FSSD.feature_tensor(X)¶
Compute the feature tensor, which is n x d x J. The feature tensor can be used to compute the statistic and the covariance matrix for simulating from the null distribution.
- Parameters
X -- an n x d data numpy array
- Returns
Xi -- an n x d x J numpy array
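A sketch of how such a feature tensor can be formed, assuming a Gaussian kernel \(k(\mathbf{x}, \mathbf{v}) = \exp(-\|\mathbf{x}-\mathbf{v}\|^2 / 2\sigma^2)\) and a standard-normal model \(p\) (the kernel choice, the model, and the \(1/\sqrt{dJ}\) scaling are illustrative assumptions, not hyppo's exact implementation): each entry combines the score of \(p\) with the kernel evaluated at a test location, \(\xi(\mathbf{x}, \mathbf{v}) = \nabla_{\mathbf{x}} \log p(\mathbf{x})\, k(\mathbf{x}, \mathbf{v}) + \nabla_{\mathbf{x}} k(\mathbf{x}, \mathbf{v})\).

```python
import numpy as np

def feature_tensor_sketch(X, V, sigma2=1.0):
    """Hedged sketch of the n x d x J feature tensor for a Gaussian kernel
    and a standard-normal model p. Names and scaling are assumptions."""
    n, d = X.shape
    J = V.shape[0]
    score = -X                                   # grad_x log p(x) for N(0, I)
    diff = X[:, None, :] - V[None, :, :]         # n x J x d
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma2))  # n x J kernel
    dK = -diff / sigma2 * K[:, :, None]          # n x J x d: grad_x k(x, v)
    # xi(x, v) = score(x) * k(x, v) + grad_x k(x, v), stacked over locations
    Xi = score[:, None, :] * K[:, :, None] + dK  # n x J x d
    Xi = np.transpose(Xi, (0, 2, 1))             # reorder to n x d x J
    return Xi / np.sqrt(d * J)                   # assumed normalization

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))   # n = 5 samples in d = 2 dimensions
V = rng.standard_normal((3, 2))   # J = 3 test locations
print(feature_tensor_sketch(X, V).shape)  # (5, 2, 3)
```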
- FSSD.get_H1_mean_variance(X)¶
Calculate the mean and variance under H1 of the test statistic (divided by n).
- Parameters
X -- an n x d data numpy array
- Returns
mean -- the mean of the test statistic under the alternative hypothesis
variance -- the variance of the test statistic under the alternative hypothesis
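Assuming the flattened feature vectors \(\tau(\mathbf{x}_i)\) are available (rows of the reshaped feature tensor), the two quantities can be sketched with the plug-in estimate of the asymptotic H1 variance \(4\,(\mathbb{E}[(\tau^\top \mu)^2] - \|\mu\|^4)\), where \(\mu = \mathbb{E}[\tau]\). Both the variance formula and the names here are assumptions; hyppo's exact estimator may differ.

```python
import numpy as np

def h1_mean_variance_sketch(Tau):
    """Mean and variance of the statistic (divided by n) under H1.
    Tau: n x (d*J) array of flattened feature vectors tau(x_i).
    The variance is the plug-in asymptotic estimate, an assumption."""
    n = Tau.shape[0]
    # Unbiased estimate of FSSD^2 = ||E[tau]||^2 (the statistic divided by n).
    t1 = np.sum(np.mean(Tau, axis=0) ** 2) * (n / (n - 1))
    t2 = np.mean(np.sum(Tau ** 2, axis=1)) / (n - 1)
    mean = t1 - t2
    # Plug-in asymptotic variance: 4 * (E[(tau^T mu)^2] - ||mu||^4).
    mu = np.mean(Tau, axis=0)
    variance = 4.0 * np.mean((Tau @ mu) ** 2) - 4.0 * np.sum(mu ** 2) ** 2
    return mean, variance

rng = np.random.default_rng(0)
mean, variance = h1_mean_variance_sketch(rng.standard_normal((100, 6)) + 0.5)
print(variance >= 0.0)  # True: the empirical plug-in variance is nonnegative
```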
- FSSD.statistic(X, return_feature_tensor=False)¶
Compute the test statistic. The statistic is n*FSSD^2.
- Parameters
X -- an n x d data numpy array
- Returns
stat -- the test statistic, n*FSSD^2
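Given a feature tensor, the unbiased estimate of FSSD^2 averages \(\tau(\mathbf{x}_i)^\top \tau(\mathbf{x}_j)\) over all pairs \(i \neq j\); because the diagonal terms are excluded, the estimate, and hence the statistic, can come out negative. A self-contained sketch (the reshaping and the algebraic shortcut for the off-diagonal average are assumptions about the implementation):

```python
import numpy as np

def fssd_statistic_sketch(Xi):
    """n * FSSD^2 from an n x d x J feature tensor; unbiased, so it can
    be negative when the true FSSD^2 is close to zero."""
    n = Xi.shape[0]
    Tau = Xi.reshape(n, -1)             # n x (d*J) feature vectors tau(x_i)
    # Unbiased estimate of ||E[tau]||^2: average of tau_i^T tau_j over i != j,
    # written as the full-mean term minus the excluded diagonal term.
    t1 = np.sum(np.mean(Tau, axis=0) ** 2) * (n / (n - 1))
    t2 = np.mean(np.sum(Tau ** 2, axis=1)) / (n - 1)
    return n * (t1 - t2)

Xi = np.ones((2, 1, 1))                 # tiny hand-checkable case: n=2, d=J=1
print(fssd_statistic_sketch(Xi))        # 2.0
```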
- FSSD.test(X, return_simulated_stats=False)¶
Perform the goodness-of-fit test using the FSSD test statistic and return the computed values in a dictionary.
- Parameters
X -- an n x d data numpy array
- Returns
results -- a dictionary containing alpha, the p-value, the test statistic, and the null hypothesis rejection status
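The p-value behind such a result can be obtained by simulating the asymptotic null distribution of \(n \cdot \mathrm{FSSD}^2\), a weighted sum \(\sum_i \omega_i (Z_i^2 - 1)\) with \(Z_i \sim \mathcal{N}(0,1)\) and \(\omega_i\) the eigenvalues of the covariance of the feature vectors. This is the role played by the default null_sim, FSSDH0SimCovObs. The sketch below is a hedged illustration of that idea, not hyppo's exact code.

```python
import numpy as np

def null_sim_pvalue_sketch(Tau, observed_stat, n_sim=3000, seed=7):
    """Simulate the asymptotic null of n*FSSD^2 as sum_i omega_i*(Z_i^2 - 1),
    with omega_i the eigenvalues of the sample covariance of the feature
    vectors Tau (n x (d*J)), and return a Monte Carlo p-value."""
    cov = np.atleast_2d(np.cov(Tau, rowvar=False))   # (d*J) x (d*J)
    eigs = np.maximum(np.linalg.eigvalsh(cov), 0.0)  # clip round-off negatives
    rng = np.random.default_rng(seed)
    draws = rng.standard_normal((n_sim, eigs.size)) ** 2 - 1.0  # chi^2(1) - 1
    sims = draws @ eigs                              # simulated null statistics
    return float(np.mean(sims >= observed_stat))     # right-tail p-value

rng = np.random.default_rng(0)
Tau = rng.standard_normal((50, 4))   # stand-in feature vectors
print(null_sim_pvalue_sketch(Tau, observed_stat=1.0))
```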