
Module: neurospin.utils.random_threshold

Functions

nipy.neurospin.utils.random_threshold.isolated(XYZ, k=18)

Outputs the indices I of points that are isolated under k-connectivity, given their integer coordinates XYZ (3, n); k must be 6, 18, or 26.
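
Illustrative usage (a minimal sketch, assuming the legacy nipy.neurospin package is importable; the coordinates below are invented for demonstration):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import isolated
>>> # two adjacent voxels and one voxel far away from the others
>>> XYZ = np.array([[0, 1, 10],
...                 [0, 0, 10],
...                 [0, 0, 10]])
>>> I = isolated(XYZ, k=18)   # indices of the isolated points (here, the third voxel)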

nipy.neurospin.utils.random_threshold.null_distribution(n, nsimu)

Sample from the null distribution of the test statistic.
Syntax: S = null_distribution(n, nsimu)
In:
  n <int>      Number of observations
  nsimu <int>  Sample size
Out:
  S (nsimu,)   Sample of the test statistic under the null
Note: This is fairly costly; a simpler alternative is to use the approximate tail probability P(T > 0.65 | H0) = 0.05.
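
Illustrative usage (a minimal sketch, assuming the legacy nipy.neurospin package is importable; the sizes are arbitrary):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import null_distribution
>>> S = null_distribution(1000, 100)   # 100 draws of the statistic for n = 1000 observations
>>> crit = np.percentile(S, 95)        # empirical 5% critical value, cf. the T > 0.65 approximation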

nipy.neurospin.utils.random_threshold.randthresh(Y, K, p=inf, stop=False, verbose=False, varwind=False, knownull=True)

Wrapper for random threshold functions (without connexity constraints).
In:
  Y (n,)           Observations
  K <int>          Some positive integer (lower bound on the number of null hypotheses)
  p <float>        Lp norm
  stop <bool>      Stop when the minimum is attained (saves computation time)
  verbose <bool>   'Chatty' mode
  varwind <bool>   Varying-window variant (vs. fixed window of width K)
  knownull <bool>  Known null distribution (observations assumed Exp(1) under H0)
                   versus unknown (observations assumed Gaussian under H0)
Out: A dictionary D containing the following fields:
  "C" (n-K)         Lp norm of the fluctuation of partial sums about their conditional expectation
  "thresh" <float>  Detection threshold
  "detect" (k,)     Indices of detected activations
  "v" <float>       Estimated null variance (if knownull is False)
Note: Random thresholding is performed only if the null hypothesis of no activations is rejected at the 5% level.
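
Illustrative usage (a minimal sketch; the simulated data and the choice of K are invented for demonstration, and the import assumes the legacy nipy.neurospin package is available):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import randthresh
>>> rng = np.random.RandomState(0)
>>> Y = rng.standard_exponential(1000)   # Exp(1) under H0, matching knownull=True
>>> Y[:20] += 5                          # a few large values standing in for activations
>>> D = randthresh(Y, K=900, knownull=True)
>>> D["thresh"], D["detect"]             # detection threshold and detected indices, per the fields above
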
nipy.neurospin.utils.random_threshold.randthresh_connex(Y, K, XYZ, p=inf, stop=False, verbose=False, varwind=False, knownull=True)

Wrapper for random threshold functions under connexity constraints.
In:
  Y (n,)           Observations
  K <int>          Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)       Voxel coordinates
  p <float>        Lp norm
  stop <bool>      Stop when the minimum is attained (saves computation time)
  verbose <bool>   'Chatty' mode
  varwind <bool>   Varying-window variant (vs. fixed window of width K)
  knownull <bool>  Known null distribution (observations assumed Exp(1) under H0)
                   versus unknown (observations assumed Gaussian under H0)
Out: A dictionary D containing the following fields:
  "C" (n-K)            Lp norm of the fluctuation of partial sums about their conditional expectation
  "thresh" <float>     Detection threshold
  "detect" (ncoeffs,)  Indices of detected voxels
Note: Random thresholding is performed only if the null hypothesis of no activations is rejected at the 5% level.
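
Illustrative usage (a minimal sketch; the toy grid, data, and K below are invented for demonstration):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import randthresh_connex
>>> XYZ = np.indices((10, 10, 10)).reshape(3, -1)   # toy 10x10x10 voxel grid, shape (3, 1000)
>>> rng = np.random.RandomState(1)
>>> Y = rng.standard_exponential(XYZ.shape[1])      # Exp(1) under H0 (knownull=True)
>>> D = randthresh_connex(Y, K=900, XYZ=XYZ, knownull=True)
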
nipy.neurospin.utils.random_threshold.randthresh_fixwind_gaussnull(Y, K, p=inf, stop=False, one_sided=False, verbose=False)

Random threshold with fixed window and Gaussian null distribution.
In:
  Y (n,)            Observations (assumed Gaussian under H0, with unknown variance)
  K <int>           Some positive integer (lower bound on the number of null hypotheses)
  p <float>         Lp norm
  stop <bool>       Stop when the minimum is attained (saves computation time)
  one_sided <bool>  Whether nonzero means are positive only (vs. positive or negative)
Out:
  C (n-K)           Lp norm of the fluctuation of partial sums about their conditional expectation
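
Illustrative usage (a minimal sketch with invented data; per the docstring above, the documented output is the (n-K,) fluctuation vector C):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import randthresh_fixwind_gaussnull
>>> rng = np.random.RandomState(2)
>>> Y = rng.standard_normal(500)   # Gaussian under H0, variance unknown to the procedure
>>> Y[:10] += 4                    # a few positive shifts
>>> out = randthresh_fixwind_gaussnull(Y, K=450, one_sided=True)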

nipy.neurospin.utils.random_threshold.randthresh_fixwind_gaussnull_connex(X, K, XYZ, p=inf, stop=False, verbose=False)

Random threshold with fixed window and Gaussian null distribution, using a connexity constraint on the non-null set.
In:
  X (n,)       Observations (assumed Gaussian under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)   Voxel coordinates
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.randthresh_fixwind_knownull(X, K, p=inf, stop=False, verbose=False)

Random threshold with fixed window and known null distribution.
In:
  X (n,)       Observations (must be Exp(1) under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation
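
Illustrative usage (a minimal sketch with invented data; the observations must already be Exp(1) under H0, e.g. p-values mapped through -log, as in test_stat below):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import randthresh_fixwind_knownull
>>> rng = np.random.RandomState(3)
>>> X = rng.standard_exponential(500)            # Exp(1) under H0
>>> X[:10] += 6                                  # a few large values standing in for signal
>>> C = randthresh_fixwind_knownull(X, K=450)    # (n-K,) fluctuation curve, per the docstring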

nipy.neurospin.utils.random_threshold.randthresh_fixwind_knownull_connex(X, K, XYZ, p=inf, stop=False, verbose=False)

Random threshold with fixed window and known null distribution, using a connexity constraint on the non-null set.
In:
  X (n,)       Observations (must be Exp(1) under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)   Voxel coordinates
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.randthresh_main(Y, K, XYZ=None, p=inf, varwind=False, knownull=True, stop=False, verbose=False)

Wrapper for random threshold functions.
In:
  Y (n,)           Observations
  K <int>          Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)       Voxel coordinates; if given, connexity constraints are used on the non-null set
  p <float>        Lp norm
  varwind <bool>   Varying-window variant (vs. fixed window of width K)
  knownull <bool>  Known null distribution (observations assumed Exp(1) under H0)
                   versus unknown (observations assumed Gaussian under H0)
  stop <bool>      Stop when the minimum is attained (saves computation time)
  verbose <bool>   'Chatty' mode
Out: A dictionary D containing the following fields:
  "C" (n-K)         Lp norm of the fluctuation of partial sums about their conditional expectation
  "thresh" <float>  Detection threshold
  "detect" (k,)     Indices of detected activations
Note: Random thresholding is performed only if the null hypothesis of no activations is rejected at the 5% level.
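
Illustrative usage (a minimal sketch with invented data; with XYZ=None the call uses no connexity constraint, as described above):

>>> import numpy as np
>>> from nipy.neurospin.utils.random_threshold import randthresh_main
>>> XYZ = np.indices((8, 8, 8)).reshape(3, -1)    # toy 8x8x8 grid (512 voxels)
>>> rng = np.random.RandomState(4)
>>> Y = rng.standard_exponential(XYZ.shape[1])    # Exp(1) under H0 (knownull=True)
>>> D = randthresh_main(Y, K=450, XYZ=XYZ, varwind=False, knownull=True)
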
nipy.neurospin.utils.random_threshold.randthresh_varwind_gaussnull(Y, K, p=inf, stop=False, one_sided=False, verbose=False)

Random threshold with varying window and Gaussian null distribution.
In:
  Y (n,)            Observations (assumed Gaussian under H0, with unknown variance)
  K <int>           Some positive integer (lower bound on the number of null hypotheses)
  p <float>         Lp norm
  stop <bool>       Stop when the minimum is attained (saves computation time)
  one_sided <bool>  Whether nonzero means are positive only (vs. positive or negative)
Out:
  C (n-K)           Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.randthresh_varwind_gaussnull_connex(X, K, XYZ, p=inf, stop=False, verbose=False)

Random threshold with varying window and Gaussian null distribution, using a connexity constraint on the non-null set.
In:
  X (n,)       Observations (assumed Gaussian under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)   Voxel coordinates
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.randthresh_varwind_knownull(X, K, p=inf, stop=False, verbose=False)

Random threshold with varying window and known null distribution.
In:
  X (n,)       Observations (Exp(1) under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.randthresh_varwind_knownull_connex(X, K, XYZ, p=inf, stop=False, verbose=False)

Random threshold with varying window and known null distribution, using a connexity constraint on the non-null set.
In:
  X (n,)       Observations (Exp(1) under H0)
  K <int>      Some positive integer (lower bound on the number of null hypotheses)
  XYZ (3, n)   Voxel coordinates
  p <float>    Lp norm
  stop <bool>  Stop when the minimum is attained (saves computation time)
Out:
  C (n-K)      Lp norm of the fluctuation of partial sums about their conditional expectation

nipy.neurospin.utils.random_threshold.test_stat(X, p=inf)

Test statistic for the global null hypothesis that all observations have zero mean.
In:
  X (n,)     X[j] = -log(1 - F(|Y[j]|)), where F is the cdf of |Y[j]| under the null hypothesis
             (must be computed beforehand)
  p <float>  Lp norm (<= inf) to use for computing the test statistic
Out:
  D <float>  Test statistic
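
Illustrative usage (a minimal sketch with invented data; for a standard normal null, 1 - F(|Y|) equals 2 * (1 - Phi(|Y|)), which the scipy call below computes):

>>> import numpy as np
>>> from scipy.stats import norm
>>> from nipy.neurospin.utils.random_threshold import test_stat
>>> rng = np.random.RandomState(5)
>>> Y = rng.standard_normal(1000)
>>> X = -np.log(2 * norm.sf(np.abs(Y)))   # -log(1 - F(|Y|)) for a standard normal null
>>> D = test_stat(X, p=np.inf)            # compare against P(T > 0.65 | H0) = 0.05 (see null_distribution)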