gammainc(z, a=0, b=inf) computes the (generalized) incomplete
gamma function with integration limits [a, b]:

    \Gamma(z,a,b) = \int_a^b t^{z-1} e^{-t} \, dt
The generalized incomplete gamma function reduces to the following special cases when one or both endpoints are fixed:

* \Gamma(z,0,\infty) is the standard ("complete") gamma function, \Gamma(z)
* \Gamma(z,a,\infty) is the "upper" incomplete gamma function, \Gamma(z,a)
* \Gamma(z,0,b) is the "lower" incomplete gamma function, \gamma(z,b).

Of course, we have \Gamma(z,0,x) + \Gamma(z,x,\infty) = \Gamma(z) for all z and x \ge 0.
Note however that some authors reverse the order of the arguments when defining the lower and upper incomplete gamma function, so one should be careful to get the correct definition.
If also given the keyword argument regularized=True, gammainc() computes the “regularized” incomplete gamma function \Gamma(z,a,b)/\Gamma(z).
Examples
We can compare with numerical quadrature to verify that gammainc() computes the integral in the definition:
>>> from mpmath import *
>>> mp.dps = 20
>>> print gammainc(2+3j, 4, 10)
(0.009772126686277051606 - 0.077063730631298989245j)
>>> print quad(lambda t: t**(2+3j-1) * exp(-t), [4, 10])
(0.009772126686277051606 - 0.077063730631298989245j)
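The regularized variant described above simply divides by the complete gamma function. A minimal sketch (the arguments 3 and 2 are arbitrary choices for illustration):

    from mpmath import *
    mp.dps = 15
    # regularized=True should match dividing the ordinary value by gamma(z)
    print(gammainc(3, 2, regularized=True))
    print(gammainc(3, 2) / gamma(3))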
The incomplete gamma functions satisfy simple recurrence relations:
>>> mp.dps = 15
>>> z = 3.5
>>> a = 2
>>> print gammainc(z+1, a), z*gammainc(z,a) + a**z*exp(-a)
10.6013029693353 10.6013029693353
>>> print gammainc(z+1,0,a), z*gammainc(z,0,a) - a**z*exp(-a)
1.03042542723211 1.03042542723211
If z is an integer, the recurrence reduces the incomplete gamma
function to P(a) e^{-a} + Q(b) e^{-b}, where P and Q
are polynomials:
>>> mp.dps = 15
>>> print gammainc(1, 2), exp(-2)
0.135335283236613 0.135335283236613
>>> mp.dps = 50
>>> identify(gammainc(6, 1, 2), ['exp(-1)', 'exp(-2)'])
'(326*exp(-1) + (-872)*exp(-2))'
The incomplete gamma functions reduce to functions such as the exponential integral Ei and the error function for special arguments:
>>> mp.dps = 15
>>> print gammainc(0, 4), -ei(-4)
0.00377935240984891 0.00377935240984891
>>> print gammainc(0.5, 0, 2), sqrt(pi)*erf(sqrt(2))
1.6918067329452 1.6918067329452
Computes the exponential integral or Ei-function, \mathrm{Ei}(x).

The exponential integral is defined as

    \mathrm{Ei}(x) = \int_{-\infty}^{x} \frac{e^t}{t} \, dt

When the integration range includes t = 0, the exponential
integral is interpreted as providing the Cauchy principal value.

For real x, the Ei-function behaves roughly like
\mathrm{Ei}(x) \approx \exp(x) + \log(|x|).
This function should not be confused with the family of related
functions E_n(x) which are also called “exponential
integrals”.
Basic examples
Some basic values and limits are:
>>> from mpmath import *
>>> mp.dps = 15
>>> print ei(0)
-inf
>>> print ei(1)
1.89511781635594
>>> print ei(inf)
+inf
>>> print ei(-inf)
0.0
For x < 0, the defining integral can be evaluated
numerically as a reference:
>>> print ei(-4)
-0.00377935240984891
>>> print quad(lambda t: exp(t)/t, [-inf, -4])
-0.00377935240984891
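For positive x the integrand has a pole at t = 0, and the integral must be interpreted as a Cauchy principal value, as noted above. A rough sketch of that interpretation, cutting out a small symmetric interval around the pole (the values of x and eps are arbitrary illustrations):

    from mpmath import *
    mp.dps = 15
    x = 4
    eps = mpf(10)**-6
    # principal value: sum of the two one-sided integrals, eps -> 0
    pv = quad(lambda t: exp(t)/t, [-inf, -eps]) + quad(lambda t: exp(t)/t, [eps, x])
    print(pv)     # should agree with ei(4) to roughly 1e-6 (the error scales with eps)
    print(ei(x))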
ei() supports complex arguments and arbitrary precision evaluation:
>>> mp.dps = 50
>>> print ei(pi)
10.928374389331410348638445906907535171566338835056
>>> mp.dps = 25
>>> print ei(3+4j)
(-4.154091651642689822535359 + 4.294418620024357476985535j)
Related functions
The exponential integral is closely related to the logarithmic integral. See li() for additional information.
The exponential integral is related to the hyperbolic and trigonometric integrals (see chi(), shi(), ci(), si()) similarly to how the ordinary exponential function is related to the hyperbolic and trigonometric functions:
>>> mp.dps = 15
>>> print ei(3)
9.93383257062542
>>> print chi(3) + shi(3)
9.93383257062542
>>> print ci(3j) - j*si(3j) - pi*j/2
(9.93383257062542 + 0.0j)
Beware that logarithmic corrections, as in the last example above, are required to obtain the correct branch in general. For details, see [1].
The exponential integral is also a special case of the
hypergeometric function \,_2F_2:
>>> z = 0.6
>>> print z*hyper([1,1],[2,2],z) + (ln(z)-ln(1/z))/2 + euler
0.769881289937359
>>> print ei(z)
0.769881289937359
For x large enough, one can instead use the asymptotic expansion ei_as(x) = exp(x)/x * Sum(k!/x^k, (k,0,inf)). The term k!/x^k behaves like exp(f(k)) with f(k) = k*log(k/(x*e)) + log(k)/2, whose extremal point satisfies log(k/x) + 1/(2*k) = 0, i.e. k ~= x; the smallest term of the asymptotic series is therefore roughly k!/x^k ~= e^(-k - 1/2). Requiring this to equal 2^(-prec), one gets x ~= k ~= prec*log(2), so that one should use ei_as(x) only for x > prec*log(2).
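A minimal sketch of this truncation rule (ei_asymptotic below is a hypothetical helper written for illustration, not an mpmath function); at mp.dps = 15, i.e. prec = 53, the value x = 50 comfortably satisfies x > prec*log(2) ~= 37:

    from mpmath import *

    def ei_asymptotic(x, terms=None):
        # truncate Ei(x) ~ (exp(x)/x) * sum(k!/x^k, k >= 0) near its
        # smallest term, which occurs around k ~ x
        x = mpf(x)
        if terms is None:
            terms = int(x)
        s = term = mpf(1)
        for k in range(1, terms):
            term = term * k / x      # the k-th term equals k!/x^k
            s += term
        return exp(x) / x * s

    mp.dps = 15
    print(ei_asymptotic(50))
    print(ei(50))    # the two should agree essentially to working precision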
References
Computes the logarithmic integral or li-function, \mathrm{li}(x), defined by

    \mathrm{li}(x) = \int_0^x \frac{dt}{\log t}

The logarithmic integral has a singularity at x = 1.
Note that there is a second logarithmic integral, the Li function, defined by

    \mathrm{Li}(x) = \int_2^x \frac{dt}{\log t}

This “offset logarithmic integral” can be computed via
li() using the simple identity
\mathrm{Li}(x) = \mathrm{li}(x) - \mathrm{li}(2).
The logarithmic integral should also not be confused with the polylogarithm (also denoted by Li), which is implemented as polylog().
Examples
Some basic values and limits:
>>> from mpmath import *
>>> mp.dps = 30
>>> print li(0)
0.0
>>> print li(1)
-inf
>>> print li(2)
1.04516378011749278484458888919
>>> print findroot(li, 2)
1.45136923488338105028396848589
>>> print li(inf)
+inf
The logarithmic integral can be evaluated for arbitrary complex arguments:
>>> mp.dps = 20
>>> print li(3+4j)
(3.1343755504645775265 + 2.6769247817778742392j)
The logarithmic integral is related to the exponential integral:
>>> print ei(log(3))
2.1635885946671919729
>>> print li(3)
2.1635885946671919729
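The offset logarithmic integral Li mentioned earlier can be obtained through the identity Li(x) = li(x) - li(2); a brief sketch comparing it with its defining integral (x = 10 is an arbitrary choice):

    from mpmath import *
    mp.dps = 15
    def Li(x):
        # offset logarithmic integral via the identity noted above
        return li(x) - li(2)
    print(Li(10))
    print(quad(lambda t: 1/log(t), [2, 10]))   # defining integral; should agree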
The logarithmic integral grows like x/\log(x):
>>> mp.dps = 15
>>> x = 10**100
>>> print x/log(x)
4.34294481903252e+97
>>> print li(x)
4.3619719871407e+97
The prime number theorem states that the number of primes less
than x is asymptotic to \mathrm{li}(x). For example,
it is known that there are exactly 1,925,320,391,606,803,968,923
prime numbers less than 10^{23} [1]. The logarithmic integral
provides a very accurate estimate:
>>> print li(2) + li(10**23)
1.92532039161405e+21
A definite integral is:
>>> print quad(li, [0, 1])
-0.693147180559945
>>> print -ln(2)
-0.693147180559945
References
Computes the cosine integral,

    \mathrm{Ci}(x) = -\int_x^{\infty} \frac{\cos t}{t} \, dt
    = \gamma + \log x + \int_0^x \frac{\cos t - 1}{t} \, dt
Examples
Some values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print ci(0)
-inf
>>> print ci(1)
0.3374039229009681346626462
>>> print ci(pi)
0.07366791204642548599010096
>>> print ci(inf)
0.0
>>> print ci(-inf)
(0.0 + 3.141592653589793238462643j)
>>> print ci(2+3j)
(1.408292501520849518759125 - 2.983617742029605093121118j)
The cosine integral behaves roughly like the sinc function
(see sinc()) for large real x:
>>> print ci(10**10)
-4.875060251748226537857298e-11
>>> print sinc(10**10)
-4.875060250875106915277943e-11
>>> print chop(limit(ci, inf))
0.0
It has infinitely many roots on the positive real axis:
>>> print findroot(ci, 1)
0.6165054856207162337971104
>>> print findroot(ci, 2)
3.384180422551186426397851
We can evaluate the defining integral as a reference:
>>> mp.dps = 15
>>> print -quadosc(lambda t: cos(t)/t, [5, inf], omega=1)
-0.190029749656644
>>> print ci(5)
-0.190029749656644
Some infinite series can be evaluated using the cosine integral:
>>> print nsum(lambda k: (-1)**k/(fac(2*k)*(2*k)), [1,inf])
-0.239811742000565
>>> print ci(1) - euler
-0.239811742000565
Computes the sine integral,

    \mathrm{Si}(x) = \int_0^x \frac{\sin t}{t} \, dt
The sine integral is thus the antiderivative of the sinc function (see sinc()).
Examples
Some values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print si(0)
0.0
>>> print si(1)
0.9460830703671830149413533
>>> print si(-1)
-0.9460830703671830149413533
>>> print si(pi)
1.851937051982466170361053
>>> print si(inf)
1.570796326794896619231322
>>> print si(-inf)
-1.570796326794896619231322
>>> print si(2+3j)
(4.547513889562289219853204 + 1.399196580646054789459839j)
The sine integral approaches \pi/2 for large real x:
>>> print si(10**10)
1.570796326707584656968511
>>> print pi/2
1.570796326794896619231322
We can evaluate the defining integral as a reference:
>>> mp.dps = 15
>>> print quad(sinc, [0, 5])
1.54993124494467
>>> print si(5)
1.54993124494467
Some infinite series can be evaluated using the sine integral:
>>> print nsum(lambda k: (-1)**k/(fac(2*k+1)*(2*k+1)), [0,inf])
0.946083070367183
>>> print si(1)
0.946083070367183
Computes the hyperbolic cosine integral, defined in analogy with the cosine integral (see ci()) as

    \mathrm{Chi}(x) = \gamma + \log x + \int_0^x \frac{\cosh t - 1}{t} \, dt
Some values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print chi(0)
-inf
>>> print chi(1)
0.8378669409802082408946786
>>> print chi(inf)
+inf
>>> print findroot(chi, 0.5)
0.5238225713898644064509583
>>> print chi(2+3j)
(-0.1683628683277204662429321 + 2.625115880451325002151688j)
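As with ci(), the defining integral can be checked numerically. A brief sketch (the subtraction cosh(t) - 1 loses a little accuracy near t = 0, so the last digit or two may differ):

    from mpmath import *
    mp.dps = 15
    x = 2
    print(chi(x))
    print(euler + log(x) + quad(lambda t: (cosh(t) - 1)/t, [0, x]))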
Computes the hyperbolic sine integral, defined in analogy with the sine integral (see si()) as

    \mathrm{Shi}(x) = \int_0^x \frac{\sinh t}{t} \, dt
Some values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print shi(0)
0.0
>>> print shi(1)
1.057250875375728514571842
>>> print shi(-1)
-1.057250875375728514571842
>>> print shi(inf)
+inf
>>> print shi(2+3j)
(-0.1931890762719198291678095 + 2.645432555362369624818525j)
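The defining integral can likewise be evaluated as a reference (a brief sketch; x = 2 is arbitrary):

    from mpmath import *
    mp.dps = 15
    x = 2
    print(shi(x))
    print(quad(lambda t: sinh(t)/t, [0, x]))   # should agree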
Computes the error function, \mathrm{erf}(x). The error
function is the normalized antiderivative of the Gaussian function
\exp(-t^2). More precisely,

    \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2) \, dt
Basic examples
Simple values and limits include:
>>> from mpmath import *
>>> mp.dps = 15
>>> print erf(0)
0.0
>>> print erf(1)
0.842700792949715
>>> print erf(-1)
-0.842700792949715
>>> print erf(inf)
1.0
>>> print erf(-inf)
-1.0
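The defining integral can be checked with numerical quadrature (a quick sketch; the argument 1 is an arbitrary choice):

    from mpmath import *
    mp.dps = 15
    print(erf(1))
    print(2/sqrt(pi) * quad(lambda t: exp(-t**2), [0, 1]))   # should agree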
For large real x, \mathrm{erf}(x) approaches 1 very
rapidly:
>>> print erf(3)
0.999977909503001
>>> print erf(5)
0.999999999998463
The error function is an odd function:
>>> nprint(chop(taylor(erf, 0, 5)))
[0.0, 1.12838, 0.0, -0.376126, 0.0, 0.112838]
erf() implements arbitrary-precision evaluation and supports complex numbers:
>>> mp.dps = 50
>>> print erf(0.5)
0.52049987781304653768274665389196452873645157575796
>>> mp.dps = 25
>>> print erf(1+j)
(1.316151281697947644880271 + 0.1904534692378346862841089j)
Related functions
See also erfc(), which is more accurate for large x,
and erfi(), which gives the antiderivative of
\exp(t^2) (up to a normalization factor).
The Fresnel integrals fresnels() and fresnelc() are also related to the error function.
Computes the complementary error function,
\mathrm{erfc}(x) = 1 - \mathrm{erf}(x).
This function avoids cancellation that occurs when naively
computing the complementary error function as 1-erf(x):
>>> from mpmath import *
>>> mp.dps = 15
>>> print 1 - erf(10)
0.0
>>> print erfc(10)
2.08848758376254e-45
erfc() works accurately even for ludicrously large arguments:
>>> print erfc(10**10)
4.3504398860243e-43429448190325182776
Computes the imaginary error function, \mathrm{erfi}(x).
The imaginary error function is defined in analogy with the
error function, but with a positive sign in the integrand:

    \mathrm{erfi}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(t^2) \, dt

Whereas the error function rapidly converges to 1 as x grows,
the imaginary error function rapidly diverges to infinity.
The functions are related as \mathrm{erfi}(x) = -i \, \mathrm{erf}(ix)
for all complex numbers x.
Examples
Basic values and limits:
>>> from mpmath import *
>>> mp.dps = 15
>>> print erfi(0)
0.0
>>> print erfi(1)
1.65042575879754
>>> print erfi(-1)
-1.65042575879754
>>> print erfi(inf)
+inf
>>> print erfi(-inf)
-inf
Note the symmetry between erf and erfi:
>>> print erfi(3j)
(0.0 + 0.999977909503001j)
>>> print erf(3)
0.999977909503001
>>> print erf(1+2j)
(-0.536643565778565 - 5.04914370344703j)
>>> print erfi(2+1j)
(-5.04914370344703 - 0.536643565778565j)
Possible issues
The current implementation of erfi() is much less efficient and accurate than the one for erf.
Computes the inverse error function, satisfying

    \mathrm{erf}(\mathrm{erfinv}(x)) = x

This function is defined only for -1 \le x \le 1.
Examples
Special values include:
>>> from mpmath import *
>>> mp.dps = 15
>>> print erfinv(0)
0.0
>>> print erfinv(1)
+inf
>>> print erfinv(-1)
-inf
The domain is limited to the standard interval:
>>> erfinv(2)
Traceback (most recent call last):
...
ValueError: erfinv(x) is defined only for -1 <= x <= 1
It is simple to check that erfinv() computes inverse values of erf() as promised:
>>> print erf(erfinv(0.75))
0.75
>>> print erf(erfinv(-0.995))
-0.995
erfinv() supports arbitrary-precision evaluation:
>>> mp.dps = 50
>>> erf(3)
mpf('0.99997790950300141455862722387041767962015229291260075')
>>> erfinv(_)
mpf('3.0')
A definite integral involving the inverse error function:
>>> mp.dps = 15
>>> print quad(erfinv, [0, 1])
0.564189583547756
>>> print 1/sqrt(pi)
0.564189583547756
The inverse error function can be used to generate random numbers with a Gaussian distribution (although this is a relatively inefficient algorithm):
>>> nprint([erfinv(2*rand()-1) for n in range(6)]) # doctest: +SKIP
[-0.586747, 1.10233, -0.376796, 0.926037, -0.708142, -0.732012]
npdf(x, mu=0, sigma=1) evaluates the probability density
function of a normal distribution with mean value \mu
and variance \sigma^2.
Elementary properties of the probability distribution can be verified using numerical integration:
>>> from mpmath import *
>>> mp.dps = 15
>>> print quad(npdf, [-inf, inf])
1.0
>>> print quad(lambda x: npdf(x, 3), [3, inf])
0.5
>>> print quad(lambda x: npdf(x, 3, 2), [3, inf])
0.5
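As a sketch, the values can also be reproduced from the usual Gaussian density formula (the explicit formula below is an assumption of this example, consistent with the definition above):

    from mpmath import *
    mp.dps = 15
    x, mu, sigma = 2, 3, 0.5
    print(npdf(x, mu, sigma))
    # exp(-(x-mu)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi)); should agree
    print(exp(-(x - mu)**2 / (2*sigma**2)) / (sigma*sqrt(2*pi)))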
See also ncdf(), which gives the cumulative distribution.
ncdf(x, mu=0, sigma=1) evaluates the cumulative distribution
function of a normal distribution with mean value \mu
and variance \sigma^2.
See also npdf(), which gives the probability density.
Elementary properties include:
>>> from mpmath import *
>>> mp.dps = 15
>>> print ncdf(pi, mu=pi)
0.5
>>> print ncdf(-inf)
0.0
>>> print ncdf(+inf)
1.0
The cumulative distribution function is the integral of the density function with the same mu and sigma:
>>> mp.dps = 15
>>> print diff(ncdf, 2)
0.053990966513188
>>> print npdf(2)
0.053990966513188
>>> print diff(lambda x: ncdf(x, 1, 0.5), 0)
0.107981933026376
>>> print npdf(0, 1, 0.5)
0.107981933026376
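A sketch relating ncdf() to the error function, assuming the usual convention Phi(x) = (1 + erf((x - mu)/(sigma*sqrt(2))))/2:

    from mpmath import *
    mp.dps = 15
    x, mu, sigma = 2, 1, 0.5
    print(ncdf(x, mu, sigma))
    print((1 + erf((x - mu)/(sigma*sqrt(2))))/2)   # should agree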
Computes the Fresnel sine integral

    S(x) = \int_0^x \sin\left(\frac{\pi t^2}{2}\right) \, dt

Note that some sources define this function
without the normalization factor \pi/2.
Examples
Some basic values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print fresnels(0)
0.0
>>> print fresnels(inf)
0.5
>>> print fresnels(-inf)
-0.5
>>> print fresnels(1)
0.4382591473903547660767567
>>> print fresnels(1+2j)
(36.72546488399143842838788 + 15.58775110440458732748279j)
Comparing with the definition:
>>> print fresnels(3)
0.4963129989673750360976123
>>> print quad(lambda t: sin(pi*t**2/2), [0,3])
0.4963129989673750360976123
Computes the Fresnel cosine integral

    C(x) = \int_0^x \cos\left(\frac{\pi t^2}{2}\right) \, dt

Note that some sources define this function
without the normalization factor \pi/2.
Examples
Some basic values and limits:
>>> from mpmath import *
>>> mp.dps = 25
>>> print fresnelc(0)
0.0
>>> print fresnelc(inf)
0.5
>>> print fresnelc(-inf)
-0.5
>>> print fresnelc(1)
0.7798934003768228294742064
>>> print fresnelc(1+2j)
(16.08787137412548041729489 - 36.22568799288165021578758j)
Comparing with the definition:
>>> print fresnelc(3)
0.6057207892976856295561611
>>> print quad(lambda t: cos(pi*t**2/2), [0,3])
0.6057207892976856295561611