Time Series: Generating Functions (VII)

How should we understand the 'definition' of the 'probability generating function'?

Univariate case

If X is a discrete random variable taking values in the non-negative integers {0,1, …}, then the probability generating function of X is defined as [1]

G(z)=\operatorname {E} (z^{X})=\sum _{x=0}^{\infty }p(x)z^{x},

where p is the probability mass function of X. Note that the subscripted notations G_X and p_X are often used to emphasize that these pertain to a particular random variable X, and to its distribution. The power series converges absolutely at least for all complex numbers z with |z| ≤ 1; in many examples the radius of convergence is larger.
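
To make the definition concrete, here is a minimal numerical sketch (my own, not part of the quoted article): it truncates the defining series for a Poisson(λ) variable and compares the result with the known closed-form Poisson PGF e^{λ(z−1)}.

```python
import math

# A minimal numerical sketch (not from the quoted article): truncate the
# series G(z) = E[z^X] = sum_x p(x) z^x, using Poisson(lam) as the example.
def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

def pgf(z, lam, terms=100):
    return sum(poisson_pmf(x, lam) * z**x for x in range(terms))

lam, z = 3.0, 0.7
print(pgf(z, lam))                  # truncated series
print(math.exp(lam * (z - 1.0)))    # known closed-form Poisson PGF; agrees
```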

 

In fact, G_{X}(z) \equiv_{\mathrm{df}} \operatorname{E}(z^X); that is, it is 'defined' by letting the 'expectation operator' E act on z^X for the 'random variable' X. Hence, following the 'definition' of the 'expectation operator'

Univariate discrete random variable, finite case

Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as

\operatorname{E}[X] = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k \;.

Since all probabilities pi add up to one (p1 + p2 + … + pk = 1), the expected value can be viewed as the weighted average, with pi’s being the weights:

\operatorname {E} [X]={\frac {x_{1}p_{1}+x_{2}p_{2}+\dotsb +x_{k}p_{k}}{1}}={\frac {x_{1}p_{1}+x_{2}p_{2}+\dotsb +x_{k}p_{k}}{p_{1}+p_{2}+\dotsb +p_{k}}}\;.

If all outcomes xi are equally likely (that is, p1 = p2 = … = pk), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes xi are not equally probable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than the others. The intuition however remains the same: the expected value of X is what one expects to happen on average.
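
As a small illustration (my own addition, not from the quoted article), the weighted-average formula applied to a fair die and to a hypothetical loaded die:

```python
# The expectation of a finite discrete random variable as a
# probability-weighted average.
xs = [1, 2, 3, 4, 5, 6]            # faces of a fair die
ps = [1/6] * 6                      # equal probabilities, summing to 1
print(sum(x * p for x, p in zip(xs, ps)))         # 3.5, the simple average

# With unequal weights the same formula gives the weighted average:
ps_loaded = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]        # a hypothetical loaded die
print(sum(x * p for x, p in zip(xs, ps_loaded)))  # 2.5
```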

 

, we obtain the form \sum \limits_{n=0}^{\infty} P(X = n) z^n. Since it is tied to the 'expected value', shall we look forward to its 'numerical analysis'? Because G_X(1) = E(1^X) = E(1) = \sum \limits_{n=0}^{\infty} P(X=n) = 1, can we conclude that G_X(z) 'converges' whenever |z| \leq 1?? Indeed: for |z| \leq 1 each term satisfies |P(X=n) z^n| \leq P(X=n), so the series converges absolutely by comparison with \sum_n P(X=n) = 1. It is not that the 'ratio test' offers insufficient grounds!
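
A quick numerical check of this comparison argument (my own sketch, with a Poisson pmf standing in for the distribution):

```python
import cmath, math

# For |z| <= 1, |P(X=n) z^n| <= P(X=n), so partial sums of the absolute
# terms never exceed sum_n P(X=n) = 1.  Poisson(3) is the stand-in example.
lam = 3.0
z = cmath.exp(0.9j)                 # a point on the unit circle, |z| = 1
pmf, total = math.exp(-lam), 0.0    # p(0); then p(n+1) = p(n)*lam/(n+1)
for n in range(100):
    total += abs(pmf * z**n)        # equals p(n) here since |z| = 1
    pmf *= lam / (n + 1)
print(total)                        # tends to 1 and never exceeds it
```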

Ratio test

In mathematics, the ratio test is a test (or “criterion”) for the convergence of a series

\sum_{n=1}^{\infty} a_n ,

where each term is a real or complex number and a_n is nonzero when n is large. The test was first published by Jean le Rond d'Alembert and is sometimes known as d'Alembert's ratio test or as the Cauchy ratio test.[1]

The test

 

[Figure: decision diagram for the ratio test]

The usual form of the test makes use of the limit

L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| . \qquad (1)

The ratio test states that:

  • if L < 1 then the series converges absolutely;
  • if L > 1 then the series is divergent;
  • if L = 1 or the limit fails to exist, then the test is inconclusive, because there exist both convergent and divergent series that satisfy this case.

It is possible to make the ratio test applicable to certain cases where the limit L fails to exist, if limit superior and limit inferior are used. The test criteria can also be refined so that the test is sometimes conclusive even when L = 1. More specifically, let

  R = \limsup \left|\frac{a_{n+1}}{a_n}\right| ,
  r = \liminf \left|\frac{a_{n+1}}{a_n}\right| .

Then the ratio test states that:[2][3]

  • if R < 1, the series converges absolutely;
  • if r > 1, the series diverges;
  • if \left|\frac{a_{n+1}}{a_n}\right| \ge 1 for all large n (regardless of the value of r), the series also diverges; this is because |a_n| is nonzero and increasing and hence a_n does not approach zero;
  • the test is otherwise inconclusive.

If the limit L in (1) exists, we must have L = R = r. So the original ratio test is a weaker version of the refined one.
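
As a rough computational illustration (my own sketch, not a rigorous procedure), one can estimate R and r from the tail of a concrete sequence:

```python
# Scan the tail of |a_{n+1}/a_n| for crude stand-ins for liminf r
# and limsup R.
def ratio_diagnostics(a, tail=50):
    ratios = [abs(a[n + 1] / a[n]) for n in range(len(a) - tail, len(a) - 1)]
    return min(ratios), max(ratios)

# Example: a_n = 1/2^n has ratio exactly 1/2, so R < 1 and the
# series sum a_n converges absolutely.
a = [1 / 2**n for n in range(200)]
print(ratio_diagnostics(a))         # (0.5, 0.5)
```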

……

Proof

 

[Figure: the ratio of adjacent terms in the blue sequence converges to L = 1/2. With r = (L+1)/2 = 3/4, the blue sequence is dominated by the red sequence r^k for all n ≥ 2; the red sequence converges, so the blue sequence does as well.]

Below is a proof of the validity of the original ratio test.

Suppose that L = \lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| < 1. We can then show that the series converges absolutely by showing that its terms will eventually become less than those of a certain convergent geometric series. To do this, let r = \frac{L+1}{2}. Then r is strictly between L and 1, and |a_{n+1}| < r |a_n| for sufficiently large n (say, for all n \geq N). Hence |a_{N+i}| < r^i |a_N| for each i > 0, and so

  \sum_{i=N+1}^{\infty} |a_i| = \sum_{i=1}^{\infty} \left| a_{N+i} \right| < \sum_{i=1}^{\infty} r^i |a_N| = |a_N| \frac{r}{1-r} < \infty .

That is, the series converges absolutely.

On the other hand, if L > 1, then  |a_{n+1}| > |a_{n}| for sufficiently large n, so that the limit of the summands is non-zero. Hence the series diverges.
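
The figure's argument can also be replayed numerically; the sequence a_n = n/2^n below is my own choice of example:

```python
# The term ratio (n+1)/(2n) of a_n = n / 2^n tends to L = 1/2, and with
# r = (L+1)/2 = 3/4 the geometric tail r^i * a_N eventually dominates.
a = [n / 2**n for n in range(1, 60)]   # a[0] is a_1
L, r, N = 0.5, 0.75, 5                 # domination holds past index N
ok = all(a[N - 1 + i] < r**i * a[N - 1] for i in range(1, len(a) - N))
print(ok)                              # True: terms fall below the red curve
```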

───

 

, the infinite 'geometric series' makes it plain and simple!!

Infinite geometric series

An infinite geometric series is an infinite series whose successive terms have a common ratio. Such a series converges if and only if the absolute value of the common ratio is less than one (| r | < 1). Its value can then be computed from the finite sum formulae

\sum _{k=0}^{\infty }ar^{k}=\lim _{n\to \infty }{\sum _{k=0}^{n}ar^{k}}=\lim _{n\to \infty }{\frac {a(1-r^{n+1})}{1-r}}={\frac {a}{1-r}}-\lim _{n\to \infty }{\frac {ar^{n+1}}{1-r}}

 

[Figure: the geometric series 1 + 1/2 + 1/4 + 1/8 + ⋯ converges to 2.]

Since:

r^{n+1}\to 0{\mbox{ as }}n\to \infty {\mbox{ when }}|r|<1.

Then:

\sum _{k=0}^{\infty }ar^{k}={\frac {a}{1-r}}-0={\frac {a}{1-r}}
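
A quick sanity check of the closed form (my addition), with sample values a = 2 and r = 0.3:

```python
# Compare a long partial sum against the closed form a / (1 - r).
a, r = 2.0, 0.3
print(sum(a * r**k for k in range(200)), a / (1 - r))   # both ~2.857142...
```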

For a series containing only even powers of  r,

  \sum _{k=0}^{\infty }ar^{2k}={\frac {a}{1-r^{2}}}

and for odd powers only,

\sum _{k=0}^{\infty }ar^{2k+1}={\frac {ar}{1-r^{2}}}

In cases where the sum does not start at k = 0,

  \sum _{k=m}^{\infty }ar^{k}={\frac {ar^{m}}{1-r}}

The formulae given above are valid only for | r | < 1. The latter formula is valid in every Banach algebra, as long as the norm of r is less than one, and also in the field of p-adic numbers if | r |p < 1. As in the case for a finite sum, we can differentiate to calculate formulae for related sums. For example,

\frac{d}{dr} \sum_{k=0}^{\infty} r^k = \sum_{k=1}^{\infty} k r^{k-1} = \frac{1}{(1-r)^2}

This formula only works for | r | < 1 as well. From this, it follows that, for | r | < 1,

\sum _{k=0}^{\infty }kr^{k}={\frac {r}{\left(1-r\right)^{2}}}\,;\,\sum _{k=0}^{\infty }k^{2}r^{k}={\frac {r\left(1+r\right)}{\left(1-r\right)^{3}}}\,;\,\sum _{k=0}^{\infty }k^{3}r^{k}={\frac {r\left(1+4r+r^{2}\right)}{\left(1-r\right)^{4}}}
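
These three identities are easy to verify numerically; the following check (my addition) truncates each series at 400 terms for r = 0.6:

```python
# Verify the three differentiated geometric sums against their closed forms.
r, n = 0.6, 400
print(sum(k * r**k for k in range(n)), r / (1 - r)**2)              # 3.75
print(sum(k**2 * r**k for k in range(n)), r * (1 + r) / (1 - r)**3) # 15.0
print(sum(k**3 * r**k for k in range(n)),
      r * (1 + 4*r + r**2) / (1 - r)**4)                            # 88.125
```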

Also, the infinite series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is an elementary example of a series that converges absolutely.

It is a geometric series whose first term is 1/2 and whose common ratio is 1/2, so its sum is

{\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+\cdots ={\frac {1/2}{1-(+1/2)}}=1.

The inverse of the above series, 1/2 − 1/4 + 1/8 − 1/16 + ⋯, is a simple example of an alternating series that converges absolutely.

It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is

{\frac {1}{2}}-{\frac {1}{4}}+{\frac {1}{8}}-{\frac {1}{16}}+\cdots ={\frac {1/2}{1-(-1/2)}}={\frac {1}{3}}.

Complex numbers

The summation formula for geometric series remains valid even when the common ratio is a complex number. In this case the condition that the absolute value of r be less than 1 becomes that the modulus of r be less than 1. It is possible to calculate the sums of some non-obvious geometric series. For example, consider the proposition

\sum _{k=0}^{\infty }{\frac {\sin(kx)}{r^{k}}}={\frac {r\sin(x)}{1+r^{2}-2r\cos(x)}}

The proof of this comes from the fact that

  \sin(kx)={\frac {e^{ikx}-e^{-ikx}}{2i}},

which is a consequence of Euler’s formula. Substituting this into the original series gives

\sum _{k=0}^{\infty }{\frac {\sin(kx)}{r^{k}}}={\frac {1}{2i}}\left[\sum _{k=0}^{\infty }\left({\frac {e^{ix}}{r}}\right)^{k}-\sum _{k=0}^{\infty }\left({\frac {e^{-ix}}{r}}\right)^{k}\right].

This is the difference of two geometric series, and so it is a straightforward application of the formula for infinite geometric series that completes the proof.
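
Numerically, the identity is easy to check (my own verification, not from the quoted article); note that the series requires |r| > 1, so that |e^{±ix}/r| < 1 and both geometric series converge:

```python
import math

# Check the sin(kx)/r^k identity against its closed form.
r, x = 2.0, 1.0
series = sum(math.sin(k * x) / r**k for k in range(200))
closed = r * math.sin(x) / (1 + r**2 - 2 * r * math.cos(x))
print(series, closed)               # both ~0.5928...
```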

 

Hence the 'expected value' of this 'random variable' X is written E(X) = G'(1^{-})

Probabilities and expectations

The following properties allow the derivation of various basic quantities related to X:

1. The probability mass function of X is recovered by taking derivatives of G

p(k)=\operatorname {Pr} (X=k)={\frac {G^{(k)}(0)}{k!}}.

2. It follows from Property 1 that if random variables X and Y have probability generating functions that are equal, GX = GY, then pX = pY. That is, if X and Y have identical probability generating functions, then they have identical distributions.

3. The normalization of the probability mass function can be expressed in terms of the generating function by

\operatorname{E}(1) = G(1^{-}) = \sum_{i=0}^{\infty} p(i) = 1.

The expectation of X is given by

  \operatorname {E} \left(X\right)=G'(1^{-}).

More generally, the kth factorial moment, {\textrm {E}}(X(X-1)\cdots (X-k+1)) of X is given by

{\textrm {E}}\left({\frac {X!}{(X-k)!}}\right)=G^{(k)}(1^{-}),\quad k\geq 0.

So the variance of X is given by

\operatorname {Var} (X)=G''(1^{-})+G'(1^{-})-\left[G'(1^{-})\right]^{2}.

4. G_{X}(e^{t})=M_{X}(t) where X is a random variable,  G_{X}(t) is the probability generating function (of X) and  M_{X}(t) is the moment-generating function (of X) .
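
Properties 1-3 can be exercised symbolically; the sketch below (my own, using SymPy with the Poisson PGF as the worked example) recovers the pmf, the mean, and the variance from G:

```python
import sympy as sp

# Poisson PGF as the example distribution.
z, lam = sp.symbols('z lambda', positive=True)
G = sp.exp(lam * (z - 1))

k = 2                                            # Pr(X=k) = G^(k)(0)/k!
print(sp.simplify(sp.diff(G, z, k).subs(z, 0) / sp.factorial(k)))
                                                 # lambda**2*exp(-lambda)/2
G1 = sp.diff(G, z).subs(z, 1)                    # E(X) = G'(1)
G2 = sp.diff(G, z, 2).subs(z, 1)
print(G1)                                        # lambda
print(sp.simplify(G2 + G1 - G1**2))              # Var(X) = lambda
```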

 

, for the rigor of logical argument demands nothing less!!??