STEM Notes: Classical Mechanics: The Art of Simulation [Small Tools] VIII — "Big Data" III

Dao De Jing, Chapter 64

What is at rest is easy to hold; what has not yet emerged is easy to plan for. What is brittle is easy to break; what is minute is easy to scatter. Act on things before they come into being; bring order before disorder arises. A tree that fills one's embrace grows from a tiny shoot; a nine-story tower rises from a basket of earth; a journey of a thousand li begins with a single step. Those who force things ruin them; those who grasp things lose them. Hence the sage does not force, and so does not ruin; does not grasp, and so does not lose. In their undertakings, people often fail on the very brink of success. Be as careful at the end as at the beginning, and nothing will fail. Therefore the sage desires not to desire, and does not prize goods that are hard to obtain; learns not to learn, and returns to what the multitude has passed by, so as to assist the natural course of all things without daring to force it.

The concept of 'repetition' (repeat) runs deep. Accumulated steps can carry one a thousand li, hence the Book of Changes says:

Following this course persistently, one arrives at solid ice.

First Six (初六): Treading on hoarfrost, the solid ice will come.
Commentary on the Images (象傳): Treading on hoarfrost before the solid ice: the yin has begun to congeal. Following this course persistently, one arrives at solid ice.

How does one come to appreciate the 'spectacular phenomena' produced by the 'repetition' of 'simple behaviors'?

─── from L4K: PYTHON TURTLE (II)

 

If someone today reads 'treading on hoarfrost, the solid ice will come', might they think of the 'possibility' behind 'one falling leaf heralds autumn'?

For instance, one might ask: can a person who knows the 'law of large numbers' escape the 'gambler's fallacy'??

Or, put another way: how does one decide just how large a 'large number' must be!

Law of large numbers

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the law only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be “balanced” by the others (see the gambler’s fallacy).
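The casino example above can be checked directly. The sketch below, a minimal simulation using only Python's standard-library `random` module, assumes a European (single-zero) wheel with an even-money bet of one unit on red; the function name and the convention that pockets 1–18 stand in for "red" are illustrative choices, not part of any real casino API. The house edge for this bet is 1/37 ≈ 2.7%.

```python
import random

def casino_profit_per_spin(n_spins, seed=7):
    """Simulate n_spins even-money bets of 1 unit on red at a
    European (single-zero) roulette wheel, seen from the casino's
    side.  Of the 37 pockets, 18 win for the player, 19 for the
    house, so the expected casino profit per spin is 1/37."""
    rng = random.Random(seed)
    profit = 0
    for _ in range(n_spins):
        pocket = rng.randrange(37)       # 0..36; treat 1..18 as "red"
        profit += -1 if 1 <= pocket <= 18 else 1
    return profit / n_spins

# Few spins: the casino can easily be behind.  Many spins: the
# average profit settles near the house edge, 1/37 ≈ 0.027.
print(casino_profit_per_spin(100))
print(casino_profit_per_spin(1_000_000))
```

A short run can land anywhere; only the long-run average is pinned down, which is exactly the LLN's point.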

An illustration of the law of large numbers using a particular run of rolls of a single die. As the number of rolls in this run increases, the average of the values of all the results approaches 3.5. While different runs would show a different shape over a small number of throws (at the left), over a large number of rolls (to the right) they would be extremely similar.
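The die-rolling illustration described above is easy to reproduce. Here is a minimal sketch, using only the standard library's `random` module; the function name and the fixed seed are assumptions made for reproducibility.

```python
import random

def running_average_of_die(n_rolls, seed=1):
    """Roll a fair six-sided die n_rolls times and return the
    running average after each roll."""
    rng = random.Random(seed)
    total = 0
    averages = []
    for i in range(1, n_rolls + 1):
        total += rng.randint(1, 6)
        averages.append(total / i)
    return averages

avgs = running_average_of_die(100_000)
# The early averages swing widely; the late ones hug 3.5.
print(avgs[:10])
print(avgs[-1])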

 

Reflecting on what one has learned, perhaps one can simulate it with a 'tool' and decide the '□ ○' for oneself!!

 

Or one may judge whether 'a fragrance drifting ten li'

Diffusion is an example of the law of large numbers, applied to chemistry. Initially, there are solute molecules on the left side of a barrier (purple line) and none on the right. The barrier is removed, and the solute diffuses to fill the whole container.
Top: With a single molecule, the motion appears to be quite random.
Middle: With more molecules, there is clearly a trend where the solute fills the container more and more uniformly, but there are also random fluctuations.
Bottom: With an enormous number of solute molecules (too many to see), the randomness is essentially gone: The solute appears to move smoothly and systematically from high-concentration areas to low-concentration areas. In realistic situations, chemists can describe diffusion as a deterministic macroscopic phenomenon (see Fick’s laws), despite its underlying random nature.
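The three panels above can be imitated with a one-dimensional random walk, an assumption-laden cartoon of diffusion: all particles start in the leftmost cell of a small box with reflecting walls, each takes unbiased ±1 steps, and we ask what fraction ends up in the left half. The function name, box size, and step counts below are illustrative choices, not a standard model.

```python
import random

def diffuse(n_particles, n_steps, n_cells=10, seed=3):
    """Place n_particles in cell 0 of a 1-D box with n_cells cells
    (reflecting walls), let each take n_steps unbiased +/-1 steps,
    and return the fraction ending in the left half of the box."""
    rng = random.Random(seed)
    left = 0
    for _ in range(n_particles):
        pos = 0
        for _ in range(n_steps):
            pos += rng.choice((-1, 1))
            pos = max(0, min(n_cells - 1, pos))  # reflect at the walls
        if pos < n_cells // 2:
            left += 1
    return left / n_particles

# One or ten particles: the outcome looks random.  Tens of thousands:
# the fraction in the left half settles near 1/2, the uniform value.
print(diffuse(10, 200))
print(diffuse(20_000, 200))
```

With a handful of particles the fraction jumps around, as in the top panel; with many it is almost deterministic, as in the bottom panel.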

 

, is after all a 'literary exaggeration'!!??

So one need not fall into the 'logical' misunderstanding of 'deriving' from the 'finite' ── n less than some natural number ── to the 'infinite' ── n → ∞ ──:

Forms

Two different versions of the law of large numbers are described below; they are called the strong law of large numbers, and the weak law of large numbers. Stated for the case where X1, X2, … is an infinite sequence of i.i.d. Lebesgue integrable random variables with expected value E(X1) = E(X2) = … = µ, both versions of the law state that – with virtual certainty – the sample average

\[ \overline{X}_n = \frac{1}{n}(X_1 + \cdots + X_n) \]

converges to the expected value
\[ \overline{X}_n \to \mu \quad \textrm{for} \quad n \to \infty , \]   (law. 1)

(Lebesgue integrability of Xj means that the expected value E(Xj) exists according to Lebesgue integration and is finite.)

An assumption of finite variance Var(X1) = Var(X2) = … = σ² < ∞ is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often used because it makes the proofs easier and shorter.

Mutual independence of the random variables can be replaced by pairwise independence in both versions of the law.[8]

The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables.
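Convergence in probability, the mode asserted by the weak law, can be watched numerically: estimate P(|X̄n − µ| ≥ ε) by repeating the n-roll experiment many times and see the probability shrink as n grows. The sketch below uses fair-die rolls (µ = 3.5); the function name, ε, and the trial counts are illustrative assumptions.

```python
import random

def deviation_probability(n, eps=0.1, trials=2000, seed=5):
    """Estimate P(|mean of n fair-die rolls - 3.5| >= eps) by
    repeating the n-roll experiment `trials` times."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        mean = sum(rng.randint(1, 6) for _ in range(n)) / n
        if abs(mean - 3.5) >= eps:
            bad += 1
    return bad / trials

# The estimated probability of a deviation of at least 0.1
# shrinks toward 0 as n grows -- convergence in probability.
for n in (10, 100, 1000):
    print(n, deviation_probability(n))
```

The strong law asserts more (almost-sure convergence of the whole sample path), which a pointwise probability estimate like this does not distinguish.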

……

Proof of the weak law

Given X1, X2, … an infinite sequence of i.i.d. random variables with finite expected value E(X1) = E(X2) = … = µ < ∞, we are interested in the convergence of the sample average

\[ \overline{X}_n = \tfrac{1}{n}(X_1 + \cdots + X_n). \]

The weak law of large numbers states:
Theorem: \[ \overline{X}_n \ \xrightarrow{P}\ \mu \quad \textrm{when} \ n \to \infty . \]   (law. 2)

Proof using Chebyshev’s inequality assuming finite variance

This proof uses the assumption of finite variance \( \operatorname{Var}(X_i) = \sigma^2 \) (for all \( i \)). The independence of the random variables implies no correlation between them, and we have that

\[ \operatorname{Var}(\overline{X}_n) = \operatorname{Var}\!\left(\tfrac{1}{n}(X_1 + \cdots + X_n)\right) = \frac{1}{n^2}\operatorname{Var}(X_1 + \cdots + X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}. \]

The common mean μ of the sequence is the mean of the sample average:
\[ E(\overline{X}_n) = \mu . \]
Using Chebyshev's inequality on \( \overline{X}_n \) results in
\[ \operatorname{P}\left(\left|\overline{X}_n - \mu\right| \geq \varepsilon\right) \leq \frac{\sigma^2}{n\varepsilon^2}. \]
This may be used to obtain the following:
\[ \operatorname{P}\left(\left|\overline{X}_n - \mu\right| < \varepsilon\right) = 1 - \operatorname{P}\left(\left|\overline{X}_n - \mu\right| \geq \varepsilon\right) \geq 1 - \frac{\sigma^2}{n\varepsilon^2}. \]
As n approaches infinity, the right-hand side approaches 1. And by the definition of convergence in probability, we have obtained
\[ \overline{X}_n \ \xrightarrow{P}\ \mu \quad \textrm{when} \ n \to \infty . \]
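The Chebyshev step of the proof can be checked numerically: compare the empirical tail probability P(|X̄n − µ| ≥ ε) for means of n fair-die rolls against the bound σ²/(nε²). The function name and the particular n, ε, and trial count below are illustrative assumptions; for a fair die, µ = 3.5 and σ² = 35/12.

```python
import random

def chebyshev_check(n=500, eps=0.2, trials=2000, seed=11):
    """Return (empirical P(|mean of n die rolls - mu| >= eps),
    Chebyshev bound sigma^2 / (n * eps^2)) for a fair die."""
    mu, var = 3.5, 35 / 12          # mean and variance of one fair die
    rng = random.Random(seed)
    bad = sum(
        abs(sum(rng.randint(1, 6) for _ in range(n)) / n - mu) >= eps
        for _ in range(trials)
    )
    return bad / trials, var / (n * eps * eps)

empirical, bound = chebyshev_check()
# The empirical tail probability sits (often far) below the bound,
# as Chebyshev's inequality only promises an upper bound.
print(empirical, bound)
```

Chebyshev's inequality is deliberately loose; the empirical probability is typically much smaller than σ²/(nε²), but never larger.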

 

Then how could 'a single test decide a whole life'??!!

If people really were like a 'one-sided die', with no practice able to change their nature, then the rankings would indeed have been fixed from the start★☆