時間序列︰生成函數‧漸近展開︰白努利 □○《一》

雖然只想說說『漸近展開』之古法

歐拉-麥克勞林求和公式

歐拉-麥克勞林求和公式在1735年由萊昂哈德·歐拉與科林·麥克勞林分別獨立發現,該公式提供了一個聯繫積分與求和的方法,由此可以導出一些漸近展開式。

公式

[1] 設 f(x) 為一至少 k+1 階可微的函數,a, b \in \mathbb{Z},則
\begin{aligned}\sum_{a<n\leq b}f(n)&=\int_{a}^{b}f(t)\,\mathrm{d}t\\&\quad +\sum_{r=0}^{k}\frac{(-1)^{r+1}B_{r+1}}{(r+1)!}\cdot \left(f^{(r)}(b)-f^{(r)}(a)\right)\\&\quad +\frac{(-1)^{k}}{(k+1)!}\int_{a}^{b}\bar{B}_{k+1}(t)\,f^{(k+1)}(t)\,\mathrm{d}t\end{aligned}
其中 \bar{B}_{k+1}(t) 為週期化之白努利多項式(週期白努利函數)。
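此處附上一段 Python 數值檢驗的小草稿(函數名 bernoulli_minus、em_sum_of_powers 為示意所設,非原文所有):依上引公式取 f(x) = x^p、B_1 = -\frac{1}{2} 之約定,當 k ≥ p 時 f^{(k+1)} 恆為零、餘項消失,故可作精確驗算。

from fractions import Fraction
from math import comb, factorial

def bernoulli_minus(m):
    # B_0 .. B_m,採 B_1 = -1/2 之約定,由標準遞迴式求得
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-Fraction(1, n + 1) * sum(comb(n + 1, j) * B[j] for j in range(n)))
    return B

def em_sum_of_powers(p, a, b, k):
    # 上引歐拉-麥克勞林公式之右端,取 f(x) = x^p;k >= p 時餘項為零
    B = bernoulli_minus(k + 1)
    integral = Fraction(b ** (p + 1) - a ** (p + 1), p + 1)
    def deriv(x, r):
        # f 的 r 階導數在 x 處之值
        if r > p:
            return Fraction(0)
        return Fraction(factorial(p), factorial(p - r)) * Fraction(x) ** (p - r)
    correction = sum(Fraction((-1) ** (r + 1)) * B[r + 1] / factorial(r + 1)
                     * (deriv(b, r) - deriv(a, r))
                     for r in range(k + 1))
    return integral + correction

p, a, b = 3, 0, 10
assert em_sum_of_powers(p, a, b, k=p) == sum(n ** p for n in range(a + 1, b + 1))  # 3025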

 

這公式的來歷,實在為難也!並非三百年前已太久,怎知歐拉是否是午夜夢回而得?還是千迴百轉積累至?但要講這個公式,就得從白努利數講起??!!又誰曉雅各布·白努利如何想來!!??

Bernoulli number

In mathematics, the Bernoulli numbers B_n are a sequence of rational numbers with deep connections to number theory. The values of the first few Bernoulli numbers are

B_0 = 1, B_1^\pm = \pm\tfrac{1}{2}, B_2 = \tfrac{1}{6}, B_3 = 0, B_4 = -\tfrac{1}{30}, B_5 = 0, B_6 = \tfrac{1}{42}, B_7 = 0, B_8 = -\tfrac{1}{30}.

The superscript ± is used by this article to designate the two sign conventions for Bernoulli numbers. They differ only in the sign of the n = 1 term:

  • B_n^- are the first Bernoulli numbers (OEIS A027641 / A027642), and is the one prescribed by NIST. In this convention, B_1^- = -\tfrac{1}{2}.
  • B_n^+ are the second Bernoulli numbers (OEIS A164555 / A027642), which are also called the “original Bernoulli numbers”.[1] In this convention, B_1^+ = +\tfrac{1}{2}.

Since B_n = 0 for all odd n > 1, and many formulas only involve even-index Bernoulli numbers, some authors write “B_n” to mean B_{2n}. This article does not follow this notation.

The Bernoulli numbers appear in the Taylor series expansions of the tangent and hyperbolic tangent functions, in formulas for the sum of powers of the first positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.

The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jakob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Kōwa. Seki’s discovery was posthumously published in 1712[2][3] in his work Katsuyo Sampo; Bernoulli’s, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace’s note G on the analytical engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage’s machine.[4] As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.

Sum of powers

Main article: Faulhaber’s formula

Bernoulli numbers feature prominently in the closed form expression of the sum of the mth powers of the first n positive integers. For m, n ≥ 0 define

{\displaystyle S_{m}(n)=\sum _{k=1}^{n}k^{m}=1^{m}+2^{m}+\cdots +n^{m}.}

This expression can always be rewritten as a polynomial in n of degree m + 1. The coefficients of these polynomials are related to the Bernoulli numbers by Bernoulli’s formula:

{\displaystyle S_{m}(n)={\frac {1}{m+1}}\sum _{k=0}^{m}{\binom {m+1}{k}}B_{k}^{+}n^{m+1-k},}

where \binom{m+1}{k} denotes the binomial coefficient.

For example, taking m to be 1 gives the triangular numbers 0, 1, 3, 6, … OEISA000217.

{\displaystyle 1+2+\cdots +n={\frac {1}{2}}\left(B_{0}n^{2}+2B_{1}^{+}n^{1}\right)={\tfrac {1}{2}}\left(n^{2}+n\right).}

Taking m to be 2 gives the square pyramidal numbers 0, 1, 5, 14, … OEISA000330.

{\displaystyle 1^{2}+2^{2}+\cdots +n^{2}={\frac {1}{3}}\left(B_{0}n^{3}+3B_{1}^{+}n^{2}+3B_{2}n^{1}\right)={\tfrac {1}{3}}\left(n^{3}+{\tfrac {3}{2}}n^{2}+{\tfrac {1}{2}}n\right).}

Some authors use the alternate convention for Bernoulli numbers and state Bernoulli’s formula in this way:

{\displaystyle S_{m}(n)={\frac {1}{m+1}}\sum _{k=0}^{m}(-1)^{k}{\binom {m+1}{k}}B_{k}^{-}n^{m+1-k}.}
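以下為一段 Python 小草稿(bernoulli、S、S_alt 皆為此處示意之命名,非原文所有),核對 B⁺ 約定之白努利公式與上列帶 (-1)^k 之 B⁻ 寫法確實給出同一個 S_m(n):

from fractions import Fraction
from math import comb

def bernoulli(m, plus=True):
    # 以標準遞迴求 B_0..B_m;plus=True 時把 B_1 改為 +1/2(即「第二種」約定)
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-Fraction(1, n + 1) * sum(comb(n + 1, j) * B[j] for j in range(n)))
    if plus and m >= 1:
        B[1] = Fraction(1, 2)
    return B

def S(m, n):
    # 上引之白努利公式(B+ 約定)
    B = bernoulli(m, plus=True)
    return Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] * n ** (m + 1 - k)
                                    for k in range(m + 1))

def S_alt(m, n):
    # 帶 (-1)^k 的另一種寫法(B- 約定)
    B = bernoulli(m, plus=False)
    return Fraction(1, m + 1) * sum((-1) ** k * comb(m + 1, k) * B[k] * n ** (m + 1 - k)
                                    for k in range(m + 1))

for m in range(5):
    for n in range(1, 8):
        assert S(m, n) == S_alt(m, n) == sum(k ** m for k in range(1, n + 1))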

Bernoulli’s formula is sometimes called Faulhaber’s formula after Johann Faulhaber who also found remarkable ways to calculate sums of powers.

Faulhaber’s formula was generalized by V. Guo and J. Zeng to a q-analog (Guo & Zeng 2005).

 

且順著歷史之軌跡︰

Faulhaber’s formula

Johann Faulhaber (5 May 1580 – 10 September 1635) was a German mathematician.

Born in Ulm, Faulhaber was a trained weaver who later took the role of a surveyor of the city of Ulm. He collaborated with Johannes Kepler and Ludolph van Ceulen. Besides his work on the fortifications of cities (notably Basel and Frankfurt), Faulhaber built water wheels in his home town and geometrical instruments for the military. Faulhaber made the first publication of Henry Briggs’s Logarithm in Germany. He died in Ulm.

Faulhaber’s major contribution was in calculating the sums of powers of integers. Jacob Bernoulli makes references to Faulhaber in his Ars Conjectandi.

In mathematics, Faulhaber’s formula, named after Johann Faulhaber, expresses the sum of the p-th powers of the first n positive integers

\sum_{k=1}^n k^p = 1^p + 2^p + 3^p + \cdots + n^p

as a (p + 1)th-degree polynomial function of n, the coefficients involving Bernoulli numbers Bj.

The formula says

  \sum_{k=1}^n k^p = {1 \over p+1} \sum_{j=0}^p (-1)^j{p+1 \choose j} B_j n^{p+1-j},\qquad \mbox{where}~B_1 = -\frac{1}{2}.

For example, the case p = 1 is

{\displaystyle 1+2+3+\cdots +n={1 \over 2}\sum _{j=0}^{1}(-1)^{j}{2 \choose j}B_{j}n^{2-j}}

{\displaystyle ={1 \over 2}\left(B_{0}n^{2}-2B_{1}n\right)={1 \over 2}\left(n^{2}+n\right).}

Faulhaber himself did not know the formula in this form, but only computed the first seventeen polynomials; the general form was established with the discovery of the Bernoulli numbers (see History section below). The derivation of Faulhaber’s formula is available in The Book of Numbers by John Horton Conway and Richard K. Guy.[1]

There is also a similar (but somehow simpler) expression: using the idea of telescoping and the binomial theorem, one gets Pascal’s identity:[2]

{\displaystyle (n+1)^{k+1}-1=\sum _{m=1}^{n}\left((m+1)^{k+1}-m^{k+1}\right)}

  {\displaystyle =\sum _{p=0}^{k}{\binom {k+1}{p}}(1^{p}+2^{p}+\dots +n^{p})}.

This in particular yields the examples below, e.g., take k = 1 to get the first example.

History

Faulhaber’s formula is also called Bernoulli’s formula. Faulhaber did not know the properties of the coefficients discovered by Bernoulli. Rather, he knew at least the first 17 cases, as well as the existence of the Faulhaber polynomials for odd powers described above.[3]

A rigorous proof of these formulas and his assertion that such formulas would exist for all odd powers took until Carl Jacobi (1834).

 

來趟生成函數應用之旅吧。

【註︰推導練習】

{(n+1)}^{k+1} - 1 = (2^{k+1} - 1) + (3^{k+1} - 2^{k+1}) + ( \cdots ) + \left( {(n+1)}^{k+1} - n^{k+1} \right)

= \sum \limits_{m=1}^{n} \left( {(m+1)}^{k+1} - m^{k+1} \right)

依據二項式定理

{(m+1)}^{k+1} - m^{k+1} = \sum \limits_{p=0}^{k+1} \binom{k+1}{p} m^p - m^{k+1}

= \sum \limits_{p=0}^{k} \binom{k+1}{p} m^p

\therefore {(n+1)}^{k+1} - 1 = \sum \limits_{m=1}^{n} \sum \limits_{p=0}^{k} \binom{k+1}{p} m^p

= \sum \limits_{p=0}^{k} \sum \limits_{m=1}^{n} \binom{k+1}{p} m^p

= \sum \limits_{p=0}^{k} \binom{k+1}{p} (1^p + 2^p + \cdots + n^p)
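上述望遠鏡式(telescoping)恆等式可用幾行 Python 逐一驗證(power_sum 為示意用之函數名,非原文所有):

from math import comb

def power_sum(p, n):
    # S_p(n) = 1^p + 2^p + ... + n^p
    return sum(k ** p for k in range(1, n + 1))

# Pascal 恆等式:(n+1)^(k+1) - 1 == sum_{p=0}^{k} C(k+1, p) * S_p(n)
for k in range(6):
    for n in range(1, 10):
        lhs = (n + 1) ** (k + 1) - 1
        rhs = sum(comb(k + 1, p) * power_sum(p, n) for p in range(k + 1))
        assert lhs == rhs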

 

 

 

 

 

 

 

 

 

 

時間序列︰生成函數‧漸近展開︰無限大等級 V

若問一個人從山腳走到山頂,是否他經過了此山的所有高度?可能有人會說題意模糊難於論斷。也可能有人依據常理常義衡量講當然如此。想那山道蜿蜒是『連續』的,人腳步伐落處卻是『離散』的,固然『連續』包含著『離散』!!然而『離散』能近似『連續』耶??所謂天衣無縫,實數方才完備乎!!

Completeness of the real numbers

Intuitively, completeness implies that there are not any “gaps” (in Dedekind’s terminology) or “missing points” in the real number line. This contrasts with the rational numbers, whose corresponding number line has a “gap” at each irrational value. In the decimal number system, completeness is equivalent to the statement that any infinite string of decimal digits is actually a decimal representation for some real number.

Depending on the construction of the real numbers used, completeness may take the form of an axiom (the completeness axiom), or may be a theorem proven from the construction. There are many equivalent forms of completeness, the most prominent being Dedekind completeness and Cauchy completeness (completeness as a metric space).

 

因其完備,對『連續函數』而言,介值定理不得不然嗎!!??

Intermediate value theorem

In mathematical analysis, the intermediate value theorem states that if a continuous function, f, with an interval, [a, b], as its domain, takes values f(a) and f(b) at each end of the interval, then it also takes any value between f(a) and f(b) at some point within the interval.

This has two important corollaries: 1) If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano’s theorem).[1] 2) The image of a continuous function over an interval is itself an interval.

Intermediate value theorem: Let f be a continuous function defined on [a, b] and let s be a number with f(a) < s < f(b). Then there exists at least one x with f(x) = s.

 

奈何證明比定理更難了解呢??!!

Proof

The theorem may be proved as a consequence of the completeness property of the real numbers as follows:[2]

We shall prove the first case,  {\displaystyle f(a)<u<f(b)}. The second case is similar.

Let  S be the set of all  x\in [a,b] such that  {\displaystyle f(x)<u}. Then  S is non-empty since  a is an element of  S, and  S is bounded above by  b. Hence, by completeness, the supremum  {\displaystyle c=\sup S} exists. That is,  c is the lowest number that is greater than or equal to every member of  S. We claim that  {\displaystyle f(c)=u}.

Fix some  \varepsilon >0. Since  f is continuous, there is a  \delta >0 such that {\displaystyle {\Big |}f(x)-f(c){\Big |}<\varepsilon } whenever {\displaystyle |x-c|<\delta }. This means that

  {\displaystyle f(x)-\varepsilon <f(c)<f(x)+\varepsilon }

for all  {\displaystyle x\in (c-\delta ,c+\delta )}. By the properties of the supremum, there exists  {\displaystyle a^{*}\in (c-\delta ,c]} that is contained in S, so that for that  a^{*}

  {\displaystyle f(c)<f(a^{*})+\varepsilon \leq u+\varepsilon }.

Choose  {\displaystyle a^{**}\in (c,c+\delta )}, which (since  c=\sup S) is not contained in  S, so we have

  {\displaystyle f(c)>f(a^{**})-\varepsilon \geq u-\varepsilon }.

Both inequalities

{\displaystyle u-\varepsilon <f(c)<u+\varepsilon }

are valid for all  \varepsilon >0, from which we deduce  {\displaystyle f(c)=u} as the only possible value, as stated.

The intermediate value theorem is an easy consequence of the basic properties of connected sets: the preservation of connectedness under continuous functions and the characterization of connected subsets of ℝ as intervals (see below for details and alternate proof). The latter characterization is ultimately a consequence of the least-upper-bound property of the real numbers.

The intermediate value theorem can also be proved using the methods of non-standard analysis, which places “intuitive” arguments involving infinitesimals on a rigorous footing. (See the article: non-standard calculus.)

 

且列直觀論證為比較︰

Intermediate value theorem

As another illustration of the power of Robinson’s approach, we present a short proof of the intermediate value theorem (Bolzano’s theorem) using infinitesimals.

Let f be a continuous function on [a,b] such that f(a)<0 while f(b)>0. Then there exists a point c in [a,b] such that f(c)=0.

The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a,b] into N intervals of equal length, with partition points x_i as i runs from 0 to N. Consider the collection I of indices such that f(x_i)>0. Let i_0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set). Then the real number

c={\mathrm {st}}(x_{{i_{0}}})

is the desired zero of f. Such a proof reduces the quantifier complexity of a standard proof of the IVT.
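下面以 Python 寫個有限版的示意草稿(approx_zero 為此處假設之命名,非原文所有):把『超有限分割』換成有限但很細的等分,取第一個使 f(x_i) > 0 的格點,它與真零點至多差一個步長。

def approx_zero(f, a, b, N=10 ** 5):
    # 將 [a, b] 等分為 N 段,傳回第一個使 f 轉正的格點(有限版的超有限分割論證)
    h = (b - a) / N
    for i in range(N + 1):
        x = a + i * h
        if f(x) > 0:
            return x
    return None

# f(0) < 0 < f(2),零點為 sqrt(2) ≈ 1.41421
print(approx_zero(lambda x: x * x - 2, 0.0, 2.0))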

 

條條大路通羅馬,讀者自己判斷哩。

註︰

(圖:XDF 深空影像、宇宙膨脹與宇宙歷史示意、哈伯太空望遠鏡)

(圖:數學王子 高斯)

(圖:底函數 floor 與頂函數 ceiling)

(圖:歐拉 \gamma 推導)

(圖:軟體語言中的 INT(x) 函數)

(圖:超整數 hyperinteger)

(圖:0^0 未定式 \lim \limits_{x \to0^+} 0^x = 0)

(圖:0/0 未定式 \lim \limits_{x \to 0} \frac{x}{x^3} = \infty)

(圖:0/0 未定式 \lim \limits_{x \to 0} \frac{\sin(x)}{x} = 1)

莊子‧雜篇‧天下

惠施多方,其書五車,其道舛駁,其言也不中。厤物之意,曰:『至大無外,謂之大一﹔至小無內,謂之小一。無厚,不可積也,其大千里。天與地卑,山與澤平。日方中方睨,物方生方死。大同而與小同異,此之謂【小同異】﹔萬物畢同畢異,此之謂【大同異】。南方無窮而有窮。今日適越而昔來。連環可解也。我知天之中央,燕之北、越之南是也。泛愛萬物,天地一體也 。』……

一九九零年發射的『哈伯太空望遠鏡』 HST Hubble Space Telescope 是以美國著名的天文學家『愛德溫‧鮑威爾‧哈伯』 Edwin Powell Hubble 為名,是一架在地球軌道上的望遠鏡。由於它位於地球大氣層之上,因此獲得了地上望遠鏡所沒有的好處:影像不受大氣湍流的影響、視相度極好,更無大氣散射造成的背景光干擾,甚至能觀測會被臭氧層吸收的紫外線。『哈伯太空望遠鏡』彌補了地面觀測的不足,幫助天文學家『理解』和『解答』了許多天文學上的『基本問題』,使得人類對『宇宙緣起』有了更深的『認識』。

『約翰‧卡爾‧弗里德里希‧高斯』 Johann Karl Friedrich Gauß 【Gauss】是德國著名數學家、物理學家、天文學家和大地測量學家,生於布倫瑞克,卒於哥廷根。高斯被認為是歷史上最重要的數學家之一,而且有『數學王子』的美譽。一八零八年,在高斯的數學巨著《算術研究》 Disquisitiones Arithmeticae 首度出現了一個形式符號 [x] 它表示等於或小於實數 x 的『最大整數』,也就是說 x - 1 <  [x] \leq x。今天這個『高斯符號』又稱之為『底函數』  floor function floor(x) = \lfloor x\rfloor ,與另一『頂函數』 ── 是指不小於實數 x 的『最小整數』 ── ceiling functions ceiling(x) = \lceil x \rceil 成為一對,經常出現於『數學』和『計算機科學』之中。這個『高斯符號』有什麼重要的嗎?通常一個好的『符號』能使人清晰『表達』複雜和困難的『概念』,而且讓人容易『理解』所說的『內容』,因此是十分重要的啊!!

舉例來說,歐拉研究過『調和級數』 harmonic series  \sum \limits_{k=1}^n \frac{1}{k} 和『自然對數』 natural logarithm  \ln(a)=\int_1^a \frac{1}{x}\,dx 之間的關係,雖然這兩者都是『發散的』 ──  值為無限大 \infty ── 它們的『差值』卻是一個叫做『歐拉-馬歇羅尼常數』的 \gamma 值。它可以定義如下
\gamma = \lim \limits_{n \rightarrow \infty } \left( \sum \limits_{k=1}^n \frac{1}{k} - \ln(n) \right)
=\int_1^\infty\left({1\over\lfloor x\rfloor}-{1\over x}\right)\,dx.
,計算後得到
\gamma = \sum \limits_{k=2}^\infty (-1)^k \frac{ \left \lfloor \log_2 k \right \rfloor}{k}
= \tfrac12-\tfrac13 + 2\left(\tfrac14 - \tfrac15 + \tfrac16 - \tfrac17\right) + 3\left(\tfrac18 - \dots - \tfrac1{15}\right) + \dots
= 0.57721 56649 \cdots

。 對一個不是『整數』的實數 x,『高斯函數』也可以表示為

\lfloor x\rfloor = x - \frac{1}{2} + \frac{1}{\pi} \sum \limits_{k=1}^\infty \frac{\sin(2 \pi k x)}{k}

。因此說『超實數系』裡也有『超整數』 hyperinteger 這就一點也不奇怪了吧!如果只從『形式定義』上講,一個『超整數』就是一個『超實數』的『整數部份』,也就是說

[r^{*}] = [ st(r + \delta x)] = [r]

, 可能沒有什麼意思。假使設想『超實數系』既有『無窮小』\delta x,那它的『倒數』 reciprocal \frac{1}{\delta x} 就是『無限大』,也可以叫做『巨量』 Huge,一般用 H 表示。如果說『某數』K 是個『巨量』,就是講 \frac{1}{K} 是『無窮小』數 \epsilon,這樣 \frac{\delta x}{\epsilon}、\frac{H}{K}、H \cdot \epsilon、H - K 又是些什麼樣的數呢?它們被稱作『未定式』 Indeterminate form,因為假使不知道它們的『來歷』,我們並不能『確定』最終的『運算結果』 ── 是無窮小、有限量或是無限大 ── 。純就『形式』上講,它們是 \frac{0}{0}、\frac{\infty}{\infty}、\infty \cdot 0、\infty - \infty 的計算,然而這在『代數運算』上是不被允許的啊!但是如果 x > 0,那麼不管說 x 多大多小 0^{x} = 0,因此即使 x 是『正無窮小』數 \delta x,也應該得到 0^{\delta x} = 0 的『極限結果』的吧!同樣的 \frac{\delta x}{{\delta x}^3} = \frac{1}{{\delta x}^2} 是『趨近』於『無限大』的啊!也就是說『無窮小』與『無限大』也是有『等級』 Order 的,如果忽略了『這件事』,隨便混談『至大』和『至小』,大概就是『非量』與『非非量』的『迷惑』之所從來的了!!

─── 摘自《【Sonic π】電路學之補充《四》無窮小算術‧中上

 

縱想事不贅述,理無虛發︰

Mean value theorems for definite integrals

First mean value theorem for definite integrals

Let f : [a, b] → R be a continuous function. Then there exists c in (a, b) such that

  \int _{a}^{b}f(x)\,dx=f(c)(b-a).

Since the mean value of f on [a, b] is defined as

  {\frac {1}{b-a}}\int _{a}^{b}f(x)\,dx,

we can interpret the conclusion as f achieves its mean value at some c in (a, b).[5]

In general, if f : [a, b] → R is continuous and g is an integrable function that does not change sign on [a, b], then there exists c in (a, b) such that

\int _{a}^{b}f(x)g(x)\,dx=f(c)\int _{a}^{b}g(x)\,dx.

Proof of the first mean value theorem for definite integrals

Suppose f : [a, b] → R is continuous and g is a nonnegative integrable function on [a, b]. By the extreme value theorem, there exist m and M such that for each x in [a, b],  {\displaystyle m\leqslant f(x)\leqslant M} and  {\displaystyle f[a,b]=[m,M]}. Since g is nonnegative,

{\displaystyle m\int _{a}^{b}g(x)\,dx\leqslant \int _{a}^{b}f(x)g(x)\,dx\leqslant M\int _{a}^{b}g(x)\,dx.}

Now let

I=\int _{a}^{b}g(x)\,dx.

If  I=0, we’re done since

  {\displaystyle 0\leqslant \int _{a}^{b}f(x)g(x)\,dx\leqslant 0}

means

\int _{a}^{b}f(x)g(x)\,dx=0,

so for any c in (a, b),

{\displaystyle \int _{a}^{b}f(x)g(x)\,dx=f(c)I=0.}

If I ≠ 0, then

{\displaystyle m\leqslant {\frac {1}{I}}\int _{a}^{b}f(x)g(x)\,dx\leqslant M.}

By the intermediate value theorem, f attains every value of the interval [m, M], so for some c in [a, b]

f(c)={\frac {1}{I}}\int _{a}^{b}f(x)g(x)\,dx,

that is,

\int _{a}^{b}f(x)g(x)\,dx=f(c)\int _{a}^{b}g(x)\,dx.

Finally, if g is negative on [a, b], then

{\displaystyle M\int _{a}^{b}g(x)\,dx\leqslant \int _{a}^{b}f(x)g(x)\,dx\leqslant m\int _{a}^{b}g(x)\,dx,}

and we still get the same result as above.

QED
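底下附一段 Python 數值草稿(非原文所有;integrate 為此處自訂之簡易函數,取 f = exp、g = x² 僅作示意),以中點法近似積分並用二分法找出定理中的 c:

import math

def integrate(h, a, b, n=100000):
    # 簡單的中點法數值積分
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

f = math.exp
g = lambda x: x ** 2          # 在 [0, 1] 上非負
a, b = 0.0, 1.0

target = integrate(lambda x: f(x) * g(x), a, b) / integrate(g, a, b)

# f 連續且遞增,故可用二分法找出 f(c) = target 的 c
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < target:
        lo = mid
    else:
        hi = mid
print("c ≈", (lo + hi) / 2, ",  f(c) =", target)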

 

均值定理又現身。

知其議論依何據︰

積分判別法

(圖:積分判別法 Integral Test)

通過將調和級數的和與一個瑕積分作比較可證此級數發散。考慮右圖中長方形的排列。每個長方形寬 1 個單位、高 1/n 個單位(換句話說,每個長方形的面積都是 1/n),所以所有長方形的總面積就是調和級數的和:

矩形面積和 = 1 \,+\, \frac{1}{2} \,+\, \frac{1}{3} \,+\, \frac{1}{4} \,+\, \frac{1}{5} \,+\, \cdots

而曲線 y = 1/x 以下、從 1 到正無窮部分的面積由以下瑕積分給出:

曲線下面積 = \int_1^\infty\frac{1}{x}\,dx \;=\; \infty

由於這一部分面積真包含於(換言之,小於)長方形總面積,長方形的總面積也必定趨於無窮。更準確地說,這證明了:

\sum _{n=1}^{k}\,{\frac {1}{n}}\;>\;\int _{1}^{k+1}{\frac {1}{x}}\,dx\;=\;\ln(k+1).

這個方法的拓展即積分判別法

 

補足推導數理成☆

假設 H_n = \sum \limits_{k=1}^{n} \frac{1}{k} ,依積分第一均值之定理

\frac{1}{k+1} < \int_{k}^{k+1} \frac{1}{x} dx = \ln(k+1) - \ln(k) < \frac{1}{k} ,那麼

\frac{1}{2} < \ln(2) - \ln(1) < 1

\frac{1}{3} < \ln(3) - \ln(2) < \frac{1}{2}

\cdots

\frac{1}{n+1}< \ln(n+1) - \ln(n) < \frac{1}{n}

\therefore H_{n+1} - 1 < \ln(n+1) < H_n
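這組夾擠不等式可用幾行 Python 驗算,同時可見 H_n - \ln(n) 趨向前文的歐拉-馬歇羅尼常數 γ ≈ 0.5772(以下僅為示意草稿,函數名 H 為此處所設):

import math

def H(n):
    # 調和數 H_n
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    assert H(n + 1) - 1 < math.log(n + 1) < H(n)
    print(n, round(H(n) - math.log(n), 6))   # 趨近 γ ≈ 0.577216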

 

 

 

 

 

 

 

 

 

時間序列︰生成函數‧漸近展開︰無限大等級 IV

綻放的量天尺花,攝於夏威夷縣Kona

 

天有造父變星作標尺,地生量天尺花耐乾旱。

人創符號序無窮論次第︰

Family of Bachmann–Landau notations

下表各欄依次為 Notation、Name[12]、Description、Formal Definition、Limit Definition[16][17][18][12][10]:

• f(n)=o(g(n)) (Small O; Small Oh): f is dominated by g asymptotically. Formal: \forall k>0\;\exists n_{0}\;\forall n>n_{0}\;|f(n)|\leq k\cdot |g(n)|. Limit: \lim _{n\to \infty }{\frac {f(n)}{g(n)}}=0

• f(n)=O(g(n)) (Big O; Big Oh; Big Omicron): |f| is bounded above by g (up to constant factor) asymptotically. Formal: \exists k>0\;\exists n_{0}\;\forall n>n_{0}\;|f(n)|\leq k\cdot g(n). Limit: \limsup _{n\to \infty }{\frac {\left|f(n)\right|}{g(n)}}<\infty

• f(n)=\Theta (g(n)) (Big Theta): f is bounded both above and below by g asymptotically. Formal: \exists k_{1}>0\;\exists k_{2}>0\;\exists n_{0}\;\forall n>n_{0}\;k_{1}\cdot g(n)\leq f(n)\leq k_{2}\cdot g(n). Limit: f(n)=O(g(n)) and f(n)=\Omega (g(n)) (Knuth version)

• f(n)\sim g(n) (On the order of): f is equal to g asymptotically. Formal: \forall \varepsilon >0\;\exists n_{0}\;\forall n>n_{0}\;\left|{f(n) \over g(n)}-1\right|<\varepsilon. Limit: \lim _{n\to \infty }{f(n) \over g(n)}=1

• f(n)=\Omega (g(n)) (Big Omega in number theory, Hardy-Littlewood): |f| is not dominated by g asymptotically. Formal: \exists k>0\;\forall n_{0}\;\exists n>n_{0}\;|f(n)|\geq k\cdot g(n). Limit: \limsup _{n\to \infty }\left|{\frac {f(n)}{g(n)}}\right|>0

• f(n)=\Omega (g(n)) (Big Omega in complexity theory, Knuth): f is bounded below by g asymptotically. Formal: \exists k>0\;\exists n_{0}\;\forall n>n_{0}\;f(n)\geq k\cdot g(n). Limit: \liminf _{n\to \infty }{\frac {f(n)}{g(n)}}>0

• f(n)=\omega (g(n)) (Small Omega): f dominates g asymptotically. Formal: \forall k>0\;\exists n_{0}\;\forall n>n_{0}\;|f(n)|\geq k\cdot |g(n)|. Limit: \lim _{n\to \infty }\left|{\frac {f(n)}{g(n)}}\right|=\infty

The limit definitions assume  {\displaystyle g(n)\neq 0} for sufficiently large  n. The table is sorted from smallest to largest, in the sense that o, O, Θ, ∼, both versions of Ω, ω on functions correspond to <, ≤, ≈, =, ≥, > on the real line.[18]:6

Computer science uses the big O, Big Theta Θ, little o, little omega ω and Knuth’s big Omega Ω notations.[19] Analytic number theory often uses the big O, small o, Hardy-Littlewood’s big Omega Ω and  \sim notations.[14] The small omega ω notation is not used as often in analysis.[20]
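各記號皆可由上表的極限定義判讀。以下 Python 小草稿(非原文所有)取幾對函數,觀察比值 f(n)/g(n) 隨 n 增大的走向(趨零、趨常數、趨一、趨無窮),以對照 o、Θ、∼、ω 之意:

import math

pairs = {
    "log n / sqrt(n)    (o)": lambda n: math.log(n) / math.sqrt(n),
    "(2n^2+n) / n^2     (Theta)": lambda n: (2 * n ** 2 + n) / n ** 2,
    "(n+log n) / n      (~)": lambda n: (n + math.log(n)) / n,
    "n^2 / n            (omega)": lambda n: n ** 2 / n,
}
for label, ratio in pairs.items():
    # n 取 10, 100, ..., 10^6,觀察比值的極限行為
    print(label, [round(ratio(10 ** k), 4) for k in range(1, 7)])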

 

歷史點滴品味嚐︰

History (Bachmann–Landau, Hardy, and Vinogradov notations)

The symbol O was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie (“analytic number theory“), the first volume of which (not yet containing big O notation) was published in 1892.[1] The number theorist Edmund Landau adopted it, and was thus inspired to introduce in 1909 the notation o;[2] hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.[24] The big O was popularized in computer science by Donald Knuth, who re-introduced the related Omega and Theta notations.[12] Knuth also noted that the Omega notation had been introduced by Hardy and Littlewood[10] under a different meaning “≠o” (i.e. “is not an o of”), and proposed the above definition. Hardy and Littlewood’s original definition (which was also used in one paper by Landau[13]) is still used in number theory (where Knuth’s definition is never used). In fact, Landau also used in 1924, in the paper just mentioned, the symbols  \Omega _{R} (“right”) and  \Omega _{L} (“left”), which were introduced in 1918 by Hardy and Littlewood,[11] and which were precursors for the modern symbols  \Omega _{+} (“is not smaller than a small o of”) and  \Omega _{-} (“is not larger than a small o of”). Thus the Omega symbols (with their original meanings) are sometimes also referred to as “Landau symbols”.

Also, Landau never used the Big Theta and small omega symbols.

Hardy’s symbols were (in terms of the modern O notation)

{\displaystyle f\preccurlyeq g\iff f\in O(g)}   and    f\prec g\iff f\in o(g);

(Hardy however never defined or used the notation  \prec \!\!\prec , nor  \ll , as it has been sometimes reported). It should also be noted that Hardy introduces the symbols  \preccurlyeq and  \prec (as well as some other symbols) in his 1910 tract “Orders of Infinity”, and makes use of it only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently uses the Landau symbols O and o.

Hardy’s notation is not used anymore. On the other hand, in the 1930s,[25] the Russian number theorist Ivan Matveyevich Vinogradov introduced his notation  \ll , which has been increasingly used in number theory instead of the  O notation. We have

f\ll g\iff f\in O(g),

and frequently both notations are used in the same paper.

The big-O originally stands for “order of” (“Ordnung”, Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever call it “Omicron”. The symbol was much later on (1976) viewed by Knuth as a capital omicron,[12] probably in reference to his definition of the symbol Omega. The digit zero should not be used.

 

基本尺度細思量︰

\cdots \prec \ln(\ln(x)) \prec x^{\frac{1}{n}} \prec x \prec x^n \prec e^x \prec e^{e^x} \prec \cdots

此處 n 是大於一的自然數。

指數對數總其綱☆

倘知階乘座何處??

Rate of growth and approximations for large n

 Plot of the natural logarithm of the factorial

As n grows, the factorial n! increases faster than all polynomials and exponential functions (but slower than double exponential functions) in n.

Most approximations for n! are based on approximating its natural logarithm

{\displaystyle \ln n!=\sum _{x=1}^{n}\ln x.}

The graph of the function f(n) = ln n! is shown in the figure on the right. It looks approximately linear for all reasonable values of n, but this intuition is false. We get one of the simplest approximations for ln n! by bounding the sum with an integral from above and below as follows:

{\displaystyle \int _{1}^{n}\ln x\,dx\leq \sum _{x=1}^{n}\ln x\leq \int _{0}^{n}\ln(x+1)\,dx}

which gives us the estimate

{\displaystyle n\ln \left({\frac {n}{e}}\right)+1\leq \ln n!\leq (n+1)\ln \left({\frac {n+1}{e}}\right)+1.}

Hence  {\displaystyle \ln n!\sim n\ln n} (see Big O notation). This result plays a key role in the analysis of the computational complexity of sorting algorithms (see comparison sort). From the bounds on ln n! deduced above we get that

e\left({\frac {n}{e}}\right)^{n}\leq n!\leq e\left({\frac {n+1}{e}}\right)^{n+1}.

It is sometimes practical to use weaker but simpler estimates. Using the above formula it is easily shown that for all n we have  (n/3)^{n}<n!, and for all n ≥ 6 we have  n!<(n/2)^{n}.

For large n we get a better estimate for the number n! using Stirling’s approximation:

  n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.

This in fact comes from an asymptotic series for the logarithm, and n factorial lies between this and the next approximation:

  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.

Another approximation for ln n! is given by Srinivasa Ramanujan (Ramanujan 1988)

{\displaystyle \ln n!\approx n\ln n-n+{\frac {\ln(n(1+4n(1+2n)))}{6}}+{\frac {\ln(\pi )}{2}}}

or

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}[1+1/(2n)+1/(8n^{2})]^{1/6}.

Both this and  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}} give a relative error on the order of 1/n^3, but Ramanujan’s is about four times more accurate. However, if we use two correction terms (as in Ramanujan’s approximation) the relative error will be of order 1/n^5:

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\exp \left({{\frac {1}{12n}}-{\frac {1}{360n^{3}}}}\right)

【註︰ x^x = e^{x \ln(x)} \prec e^{x^2}】
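上列 Stirling 與 Ramanujan 近似可用 Python 粗略體會其精度(stirling、ramanujan_ln 為示意命名,公式即前引二式,非原文所附程式):

import math

def stirling(n):
    # sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def ramanujan_ln(n):
    # Ramanujan 對 ln n! 的近似
    return (n * math.log(n) - n
            + math.log(n * (1 + 4 * n * (1 + 2 * n))) / 6
            + math.log(math.pi) / 2)

for n in (5, 10, 20, 50):
    exact = math.factorial(n)
    print(n, round(stirling(n) / exact, 6), round(math.exp(ramanujan_ln(n)) / exact, 10))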

 

漸近之理已顯揚!!

一圖

 

一表

Orders of common functions

Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first.

下表各欄依次為 Notation、Name、Example:

• O(1): constant. Determining if a binary number is even or odd; Calculating  (-1)^{n}; Using a constant-size lookup table

• O(\log \log n): double logarithmic. Number of comparisons spent finding an item using interpolation search in a sorted array of uniformly distributed values

• O(\log n): logarithmic. Finding an item in a sorted array with a binary search or a balanced search tree as well as all operations in a Binomial heap

• O((\log n)^{c}), c>1: polylogarithmic. Matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine.

• O(n^{c}), 0<c<1: fractional power. Searching in a kd-tree

• O(n): linear. Finding an item in an unsorted list or in an unsorted array; adding two n-bit integers by ripple carry

• O(n\log ^{*}n): n log-star n. Performing triangulation of a simple polygon using Seidel’s algorithm, or the union–find algorithm. Note that \log ^{*}(n)={\begin{cases}0,&{\text{if }}n\leq 1\\1+\log ^{*}(\log n),&{\text{if }}n>1\end{cases}}

• O(n\log n)=O(\log n!): linearithmic, loglinear, or quasilinear. Performing a fast Fourier transform; Fastest possible comparison sort; heapsort and merge sort

• O(n^{2}): quadratic. Multiplying two n-digit numbers by a simple algorithm; simple sorting algorithms, such as bubble sort, selection sort and insertion sort; bound on some usually faster sorting algorithms such as quicksort, Shellsort, and tree sort

• O(n^{c}): polynomial or algebraic. Tree-adjoining grammar parsing; maximum matching for bipartite graphs; finding the determinant with LU decomposition

• L_{n}[\alpha ,c]=e^{(c+o(1))(\ln n)^{\alpha }(\ln \ln n)^{1-\alpha }}, 0<\alpha <1: L-notation or sub-exponential. Factoring a number using the quadratic sieve or number field sieve

• O(c^{n}), c>1: exponential. Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search

• O(n!): factorial. Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with Laplace expansion; enumerating all partitions of a set

The statement {\displaystyle f(n)=O(n!)} is sometimes weakened to f(n)=O\left(n^{n}\right) to derive simpler formulas for asymptotic complexity. For any  k>0 and  c>0 O(n^{c}(\log n)^{k}) is a subset of  O(n^{c+\varepsilon }) for any  \varepsilon >0, so may be considered as a polynomial with some bigger order.

 

義自彰!!??

 

 

 

 

 

 

 

 

時間序列︰生成函數‧漸近展開︰無限大等級 III

無限大 \infty 是『符號』,它不是一個『數』!因此擴展實數線

Extended real number line

In mathematics, the affinely extended real number system is obtained from the real number system by adding two elements: + ∞ and – ∞ (read as positive infinity and negative infinity respectively). These new elements are not real numbers. It is useful in describing various limiting behaviors in calculus and mathematical analysis, especially in the theory of measure and integration. The affinely extended real number system is denoted  \overline{\mathbb{R}} or [–∞, +∞] or ℝ ∪ {–∞, +∞}. When the meaning is clear from context, the symbol +∞ is often written simply as ∞.

 

才如是描述它。同樣的無窮小 \delta \epsilon 不過是個『極限』接近『零』 0 之『概念』,它也不是一個『數』!!然而『零』是個『數』,一個滿足 0 < | \delta \epsilon | 的『量』,無有疑議焉? \lim \limits_{x \to \infty} 0 \cdot x^n = \ 0 \ ?? \ , n >0 。若是考慮 x^{-m} , \ m >0 ,當 x \to \infty 時,就說它叫 m 冪無窮小吧 ,則有

\lim \limits_{x \to \infty} \frac{x^n}{x^m} = \infty , \ if  \ m < n ,

\lim \limits_{x \to \infty} \frac{x^n}{x^m} = 1 , \ if  \ m = n ,

\lim \limits_{x \to \infty} \frac{x^n}{x^m} = 0 , \ if  \ m > n .

。那麼『零』到底是哪種等級的無窮小呢??!!假使可以構造 \frac{1}{{\infty}^{\infty}} 這種無窮小,比方說 \lim \limits_{x \to \infty} \frac{1}{x^x} ,那麼也能夠產生 { \left( x^x \right) }^{x}  , \ x \to \infty 這樣無限大啊!!??或將能了 0^0 引發的論辯耶︰

Zero to the power of zero

Discrete exponents

There are many widely used formulas having terms involving natural-number exponents that require 0^0 to be evaluated to 1. For example, regarding b^0 as an empty product assigns it the value 1, even when b = 0. Alternatively, the combinatorial interpretation of b^0 is the number of empty tuples of elements from a set with b elements; there is exactly one empty tuple, even if b = 0. Equivalently, the set-theoretic interpretation of 0^0 is the number of functions from the empty set to the empty set; there is exactly one such function, the empty function.[21]

Polynomials and power series

Likewise, when working with polynomials, it is often necessary to assign  0^{0} the value 1. A polynomial is an expression of the form  a_{0}x^{0}+\cdots +a_{n}x^{n} where x is an indeterminate, and the coefficients  a_{n} are real numbers (or, more generally, elements of some ring). The set of all real polynomials in x is denoted by  \mathbb {R} [x]. Polynomials are added termwise, and multiplied by applying the usual rules for exponents in the indeterminate x (see Cauchy product). With these algebraic rules for manipulation, polynomials form a polynomial ring. The polynomial  x^{0} is the identity element of the polynomial ring, meaning that it is the (unique) element such that the product of  x^{0} with any polynomial  p(x) is just  p(x).[22] Polynomials can be evaluated by specializing the indeterminate x to be a real number. More precisely, for any given real number  x_{0} there is a unique unital ring homomorphism  \operatorname {ev} _{x_{0}}:\mathbb {R} [x]\to \mathbb {R} such that  \operatorname {ev} _{x_{0}}(x^{1})=x_{0}.[23] This is called the evaluation homomorphism. Because it is a unital homomorphism, we have \operatorname {ev} _{x_{0}}(x^{0})=1. That is, x^{0}=1 for all specializations of x to a real number (including zero).

This perspective is significant for many polynomial identities appearing in combinatorics. For example, the binomial theorem  (1+x)^{n}=\sum _{k=0}^{n}{\binom {n}{k}}x^{k} is not valid for x = 0 unless 0^0 = 1.[24] Similarly, rings of power series require x^{0}=1 to be true for all specializations of x. Thus identities like {\frac {1}{1-x}}=\sum _{n=0}^{\infty }x^{n} and  e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}} are only true as functional identities (including at x = 0) if 0^0 = 1.

In differential calculus, the power rule {\frac {d}{dx}}x^{n}=nx^{n-1} is not valid for n = 1 at x = 0 unless 0^0 = 1.

Continuous exponents

Plot of z = x^y. The red curves (with z constant) yield different limits as (x, y) approaches (0, 0). The green curves (of finite constant slope, y = ax) all yield a limit of 1.

Limits involving algebraic operations can often be evaluated by replacing subexpressions by their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.[25] In fact, when f(t) and g(t) are real-valued functions both approaching 0 (as t approaches a real number or ±∞), with f(t) > 0, the function f(t)^{g(t)} need not approach 1; depending on f and g, the limit of f(t)^{g(t)} can be any nonnegative real number or +∞, or it can diverge. For example, the functions below are of the form f(t)^{g(t)} with f(t), g(t) → 0 as t → 0+, but the limits are different:

\lim _{t\to 0^{+}}{t}^{t}=1,\quad \lim _{t\to 0^{+}}\left(e^{-{\frac {1}{t^{2}}}}\right)^{t}=0,\quad \lim _{t\to 0^{+}}\left(e^{-{\frac {1}{t^{2}}}}\right)^{-t}=+\infty ,\quad \lim _{t\to 0^{+}}\left(e^{-{\frac {1}{t}}}\right)^{at}=e^{-a}.

Thus, the two-variable function x^y, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on any set containing (0, 0), no matter how one chooses to define 0^0.[26] However, under certain conditions, such as when f and g are both analytic functions and f is positive on the open interval (0, b) for some positive b, the limit approaching from the right is always 1.[27][28][29]
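上列四個極限可用 Python 直接觀察;t 不宜取得太小,以免浮點下溢(最後一式取 a = 2,極限應為 e^{-2} ≈ 0.1353)。以下僅為示意草稿,非原文所附:

import math

for t in (0.5, 0.2, 0.1, 0.05):
    p = t ** t                               # 趨於 1
    q = math.exp(-1 / t ** 2) ** t           # 趨於 0
    r = math.exp(-1 / t ** 2) ** (-t)        # 趨於 +∞
    s = math.exp(-1 / t) ** (2 * t)          # 取 a = 2,趨於 e^{-2}
    print(f"t={t}:  {p:.4f}  {q:.3e}  {r:.3e}  {s:.4f}")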

Complex exponents

In the complex domain, the function z^w may be defined for nonzero z by choosing a branch of log z and defining z^w as e^{w log z}. This does not define 0^w since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[30][31][32]

History of differing points of view

The debate over the definition of  0^{0} has been going on at least since the early 19th century. At that time, most mathematicians agreed that  0^{0}=1, until in 1821 Cauchy[33] listed  0^{0} along with expressions like  {\frac {0}{0}} in a table of indeterminate forms. In the 1830s Libri[34][35] published an unconvincing argument for  0^{0}=1, and Möbius[36] sided with him, erroneously claiming that  \scriptstyle \lim _{t\to 0^{+}}f(t)^{g(t)}\;=\;1 whenever  \scriptstyle \lim _{t\to 0^{+}}f(t)\;=\;\lim _{t\to 0^{+}}g(t)\;=\;0. A commentator who signed his name simply as “S” provided the counterexample of  \scriptstyle (e^{-1/t})^{t}, and this quieted the debate for some time. More historical details can be found in Knuth (1992).[37]

More recent authors interpret the situation above in different ways:

  • Some argue that the best value for  0^{0} depends on context, and hence that defining it once and for all is problematic.[38] According to Benson (1999), “[t]he choice whether to define  0^{0} is based on convenience, not on correctness. If we refrain from defining  0^{0}, then certain assertions become unnecessarily awkward. […] The consensus is to use the definition 0^{0}=1, although there are textbooks that refrain from defining  0^{0}.”[39]
  • Others argue that  0^{0} should be defined as 1. Knuth (1992) contends strongly that  0^{0}has to be 1″, drawing a distinction between the value  0^{0}, which should equal 1 as advocated by Libri, and the limiting form  0^{0} (an abbreviation for a limit of  \scriptstyle f(x)^{g(x)} where  \scriptstyle f(x),g(x)\to 0), which is necessarily an indeterminate form as listed by Cauchy: “Both Cauchy and Libri were right, but Libri and his defenders did not understand why truth was on their side.”[37]

 

或將能曉已定或未定形式答覆乎︰

Expressions that are not indeterminate forms

The expression 1/0 is not commonly regarded as an indeterminate form because there is not an infinite range of values that f/g could approach. Specifically, if f approaches 1 and g approaches 0, then f and g may be chosen so that (1) f/g approaches +∞, (2) f/g approaches −∞, or (3) the limit fails to exist. In each case the absolute value |f/g| approaches +∞, and so the quotient f/g must diverge, in the sense of the extended real numbers. (In the framework of the projectively extended real line, the limit is the unsigned infinity ∞ in all three cases.) Similarly, any expression of the form a/0, with a ≠ 0 (including a = +∞ and a = −∞), is not an indeterminate form since a quotient giving rise to such an expression will always diverge.

The expression 0^∞ is not an indeterminate form. The expression 0^{+∞} has the limiting value 0 for the given individual limits, and the expression 0^{−∞} is equivalent to 1/0.

 

涉及無限、無窮,總是費思量矣☆★

 

 

 

 

 

 

 

 

時間序列︰生成函數‧漸近展開︰無限大等級 II

無限大 \infty 巨量也,比任何給定的正量都大。它的倒數 \frac{1}{\infty} 是無窮小也,故比任何給定的正量都小。如果無限大有等級,當然無窮小也有等級的了。過去作者曾經寫過一系列文本《【Sonic π】電路學之補充《四》無窮小算術‧上》介紹  Robinson 先生之直觀微積分︰

一九六零年,德國數學家『亞伯拉罕‧魯濱遜』 Abraham Robinson 將『萊布尼茲』的微分直觀落實。 用嚴謹的方法來定義和運算實數的『無窮小』與『無限大』,這就是數學史上著名的『非標準微積分』Non-standard calculus ,可說是『非標準分析』non-standard analysis 之父。

就像『複數C 是『實數系R 的『擴張』一樣,他將『實數系』增入了『無窮小』 infinitesimals 元素 \delta x ,魯濱遜創造出『超實數』 hyperreals r^{*} = r + \delta x,形成了『超實數系R^{*}。那這個『無窮小』是什麼樣的『』呢?對於『正無窮小』來說,任何給定的『正數』都比要它大,就『負無窮小』來講,它大於任何給定的『負數』。 『』也就自然的被看成『實數系』裡的『無窮小』的了。假使我們說兩個超實數 a, b, \ a \neq b 是『無限的鄰近』 indefinitly close,記作 a \approx b 是指 b -a \approx 0 是個『無窮小』量。在這個觀點下,『無窮小』量不滿足『實數』的『阿基米德性質』。也就是說,對於任意給定的 m 來講, m \cdot \delta x 為『無窮小』量;而 \frac{1}{\delta x} 是『無限大』量。然而在『系統』與『自然』的『擴張』下,『超實數』的『算術』符合所有一般『代數法則』。

(圖:hyperreals、標準部份函數 standard part function)

(圖:函數微分之幾何意義)

(圖:速度里程表)

有人把『超實數』想像成『數原子』,一個環繞著『無窮小』數的『實數』。就像『複數』有『實部R_e 與『虛部I_m 取值『運算』一樣,『超實數』也有一個取值『運算』叫做『標準部份函數』Standard part function

st(r^{*}) = st(r + \delta x)
= st(r) + st(\delta x) = r + 0 = r

。 如此一個『函數f(x)x_0 是『連續的』就可以表示成『如果 x \approx x_0, \ x \neq x_0,可以得到 f(x) \approx f(x_0)』。

假使 y = x^2,那麼 y 的『斜率』就可以這麼計算

\frac{dy}{dx} = st \left[ \frac{\Delta y}{\Delta x} \right] = st \left[ \frac{(x + \Delta x)^2 - x^2}{\Delta x} \right]
= st \left[2 x + \Delta x \right] = 2 x

。 彷彿在用著可以調整『放大倍率』的『顯微鏡』逐步『觀入』 zoom in 一個『函數』,隨著『解析度』的提高,函數之『曲率』逐漸減小,越來越『逼近』一條『直線』── 某點的切線 ── 的啊!!

同樣的『積分』就像是『里程表』的『累計』一樣,可以用

\forall 0 < \delta x \approx 0, \ \int_{a}^{b} dx \approx f(a)\delta x + f(a + \delta x)\delta x + \cdots + f(b - \delta x)\delta x

來表示的呀!!

─── 摘自《【Sonic π】電路學之補充《四》無窮小算術‧中

 

希望讀者能夠深入理解實數的基本性質,以及由之而來的各種抽象論證︰

(圖:最小上界性質)

Let S be a non-empty set of real numbers:
A real number x is called an upper bound for S if x ≥ s for all s ∈ S. A real number x is the least upper bound (or supremum) for S if x is an upper bound for S and x ≤ y for every upper bound y of S.

從『疊套區間』的觀點來看,一個『超實數』 r^{*} = r + \delta x 就可以表達成 [r - \delta \epsilon, r + \delta \epsilon],而且說 st \left( [r - \delta \epsilon, r + \delta \epsilon] \right) = \{r\},由於它只有『唯一的』一個元素,所以被稱作『單子集合』 singleton set。假使我們思考一個『單調上升有上界』 a_n, a_{n+1} > a_n, a_n < U, n=1 \cdots n 的『序列』,會發現它一定有『最小上界』。假設 H 是一個『巨量』,那麼 st \left( \bigcap \limits_{1}^{\infty} [a_n, 2 a_H -a_n] \right) = a_{\infty}

。這就是『實數』的『基本性質』,任何一有極限的『序列』收斂於一個『唯一』的『實數』,一般稱之為『實數』的『完備性』 completeness,由於我們是站在『超實數』的立場,選擇了『疊套區間』的觀點,加之以『無窮小』量不滿足『實數』的『阿基米德性質』,所以這個『實數』的『完備性』只是從『疊套區間』確定了一個『單子集合』 推導的結論。對比著來看,這一個『有理數』序列 x_0 = 1, x_{n+1} = \frac{x_n + \frac{2}{x_n}}{2} 的『極限x_{\infty} = \sqrt{2} ,它可從求解 x_H = \frac{x_H + \frac{2}{x_H}}{2} 得到,然而它並不是『有理數』,所以說『有理數』不具有『完備性』 。那麼對一個『非空有上界』的『集合S,也可以用『二分逼近法』論證如下︰

由 於 S 有上界,就說是 b_1 吧,因為 S 不是空集合,一定有一個元素 a_1 不是它的上界。這兩個序列可以遞迴的如此定義,計算 \frac{a_n + b_n}{2},如果它是 S 的上界,那麼 a_{n+1} = a_n, \ b_{n+1} = \frac{a_n + b_n}{2},否則 S 中必有一個元素 s,而且 s > \frac{a_n + b_n}{2},此時選擇 a_{n+1} = s, \ b_{n+1} = b_n,如此 a_1 \leq a_2 \leq a_3 \leq \cdots \leq b_3 \leq b_2 \leq b_1 而且 a_H - b_H \approx 0,所以一定存在一個 L = a_{\infty} = b_{\infty},此為 S 之最小上界。

同理 S 的補集 R - S 就會有『下界』,而且會有『最大下界』。因此我們將一個有『上界』與『下界』的集合,簡稱之為『有界集合』。

實數集合』的『最小上界』性質,可以用來證明『實數分析』上的多條定理,在此僅列舉幾條『常用的』︰

【波爾查諾‧魏爾斯特拉斯定理】 任一實數 R 中的有界序列 (x_n)_{n\in\mathbb{N}} 至少包含一個收斂的子序列。

讓我們從 x_n 中選擇元素,建構一個『峰值集合S = \{ a_k :  \forall n \geq k, \ a_k \geq x_n \},假使 S 的元素是『有限的』,就可將之大小『排序』,建立序列 (s_k: s_k \leq s_{k+1})。如果 S 的元素是『無限的』,我們依然可以用『下標p 遞增的方式從 S 中選擇建立『序列(a_p: a_p \leq a_{p+1}),這兩者都是『單調上升有界』的序列,所以必然會有『最小上界』。

極值定理如果實數函數 f 是閉區間 [a, b]上的『連續函數』,那麼它在其間一定會有『最大值』和『最小值』。也就是說,存在 c, d \in [a,b] 兩個『極值』使得 \forall x \in [a, b], \ f(c) \geq f(x) \geq f(d)

假設函數 f 沒有上界。那麼,根據實數的『阿基米德性質』,對於每一個自然數 n,都可以有一個 x_n \in [a, b],使得 f(x_n) > n,這就構成了一個『有界的序列x_n,然而依據『波爾查諾‧魏爾斯特拉斯定理』,這個 x_n 序列至少會有一個收斂的『子序列x_{n_k},就稱它的極限值是 x_H,此處 H 是『巨量』。因為 f 在閉區間 [a, b] 中『連續』,於是 f(x_H) 也是『有限量』,然而依據『假設f(x_H) > H \approx \infty,故而矛盾,所以實數函數 f 是有『上界的』。只需考慮 -f,從它有『上界』,就可以得到 f 一定有『下界』的吧!也就是說一個實數的『連續』函數,因其『連續性』將一個『定義域』的『閉區間』映射到『對應域』的『閉區間』,所以也必將『無窮小』閉區間 [x - \delta x, x + \delta x] 映射到『無窮小』閉區間 [f(x) -  \epsilon, f(x) + \epsilon] 的啊!!事實上,『無窮小』閉區間 [x - \delta x, x + \delta x] 可以看成 x 點的『鄰域』,難道說所謂的函數 fx 點『連續』,f 可能不在這個『無窮小鄰域』裡無限的『逼近f(x) 的嗎??

羅爾定理如果一個實數函數 f(x) 滿足

在閉區間 [a, b] 上『連續』;
在開區間 (a,b) 內『可微分』;
在區間端點處的函數值相等,即 f(a) = f(b)

那麼在開區間 (a,b) 之內至少有一點 x_s, \ a < x_s < b,使得 \frac{df(x)}{dx} (x_s) = f^\prime(x_s) = 0

根據『極值定理』 ,實數函數 f 在閉區間 [a, b] 裡有『極大值M 和『極小值m,如果它們都同時發生在『端點ab 處,由於 f(a) = f(b) 而且 m \leq f(x) \leq M, x \in (a, b),因此 f(x) = m = M 是一個『常數函數』,所以 \frac{df}{dx} = \frac{f(x + \delta x) - f(x)} {\delta x} = f^\prime(x) = 0。除此之外『極大值M 或『極小值m 之一只能發生在開區間 (a, b) 之內,假設於 \xi 處取得了『極大值M = f(\xi),因此 f(\xi - \delta x) \leq M,而且 f(\xi + \delta x) \leq M,由於 \delta x > 0\frac{df}{dx}(\xi) = \frac{f(\xi + \delta x) - f(\xi)} {\delta x} \leq 0,同時 -\delta x < 0\frac{df}{dx}(\xi) = \frac{f(\xi - \delta x) - f(\xi)} {-\delta x} \geq 0,再由於函數 f\xi 處『可微分』,所以 \frac{df}{dx}(\xi) = f^\prime(\xi) = 0。同理也可以證明 f 有『極小值m = f(\eta) 時,\frac{df}{dx}(\eta) = f^\prime(\eta) = 0。也可以講『羅爾定理』將『連續性』、『可微分性』與『極值』聯繫了起來,在此強調那個『可微分』的條件是『必要的』。舉個例子說,函數 y = | x |x = 0 處有『極小值』,考慮它的『無窮小鄰域[- \delta x, \delta x],右方逼近的『導數』是 \frac{| \delta x | - | 0 |}{\delta x} = 1,然而左方逼近的『導數』是 \frac{|- \delta x | - | 0 |}{- \delta x} = -1,因此這個函數於『此點』不可微分,此時當然『羅爾定理』也就不適用的了!!

均值定理一個實數函數 f 在閉區間 [a, b] 裡『連續』且於開區間 [a, b] 中『可微分』,那麼一定存在一點 c, \ a < c < b 使得此點的『切線斜率』等於兩端點間的『割線斜率』,即 f^{\prime}(c) = \frac{f(b) - f(a)}{b - a}

假使藉著 f(x) 定義一個函數 g(x) = f(x) - \frac{f(b) - f(a)}{b - a} x,這個 g(x) 函數在閉區間 [a, b] 裡『連續』且於開區間 (a, b) 中『可微分』,同時 g(a) = g(b) = \frac{b f(a) - a f(b)}{b - a},於是依據『羅爾定理』一定有一點 c 使得 g^{\prime}(c) = 0 = f^{\prime}(c) - \frac{f(b) - f(a)}{b - a},所以 f^{\prime}(c) = \frac{f(b) - f(a)}{b - a}。

(圖:Bolzano–Weierstrass theorem)

(圖:極值定理)

(圖:阿基米德性質 Archimedean property)

(圖:hyperreal 數系與 hyperinteger)

(圖:Rolle’s theorem)

(圖:割線、切線與函數微分之幾何意義)

(圖:均值定理、拉格朗日中值定理)

g(x) = f(x) - rx, \ r = \frac{f(b) - f(a)}{b - a}

─── 摘自《【Sonic π】電路學之補充《四》無窮小算術‧下

 

通熟者或只需告之定義 h(x) = f(x) - \frac{f(b) - f(a)}{g(b) - g(a)} g(x) ,知道 h(a) = h(b) = \frac{f(a) g(b) - f(b) g(a)}{g(b) - g(a)} ,就可以從『羅爾定理』證明

柯西均值定理

柯西均值定理,也叫拓展均值定理,是均值定理的一般形式。它敘述為:如果函數fg都在閉區間[a,b]上連續,且在開區間(a, b)上可導,那麼存在某個c ∈ (a,b),使得

柯西定理的幾何意義

(f(b)-f(a))g\,'(c)=(g(b)-g(a))f\,'(c).\,

當然,如果g(a) ≠ g(b)並且g′(c) ≠ 0,這等價於:

\frac{f'(c)}{g'(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}\cdot

在幾何上,這表示曲線

\begin{array}{ccc}[a,b]&\longrightarrow&\mathbb{R}^2\\t&\mapsto&\bigl(f(t),g(t)\bigr),\end{array}

的圖像存在平行於由(f(a),g(a))和(f(b),g(b))確定的直線的切線。但柯西定理不能表明在任何情況下不同的兩點(f(a),g(a))和(f(b),g(b))都存在切線,因為可能存在一些c值使f′(c) = g′(c) = 0,換句話說取某個值時位於曲線的駐點;在這些點似乎曲線根本沒有切線。下面是這種情形的一個例子

t\mapsto(t^3,1-t^2),

在區間[−1,1]上,曲線由(−1,0)到(1,0),卻並無一個水平切線;然而它有一個駐點(實際上是一個尖點)在t = 0時。

柯西均值定理可以用來證明羅必達法則. (拉格朗日)均值定理是柯西均值定理當g(t) = t時的特殊情況。

 

的吧。如是者可否藉著『柯西均值定理』證明

羅必達法則

羅必達法則(l’Hôpital’s rule)是利用導數來計算具有不定型極限的方法。這法則是由瑞士數學家約翰·伯努利(Johann Bernoulli)所發現的,因此也被叫作伯努利法則(Bernoulli’s rule)。

敘述

羅畢達法則可以求出特定函數趨近於某數的極限值。令  {\displaystyle c\in {\bar {\mathbb {R} }}}擴展實數),兩函數  {\displaystyle f(x),g(x)}在以  x=c為端點的開區間可微, {\displaystyle \lim _{x\to c}{\frac {f'(x)}{g'(x)}}\in {\bar {\mathbb {R} }}},並且  {\displaystyle g'(x)\neq 0}

如果 {\displaystyle \lim _{x\to c}{f(x)}=\lim _{x\to c}{g(x)}=0} {\displaystyle \lim _{x\to c}{|f(x)|}=\lim _{x\to c}{|g(x)|}=\infty } 其中一者成立,則稱欲求的極限 {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}}未定式

此時羅必達法則表明:

  {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}}}

對於不符合上述分數形式的未定式,可以透過運算轉為分數形式,再以本法則求其值。以下列出數例:

下表各欄依次為:欲求的極限、條件、轉換為分數形式的方法。

(1) {\displaystyle \lim _{x\to c}f(x)g(x)};條件:{\displaystyle \lim _{x\to c}f(x)=0,\ \lim _{x\to c}g(x)=\infty };轉換:{\displaystyle \lim _{x\to c}f(x)g(x)=\lim _{x\to c}{\frac {f(x)}{1/g(x)}}}

(2) {\displaystyle \lim _{x\to c}(f(x)-g(x))};條件:{\displaystyle \lim _{x\to c}f(x)=\infty ,\ \lim _{x\to c}g(x)=\infty };轉換:{\displaystyle \lim _{x\to c}(f(x)-g(x))=\lim _{x\to c}{\frac {1/g(x)-1/f(x)}{1/(f(x)g(x))}}}

(3) {\displaystyle \lim _{x\to c}{f(x)}^{g(x)}};條件:{\displaystyle \lim _{x\to c}f(x)=0^{+},\lim _{x\to c}g(x)=0} 或 {\displaystyle \lim _{x\to c}f(x)=\infty ,\ \lim _{x\to c}g(x)=0};轉換:{\displaystyle \lim _{x\to c}f(x)^{g(x)}=\exp \lim _{x\to c}{\frac {g(x)}{1/\ln f(x)}}}

(4) {\displaystyle \lim _{x\to c}{f(x)}^{g(x)}};條件:{\displaystyle \lim _{x\to c}f(x)=1,\ \lim _{x\to c}g(x)=\infty };轉換:{\displaystyle \lim _{x\to c}f(x)^{g(x)}=\exp \lim _{x\to c}{\frac {\ln f(x)}{1/g(x)}}}

注意

不能在數列形式下直接用羅必達法則,因為對於離散變量是無法求導數的。但此時有形式類近的斯托爾茲-切薩羅定理 (Stolz-Cesàro theorem)作為替代。

 

乎??進而論證

斯托爾茲-切薩羅定理

斯托爾茲-切薩羅定理英語:Stolz–Cesàro theorem)是數學分析學中的一個用於證明數列收歛的定理。該定理以奧地利人奧托·施托爾茨義大利人恩納斯托·切薩羅命名。

內容

  (a_{n})_{{n\geq 1}} (b_{n})_{{n\geq 1}}為兩個實數數列。若  b_{n}嚴格單調無界正數數列,且有窮極限

  {\displaystyle \lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=\ell }

存在,則

  \lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}

也存在且等於 \ell 。

用法說明

該定理雖然主要被用來處理數列不定型極限[1][2],但該定理在沒有  {\displaystyle \lim _{n\to \infty }a_{n}=\infty }這一限制條件時也是成立的[2]。雖然該定理通常是以分母  b_{n}為正數數列的情形加以敘述的,但注意到該定理對分子  a_{n}的正負沒有限制,所以原則上把對數列  b_{n}的限制條件替換為「嚴格單調遞減且趨於負無窮大」也是沒有問題的。

洛必達法則的疊代用法類似,在嘗試應用斯托爾茲-切薩羅定理考察數列的極限時,如果發現兩個數列差分的商仍然是不定型,可以嘗試再使用1次該定理,考察其2階差分之商的極限。[2]

應當注意,當 {\displaystyle \lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}}不存在時,不能認定 {\displaystyle \lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}}必定也不存在。換句話說,確實有「有窮極限  {\displaystyle \lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}}存在,但有窮極限 {\displaystyle \lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}}不存在」的情況(詳見下文針對此逆命題所舉的反例)。

直觀解釋

利用與折線斜率的類比,該定理具有直觀的幾何意義。[2]

 

耶!!
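文末附一段 Python 數值草稿(非原文所有):取 a_n = \ln(n!)、b_n = n\ln(n),可見差分商與 a_n/b_n 同趨於 1,呼應前文 \ln n! \sim n\ln n 之說,僅作斯托爾茲-切薩羅定理之直觀示意:

import math

def a(n):
    return math.lgamma(n + 1)        # ln(n!)

def b(n):
    return n * math.log(n)

for n in (10, 100, 1000, 10 ** 5):
    diff_quot = (a(n + 1) - a(n)) / (b(n + 1) - b(n))
    print(n, round(a(n) / b(n), 5), round(diff_quot, 5))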