Time Series: Generating Functions and Asymptotic Expansions: Bernoulli □○ (IX, Part 3)

Methods can be handed down, but spirit is hard to transmit; using a method well is no simple matter.

The spirit you are born with is your own: learn broadly, think widely, and train diligently.

 

Professor Kline says that

s_{2n} = \sum \limits_{\nu=1}^{\infty} \frac{1}{{\nu}^{2 n}} = {(-1)}^{n-1} \frac{{(2 \pi)}^{2 n}}{2 (2n)!} B_{2 n}

where the B_{2n} are the Bernoulli numbers,

was Euler's finest triumph.
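
As a purely numerical aside (my own Python sketch, not part of Kline's account): one can generate the Bernoulli numbers from the standard recurrence and compare Euler's closed form with a partial sum of the series. The helper name and the range of n are illustrative choices.

```python
# Sketch: check s_{2n} = (-1)^(n-1) (2*pi)^(2n) / (2 (2n)!) * B_{2n} numerically,
# using the generating-function convention B_1 = -1/2.
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(m):
    """Exact B_0..B_m from the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n))
    return B

B = bernoulli_numbers(8)
for n in (1, 2, 3, 4):
    closed = (-1) ** (n - 1) * (2 * pi) ** (2 * n) / (2 * factorial(2 * n)) * float(B[2 * n])
    partial = sum(1 / nu ** (2 * n) for nu in range(1, 200001))
    print(n, closed, partial)   # n = 1 gives pi^2/6 = 1.6449... on both sides
```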

The author does not know how Euler himself arrived at the proof and will not guess. A reader who knows the residue theorem, however,

Residue theorem

In complex analysis, the residue theorem, sometimes called Cauchy’s residue theorem, is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals as well. It generalizes the Cauchy integral theorem and Cauchy’s integral formula. From a geometrical perspective, it is a special case of the generalized Stokes’ theorem.

 

[Figure: illustration of the setting.]

The statement is as follows:

Let U be a simply connected open subset of the complex plane containing a finite list of points a1, …, an, and f a function defined and holomorphic on U \ {a1,…,an}. Let γ be a closed rectifiable curve in U which does not meet any of the ak, and denote the winding number of γ around ak by I(γ, ak). The line integral of f around γ is equal to 2πi times the sum of residues of f at the points, each counted as many times as γ winds around the point:

{\displaystyle \oint _{\gamma }f(z)\,dz=2\pi i\sum _{k=1}^{n}\operatorname {I} (\gamma ,a_{k})\operatorname {Res} (f,a_{k}).}

If γ is a positively oriented simple closed curve, I(γ, ak) = 1 if ak is in the interior of γ, and 0 if not, so

{\displaystyle \oint _{\gamma }f(z)\,dz=2\pi i\sum \operatorname {Res} (f,a_{k})}

with the sum over those ak inside γ.

The relationship of the residue theorem to Stokes’ theorem is given by the Jordan curve theorem. The general plane curve γ must first be reduced to a set of simple closed curves {γi} whose total is equivalent to γ for integration purposes; this reduces the problem to finding the integral of f dz along a Jordan curve γi with interior V. The requirement that f be holomorphic on U0 = U \ {ak} is equivalent to the statement that the exterior derivative d(f dz) = 0 on U0. Thus if two planar regions V and W of U enclose the same subset {aj} of {ak}, the regions V \ W and W \ V lie entirely in U0, and hence

{\displaystyle \int _{V\backslash W}d(f\,dz)-\int _{W\backslash V}d(f\,dz)}

is well-defined and equal to zero. Consequently, the contour integral of f dz along γj = ∂V is equal to the sum of a set of integrals along paths λj, each enclosing an arbitrarily small region around a single aj, giving the residues of f (up to the conventional factor 2πi) at {aj}. Summing over {γj}, we recover the final expression of the contour integral in terms of the winding numbers {I(γ, ak)}.

In order to evaluate real integrals, the residue theorem is used in the following manner: the integrand is extended to the complex plane and its residues are computed (which is usually easy), and a part of the real axis is extended to a closed curve by attaching a half-circle in the upper or lower half-plane, forming a semicircle. The integral over this curve can then be computed using the residue theorem. Often, the half-circle part of the integral will tend towards zero as the radius of the half-circle grows, leaving only the real-axis part of the integral, the one we were originally interested in.

 

can instead proceed by way of the expansion

\pi \cot(\pi z) = \lim \limits_{N \to \infty} \sum \limits_{n = -N}^{N} {(z-n)}^{-1}

= \frac{1}{z} + \sum \limits_{n = 1}^{\infty} \frac{2 z}{z^2 - n^2}

Example 2

The fact that π cot(πz) has simple poles with residue one at each integer can be used to compute the sum

{\displaystyle \displaystyle \sum _{n=-\infty }^{\infty }f(n).}

Consider, for example, f(z) = z^{−2}. Let Γ_N be the rectangle that is the boundary of [−N − 1/2, N + 1/2]^2 with positive orientation, with N an integer. By the residue formula,

{\displaystyle {\frac {1}{2\pi i}}\int _{\Gamma _{N}}f(z)\pi \cot(\pi z)\,dz=\operatorname {Res} \limits _{z=0}{\bigl (}f(z)\pi \cot(\pi z){\bigr )}+\sum _{n=-N \atop n\neq 0}^{N}n^{-2}.}

The left-hand side goes to zero as N → ∞ since the integrand has order O(N^{−2}). On the other hand,[1]

{\displaystyle {\frac {z}{2}}\cot \left({\frac {z}{2}}\right)=1-B_{2}{\frac {z^{2}}{2!}}+\cdots ,\,B_{2}={\tfrac {1}{6}}.}

(In fact, \frac{z}{2}\cot\left(\frac{z}{2}\right) = \frac{iz}{e^{iz}-1} + \frac{iz}{2}.) Thus, the residue \operatorname{Res}_{z=0} is -\frac{\pi^2}{3}, and letting N → ∞ leaves 0 = -\frac{\pi^2}{3} + 2\sum_{n=1}^{\infty} n^{-2}. We conclude:

  {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}}

which is a proof of the Basel problem.
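
For readers who want to see the residue concretely, here is a small numeric sketch (mine, not part of the quoted proof): integrate π cot(πz)/z² around a small circle about the origin and compare with −π²/3. The radius 0.5 and the number of sample points are arbitrary illustrative choices.

```python
# Sketch: estimate Res_{z=0} of pi*cot(pi*z)/z^2 by a discretized contour
# integral over |z| = 0.5 (which encloses no other pole), and compare -pi^2/3.
import cmath, math

def integrand(z):
    return math.pi * cmath.cos(math.pi * z) / cmath.sin(math.pi * z) / z ** 2

N, r = 4000, 0.5
total = 0j
for k in range(N):
    z = r * cmath.exp(2j * math.pi * k / N)
    dz = 1j * z * (2 * math.pi / N)          # dz along the parametrized circle
    total += integrand(z) * dz

print((total / (2j * math.pi)).real, -math.pi ** 2 / 3)   # both about -3.2899
```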

The same trick can be used to establish

  {\displaystyle \pi \cot(\pi z)=\lim _{N\to \infty }\sum _{n=-N}^{N}(z-n)^{-1}}

that is, the Eisenstein series.

We take f(z) = (w − z)^{−1} with w a non-integer and we shall show the above for w. The difficulty in this case is to show the vanishing of the contour integral at infinity. We have:

  {\displaystyle \int _{\Gamma _{N}}{\frac {\pi \cot(\pi z)}{z}}\,dz=0}

since the integrand is an even function and so the contributions from the contour in the left-half plane and the contour in the right cancel each other out. Thus,

{\displaystyle \int _{\Gamma _{N}}f(z)\pi \cot(\pi z)\,dz=\int _{\Gamma _{N}}\left({\frac {1}{w-z}}+{\frac {1}{z}}\right)\pi \cot(\pi z)\,dz}

goes to zero as N → ∞.

See the corresponding article in French Wikipedia for further examples.

 

together with the even part of the Bernoulli-number generating function G(t) = \frac{t}{e^t -1} = \sum \limits_{n=0}^{\infty} B_n \frac{t^n}{n!} , namely

\frac{1}{2} ( G(t) + G(-t))

= \frac{t}{2} \cdot \frac{e^{t/2} + e^{- t/2}}{e^{t/2} - e^{- t/2}}

= \frac{1}{2}  t \coth(\frac{1}{2} t)

as well as

\cot(z) = i \coth(iz)

to obtain two expressions for \cot(z) ,

and in that way recover Euler's result ☆
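
A quick numeric check of the two identities used here (a sketch of my own; the sample values of t and z are arbitrary): the even part of G(t) really is (t/2) coth(t/2), and cot(z) = i coth(iz).

```python
# Sketch: verify (G(t) + G(-t))/2 = (t/2) coth(t/2) and cot(z) = i coth(iz).
import cmath, math

def G(t):
    return t / (math.exp(t) - 1)

t = 0.8
print((G(t) + G(-t)) / 2, (t / 2) / math.tanh(t / 2))     # same value twice

z = 0.6 + 0.2j
print(1 / cmath.tan(z), 1j / cmath.tanh(1j * z))          # cot(z) vs i*coth(iz)
```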

 

 

 

 

 

 

 

 

 

Time Series: Generating Functions and Asymptotic Expansions: Bernoulli □○ (IX, Part 2)

Liang Qichao's handwritten copy of Li Shangyin's untitled poems

Tang dynasty · Li Shangyin · the first of two untitled poems

Last night's stars, last night's wind; west of the painted tower, east of Cassia Hall.
Our bodies lack the bright phoenix's paired wings to fly side by side, yet our hearts are joined, like the magic rhinoceros horn, by a single thread.
Across the seats we passed the hook, the spring wine warm; split into teams we guessed what the cover hid, the wax candles red.
Alas, I hear the drum and must answer the call of office, riding to the Orchid Terrace like wind-blown tumbleweed.

The second:

I hear that at the Chang Gate dwells E Lühua; in years past we gazed at each other as if across the ends of the earth.
Who would have thought that in a single night this guest of the Qin tower would steal a look at the flower in the King of Wu's garden.

Why bother watching wind and stars last night? Who is in the painted tower and the cassia hall tonight? What hidden feeling lies in "the heart has, the body has not"? Can one have both the linked rhinoceros horn and the phoenix's wings? Passing the hook, guessing the covered object, games of a spring night? Seated apart, in separate teams, red candles burning side by side? Is it only today that one hears the drum and rides to the Orchid Terrace? How not to sigh that answering the call of office is to drift like tumbleweed!

 

The great historian Chen Yinke said that using poetry and prose as evidence for history is hard indeed! If one knows little of the characters, words, sounds, and rhymes, even appreciating the poems is difficult; how could one then do philology and prove history from them? Still, good poems chime pleasantly as one reads, their beauty seeming to point at something; and once in a while, waking from a dream to the sound of the drum, in a moment of reverie one suddenly sees it. Who is to say the hearts will not be joined by a single thread?

If so, how does one recover, from formulas, theorems, and propositions already stated, the ideas and lines of thought from which they historically arose? Hence the importance of making connections, and the rarity of genuinely thinking along with past authors. Even without knowing the order of events in the story of Leibniz and π, one can simply set the related entries side by side:

 Leibniz formula for π

In mathematics, the Leibniz formula for π, named after Gottfried Leibniz, states that

{\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \,=\,{\frac {\pi }{4}}.}

It is also called Madhava–Leibniz series as it is a special case of a more general series expansion for the inverse tangent function, first discovered by the Indian mathematician Madhava of Sangamagrama in the 14th century. The series for the inverse tangent function, which is also known as Gregory’s series, can be given by:

  \arctan x=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-{\frac {x^{7}}{7}}+\cdots

The Leibniz formula for π/4 can be obtained by plugging x = 1 into the above inverse-tangent series.[1]

It also is the Dirichlet L-series of the non-principal Dirichlet character of modulus 4 evaluated at s = 1, and therefore the value β(1) of the Dirichlet beta function.

Proof

{\displaystyle {\begin{aligned}{\frac {\pi }{4}}&=\arctan(1)\\&=\int _{0}^{1}{\frac {1}{1+x^{2}}}\,dx\\[8pt]&=\int _{0}^{1}\left(\sum _{k=0}^{n}(-1)^{k}x^{2k}+{\frac {(-1)^{n+1}\,x^{2n+2}}{1+x^{2}}}\right)\,dx\\[8pt]&=\left(\sum _{k=0}^{n}{\frac {(-1)^{k}}{2k+1}}\right)+(-1)^{n+1}\left(\int _{0}^{1}{\frac {x^{2n+2}}{1+x^{2}}}\,dx\right).\end{aligned}}}

Considering only the integral in the last line, we have:

{\displaystyle 0<\int _{0}^{1}{\frac {x^{2n+2}}{1+x^{2}}}\,dx<\int _{0}^{1}x^{2n+2}\,dx={\frac {1}{2n+3}}\;\rightarrow 0{\text{ as }}n\rightarrow \infty .}

Therefore, by the squeeze theorem, as n → ∞ we are left with the Leibniz series:

{\displaystyle {\frac {\pi }{4}}=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{2k+1}}}

For a more detailed proof, together with the original geometric proof by Leibniz himself, see Leibniz’s Formula for Pi.[2]
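
As a small illustration of the squeeze-theorem bound above (my own sketch; the choices of n are arbitrary), the error of the partial Leibniz sums indeed stays below 1/(2n+3):

```python
# Sketch: partial sums of the Leibniz series and the remainder bound 1/(2n+3).
from math import pi

def leibniz_partial(n):
    return sum((-1) ** k / (2 * k + 1) for k in range(n + 1))

for n in (10, 100, 1000):
    err = abs(leibniz_partial(n) - pi / 4)
    print(n, err, 1 / (2 * n + 3))    # the error is always below the bound
```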

Inverse trigonometric functions

Infinite series

Like the sine and cosine functions, the inverse trigonometric functions can be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative,  {\frac {1}{\sqrt {1-z^{2}}}}, as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative  {\frac {1}{1+z^{2}}} in a geometric series and applying the integral definition above (see Leibniz series).

{\displaystyle \arcsin(z)=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(2n-1)!!}{(2n)!!}}\cdot {\frac {z^{2n+1}}{2n+1}}=\sum _{n=0}^{\infty }{\frac {{\binom {2n}{n}}z^{2n+1}}{4^{n}(2n+1)}}\,;\qquad |z|\leq 1}
{\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq i,-i}

Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example,  {\displaystyle \arccos x=\pi /2-\arcsin x}, {\displaystyle \operatorname {arccsc} x=\arcsin(1/x)}, and so on. Another series is given by:

{\displaystyle 2\left(\arcsin {\frac {x}{2}}\right)^{2}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}} [7]

Leonhard Euler found a more efficient series for the arctangent, which is:

\arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}\,.

(Notice that the term in the sum for n = 0 is the empty product which is 1.)
Alternatively, this can be expressed:

\arctan z=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}\;{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}
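
To see why Euler's series is called more efficient, here is a rough comparison at z = 1 (my own sketch; the term counts are arbitrary): the plain Leibniz series converges like 1/n, while Euler's series gains roughly one binary digit per term.

```python
# Sketch: compare truncations of the Leibniz series and Euler's arctan series at z = 1.
from math import factorial, pi

def euler_arctan(z, terms):
    return sum(
        4 ** n * factorial(n) ** 2 / factorial(2 * n + 1)
        * z ** (2 * n + 1) / (1 + z * z) ** (n + 1)
        for n in range(terms)
    )

def leibniz_arctan(z, terms):
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

for terms in (5, 10, 20):
    print(terms, abs(leibniz_arctan(1, terms) - pi / 4), abs(euler_arctan(1, terms) - pi / 4))
```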

Alternating series test

In mathematical analysis, the alternating series test is the method used to prove that an alternating series with terms that decrease in absolute value is a convergent series. The test was used by Gottfried Leibniz and is sometimes known as Leibniz’s test, Leibniz’s rule, or the Leibniz criterion.

Formulation

A series of the form

{\displaystyle \sum _{n=0}^{\infty }(-1)^{n}a_{n}=a_{0}-a_{1}+a_{2}-a_{3}+\cdots \!}

where either all an are positive or all an are negative, is called an alternating series.

The alternating series test then says: if |a_n| decreases monotonically and \lim _{{n\to \infty }}a_{n}=0 then the alternating series converges.

Moreover, let L denote the sum of the series; then the partial sum

  S_k = \sum_{n=1}^k (-1)^{n-1} a_n\!

approximates L with error bounded by the next omitted term:

\left | S_k - L \right \vert \le \left | S_k - S_{k+1} \right \vert = a_{k+1}.\!
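
A tiny demonstration of this error bound on a different alternating series (my own sketch), the alternating harmonic series for ln 2: each partial sum misses the limit by no more than the first omitted term.

```python
# Sketch: Leibniz-criterion error bound for 1 - 1/2 + 1/3 - ... = ln 2.
from math import log

limit = log(2)
S, sign = 0.0, 1.0
for n in range(1, 11):
    S += sign / n
    sign = -sign
    print(n, S, abs(S - limit) <= 1.0 / (n + 1))   # always True
```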

 

Can one not then conclude that Leibniz already knew \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots ? Perhaps he also knew \frac{\pi}{8} = \frac{1}{1 \cdot 3} + \frac{1}{5 \cdot 7} + \cdots , and even that \frac{1}{2} - \frac{\pi}{8} = \frac{1}{3 \cdot 5} + \frac{1}{7 \cdot 9} + \cdots !
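
Both sums are easy to check numerically; here is a quick sketch (mine, added for illustration rather than as a historical claim):

```python
# Sketch: sum 1/((4k+1)(4k+3)) = pi/8 and sum 1/((4k+3)(4k+5)) = 1/2 - pi/8.
from math import pi

s1 = sum(1.0 / ((4 * k + 1) * (4 * k + 3)) for k in range(10 ** 6))
s2 = sum(1.0 / ((4 * k + 3) * (4 * k + 5)) for k in range(10 ** 6))
print(s1, pi / 8)
print(s2, 0.5 - pi / 8)
```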

Might one then infer that Euler had not only read Leibniz's writings but had also seen

Vieta’s formulas

In mathematics, Vieta’s formulas are formulas that relate the coefficients of a polynomial to sums and products of its roots. Named after François Viète (more commonly referred to by the Latinised form of his name, Franciscus Vieta), the formulas are used specifically in algebra.

The laws

Basic formulas

Any general polynomial of degree n

  P(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}\,

(with the coefficients being real or complex numbers and a_n ≠ 0) is known by the fundamental theorem of algebra to have n (not necessarily distinct) complex roots x_1, x_2, …, x_n. Vieta's formulas relate the polynomial's coefficients { a_k } to signed sums and products of its roots { x_i } as follows:

{\displaystyle {\begin{cases}x_{1}+x_{2}+\dots +x_{n-1}+x_{n}=-{\dfrac {a_{n-1}}{a_{n}}}\\(x_{1}x_{2}+x_{1}x_{3}+\cdots +x_{1}x_{n})+(x_{2}x_{3}+x_{2}x_{4}+\cdots +x_{2}x_{n})+\cdots +x_{n-1}x_{n}={\dfrac {a_{n-2}}{a_{n}}}\\{}\quad \vdots \\x_{1}x_{2}\dots x_{n}=(-1)^{n}{\dfrac {a_{0}}{a_{n}}}.\end{cases}}}

Equivalently stated, the (n − k)th coefficient a_{n−k} is related to a signed sum of all possible subproducts of roots, taken k at a time:

  \sum _{1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}=(-1)^{k}{\frac {a_{n-k}}{a_{n}}}

for k = 1, 2, …, n (where we wrote the indices ik in increasing order to ensure each subproduct of roots is used exactly once).

The left hand sides of Vieta’s formulas are the elementary symmetric functions of the roots.
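
As a concrete check (my own sketch; the roots 2, −3, 1/2 are an arbitrary example), one can expand the monic polynomial with given roots and compare each elementary symmetric function against (−1)^k a_{n−k}/a_n:

```python
# Sketch: verify Vieta's formulas on a cubic with known roots.
from itertools import combinations
from math import prod

roots = [2.0, -3.0, 0.5]

coeffs = [1.0]                       # coefficients of prod (x - r), descending powers
for r in roots:
    new = coeffs + [0.0]
    for i in range(1, len(new)):
        new[i] -= r * coeffs[i - 1]
    coeffs = new

n = len(roots)
for k in range(1, n + 1):
    e_k = sum(prod(c) for c in combinations(roots, k))    # elementary symmetric function
    vieta = (-1) ** k * coeffs[k] / coeffs[0]             # (-1)^k a_{n-k} / a_n
    print(k, e_k, vieta)                                  # the two columns agree
```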

……

Newton’s identities

In mathematics, Newton’s identities, also known as the Newton–Girard formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity.

Mathematical statement

Formulation in terms of symmetric polynomials

Let x1, …, xn be variables, denote for k ≥ 1 by pk(x1, …, xn) the k-th power sum:

  p_{k}(x_{1},\ldots ,x_{n})=\sum \nolimits _{i=1}^{n}x_{i}^{k}=x_{1}^{k}+\cdots +x_{n}^{k},

and for k ≥ 0 denote by ek(x1, …, xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so

  {\begin{aligned}e_{0}(x_{1},\ldots ,x_{n})&=1,\\e_{1}(x_{1},\ldots ,x_{n})&=x_{1}+x_{2}+\cdots +x_{n},\\e_{2}(x_{1},\ldots ,x_{n})&=\textstyle \sum _{1\leq i<j\leq n}x_{i}x_{j},\\e_{n}(x_{1},\ldots ,x_{n})&=x_{1}x_{2}\cdots x_{n},\\e_{k}(x_{1},\ldots ,x_{n})&=0,\quad {\text{for}}\ k>n.\\\end{aligned}}

Then Newton’s identities can be stated as

ke_{k}(x_{1},\ldots ,x_{n})=\sum _{i=1}^{k}(-1)^{i-1}e_{k-i}(x_{1},\ldots ,x_{n})p_{i}(x_{1},\ldots ,x_{n}),

valid for all n ≥ 1 and k ≥ 1.

Also, one has

0=\sum _{i=k-n}^{k}(-1)^{i-1}e_{k-i}(x_{1},\ldots ,x_{n})p_{i}(x_{1},\ldots ,x_{n}),

for all k > n ≥ 1.

Concretely, one gets for the first few values of k:

{\begin{aligned}e_{1}(x_{1},\ldots ,x_{n})&=p_{1}(x_{1},\ldots ,x_{n}),\\2e_{2}(x_{1},\ldots ,x_{n})&=e_{1}(x_{1},\ldots ,x_{n})p_{1}(x_{1},\ldots ,x_{n})-p_{2}(x_{1},\ldots ,x_{n}),\\3e_{3}(x_{1},\ldots ,x_{n})&=e_{2}(x_{1},\ldots ,x_{n})p_{1}(x_{1},\ldots ,x_{n})-e_{1}(x_{1},\ldots ,x_{n})p_{2}(x_{1},\ldots ,x_{n})+p_{3}(x_{1},\ldots ,x_{n}).\\\end{aligned}}

The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has

  {\begin{aligned}e_{1}&=p_{1},\\2e_{2}&=e_{1}p_{1}-p_{2},\\3e_{3}&=e_{2}p_{1}-e_{1}p_{2}+p_{3},\\4e_{4}&=e_{3}p_{1}-e_{2}p_{2}+e_{1}p_{3}-p_{4},\\\end{aligned}}

and so on; here the left-hand sides never become zero. These equations allow one to express the e_i recursively in terms of the p_k; to go in the other direction, one may rewrite them as

  {\begin{aligned}p_{1}&=e_{1},\\p_{2}&=e_{1}p_{1}-2e_{2},\\p_{3}&=e_{1}p_{2}-e_{2}p_{1}+3e_{3},\\p_{4}&=e_{1}p_{3}-e_{2}p_{2}+e_{3}p_{1}-4e_{4},\\&{}\ \ \vdots \end{aligned}}

In general, we have

p_{k}(x_{1},\ldots ,x_{n})=(-1)^{k-1}ke_{k}(x_{1},\ldots ,x_{n})+\sum _{i=1}^{k-1}(-1)^{k-1+i}e_{k-i}(x_{1},\ldots ,x_{n})p_{i}(x_{1},\ldots ,x_{n}),

valid for all n ≥ 1 and k ≥ 1.

Also, one has

p_{k}(x_{1},\ldots ,x_{n})=\sum _{i=k-n}^{k-1}(-1)^{k-1+i}e_{k-i}(x_{1},\ldots ,x_{n})p_{i}(x_{1},\ldots ,x_{n}),

for all k > n ≥ 1.

Application to the roots of a polynomial

The polynomial with roots xi may be expanded as

  \prod _{i=1}^{n}\left(x-x_{i}\right)=\sum _{k=0}^{n}(-1)^{n+k}e_{n-k}x^{k},

where the coefficients  e_{k}(x_{1},\ldots ,x_{n}) are the symmetric polynomials defined above. Given the power sums of the roots

  p_{k}(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}x_{i}^{k},

the coefficients of the polynomial with roots  x_{1},\ldots ,x_{n} may be expressed recursively in terms of the power sums as

  {\begin{aligned}e_{0}&=1,\\e_{1}&=p_{1},\\e_{2}&={\frac {1}{2}}(e_{1}p_{1}-p_{2}),\\e_{3}&={\frac {1}{3}}(e_{2}p_{1}-e_{1}p_{2}+p_{3}),\\e_{4}&={\frac {1}{4}}(e_{3}p_{1}-e_{2}p_{2}+e_{1}p_{3}-p_{4}),\\&{}\ \ \vdots \end{aligned}}

Formulating the polynomial in this way is useful when applying the method of Delves and Lyness[1] to find the zeros of an analytic function.
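
Here is a short sketch of that recursion in practice (my own illustration; the root set {1, 2, 3, 5} is arbitrary): the e_k recovered from the power sums match the elementary symmetric functions computed directly.

```python
# Sketch: Newton's identities, e_k = (1/k) * sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i.
from fractions import Fraction
from itertools import combinations
from math import prod

roots = [Fraction(1), Fraction(2), Fraction(3), Fraction(5)]
n = len(roots)

p = {k: sum(r ** k for r in roots) for k in range(1, n + 1)}   # power sums p_k
e = {0: Fraction(1)}
for k in range(1, n + 1):
    e[k] = Fraction(1, k) * sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1))

for k in range(1, n + 1):
    direct = sum(prod(c) for c in combinations(roots, k))       # e_k computed directly
    print(k, e[k], direct)
```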

 

Ah! Then, in a sudden flash of understanding, a stroke out of the blue, one writes:

Suppose y = \sin(x) , so that \frac{\sin(x)}{y} = 1 , and therefore 0 = 1 - \frac{\sin(x)}{y} = 1 - \frac{x}{y} + \frac{x^3}{3! y} - \frac{x^5}{5! y} + \cdots

If A is the smallest positive value satisfying \sin(A) = y , then A + 2 n \pi and -\pi - A + 2 n \pi (for every integer n ) are all roots. The factorization over the roots can then be written as

0 = \left( 1 - \frac{x}{A} \right)  \left( 1 - \frac{x}{- \pi - A} \right) \cdots

Linking two kinds of infinite algebraic expression in this way: what a strange line of thought!

Using Newton's identities one then derives the sums of the reciprocal powers of the roots:

\frac{1}{A} + \frac{1}{\pi -A} + \frac{1}{2 \pi +A} + \cdots - \frac{1}{\pi + A} - \frac{1}{2 \pi - A} - \frac{1}{3 \pi + A} - \cdots =  \frac{1}{y}

\frac{1}{A^2} + \frac{1}{{(\pi -A)}^2} + \frac{1}{{(2 \pi +A)}^2} + \cdots + \frac{1}{{(\pi + A)}^2} + \frac{1}{{(2 \pi - A)}^2} + \frac{1}{{(3 \pi + A)}^2} + \cdots =  \frac{1}{y^2}

\cdots
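
The first two of these identities are easy to test numerically (a sketch of my own; A = 0.7 is an arbitrary sample value): summing over symmetrically truncated families of roots of sin(x) = y reproduces 1/y and 1/y².

```python
# Sketch: reciprocal sums over the roots of sin(x) = y approach 1/y and 1/y^2.
from math import pi, sin

A = 0.7
y = sin(A)
N = 20000
roots = []
for k in range(-N, N + 1):
    roots.append(A + 2 * pi * k)          # one family of roots
    roots.append(pi - A + 2 * pi * k)     # the other family (equivalently -pi - A + 2*k*pi)
print(sum(1 / r for r in roots), 1 / y)
print(sum(1 / r ** 2 for r in roots), 1 / y ** 2)
```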

Is this not a genuine innovation? If, on the other hand, one merely takes

\sin(\frac{\pi}{2}) = 1 and computes

\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots

\frac{{\pi}^2}{8} = 1 + \frac{1}{3^2} + \frac{1}{5^2} + \frac{1}{7^2} + \cdots

……

and believes the essence has been captured, then one has probably not yet understood the method and the spirit of the mathematics!

 

 

 

 

 

 

 

 

 

 

Time Series: Generating Functions and Asymptotic Expansions: Bernoulli □○ (IX, Part 1)

Having skimmed John D. Blanton's

《Foundations Of Differential Calculus: 1st (first) Edition.》

I found that, unfortunately, it translates only the first part of Euler's

Institutiones calculi differentialis

that massive treatise. So here, drawing on the article that Morris Kline

Morris Kline

Morris Kline (May 1, 1908 – June 10, 1992) was a Professor of Mathematics, a writer on the history, philosophy, and teaching of mathematics, and also a popularizer of mathematical subjects.

published in Mathematics Magazine in 1983,

Euler and Infinite Series

, I will say a little about Euler's point of view of "formal manipulation", and take the opportunity to introduce readers to this well-known critic of mathematics education, Professor Kline:

Critique of mathematics education

Morris Kline was a protagonist in the curriculum reform in mathematics education that occurred in the second half of the twentieth century, a period including the programs of the new math. An article by Kline in 1956 in The Mathematics Teacher, the main journal of the National Council of Teachers of Mathematics, was titled “Mathematical texts and teachers: a tirade“. Calling out teachers blaming students for failures, he wrote “There is a student problem, but there are also three other factors which are responsible for the present state of mathematical learning, namely, the curricula, the texts, and the teachers.” The tirade touched a nerve, and changes started to happen. But then Kline switched to being a critic of some of the changes. In 1958 he wrote “Ancients versus moderns: a new battle of the books“. The article was accompanied with a rebuttal by Albert E. Meder Jr. of Rutgers University.[2] He says, “I find objectionable: first, vague generalizations, entirely undocumented, concerning views held by ‘modernists’, and second, the inferences drawn from what has not been said by the ‘modernists’.” By 1966 Kline proposed an eight-page high school plan.[3] The rebuttal for this article was by James H. Zant; it asserted that Kline had “a general lack of knowledge of what was going on in schools with reference to textbooks, teaching, and curriculum.” Zant criticized Kline’s writing for “vagueness, distortion of facts, undocumented statements and overgeneralization.”

In 1966[4] and 1970[5] Kline issued two further criticisms. In 1973 St. Martin’s Press contributed to the dialogue by publishing Kline’s critique, Why Johnny Can’t Add: the Failure of the New Math. Its opening chapter is a parody of instruction as students’ intuitions are challenged by the new jargon. The book recapitulates the debates from Mathematics Teacher, with Kline conceding some progress: He cites Howard Fehr of Columbia University who sought to unify the subject through its general concepts, sets, operations, mappings, relations, and structure in the Secondary School Mathematics Curriculum Improvement Study.

In 1977 Kline turned to undergraduate university education; he took on the academic mathematics establishment with his Why the Professor Can’t Teach: the dilemma of university education. Kline argues that onus to conduct research misdirects the scholarly method that characterizes good teaching. He lauds scholarship as expressed by expository writing or reviews of original work of others. For scholarship he expects critical attitudes to topics, materials and methods. Among the rebuttals are those by D.T. Finkbeiner, Harry Pollard, and Peter Hilton.[6] Pollard conceded, “The society in which learning is admired and pursued for its own sake has disappeared.” The Hilton review was more direct: Kline has “placed in the hand of enemies…[a] weapon”. Having started in 1956 as an agitator for change in mathematics education, he became a critic of some trends. Skilled expositor that he was, editors frequently felt his expressions were best tempered with rebuttal.

In considering what motivated Morris Kline to protest, consider Professor Meder’s opinion:[7]I am wondering whether in point of fact, Professor Kline really likes mathematics […] I think that he is at heart a physicist, or perhaps a ‘natural philosopher’, not a mathematician, and that the reason he does not like the proposals for orienting the secondary school college preparatory mathematics curriculum to the diverse needs of the twentieth century by making use of some concepts developed in mathematics in the last hundred years or so is not that this is bad mathematics, but that it minimizes the importance of physics.

It might appear so, as Kline recalls E. H. Moore’s recommendation to combine science and mathematics at the high school level.[8] But closer reading shows Kline calling mathematics a “part of man’s efforts to understand and master his world“, and he sees that role in a broad spectrum of sciences.

 

To make Professor Kline's article easier to read and to understand, I will do my best here to annotate and sort things out.

How, mathematically, should one weigh the controversy stirred up by evaluating

S = 1 - 1 + 1 - 1 + \cdots

? For instance:

(1) \ S = 0 = (1 - 1) + (1 - 1) + \cdots

(2) \ S = 1 = 1 - (1 - 1) - (1 - 1) - \cdots

Can the rules established for finite algebraic expressions, such as inserting parentheses, be extended to the infinite case \infty ? As far as we know today, rearranging and regrouping terms is permitted only under conditions:

Rearrangements

For any series, we can create a new series by rearranging the order of summation. A series is unconditionally convergent if any rearrangement creates a series with the same convergence as the original series. Absolutely convergent series are unconditionally convergent. But the Riemann series theorem states that conditionally convergent series can be rearranged to create arbitrary convergence.[1] The general principle is that addition of infinite sums is only commutative for absolutely convergent series.

For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.

As another example, we know that

  \ln(2)=\sum _{{n=1}}^{\infty }{\frac {(-1)^{{n+1}}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\cdots .


Mercator series

In mathematics, the Mercator series, or Newton–Mercator series, is the Taylor series for the natural logarithm:

{\displaystyle \ln(1+x)\;=\;x\,-\,{\frac {x^{2}}{2}}\,+\,{\frac {x^{3}}{3}}\,-\,{\frac {x^{4}}{4}}\,+\,\cdots .}

Written with a capital sigma, this is

{\displaystyle \ln(1+x)\;=\;\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}x^{n}.}

For −1 < x ≤ 1, the series converges to the natural logarithm of 1 + x.

───

But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for  {\frac {1}{2}}\ln(2):

{\begin{aligned}&{}\quad \left(1-{\frac {1}{2}}\right)-{\frac {1}{4}}+\left({\frac {1}{3}}-{\frac {1}{6}}\right)-{\frac {1}{8}}+\left({\frac {1}{5}}-{\frac {1}{10}}\right)-{\frac {1}{12}}+\cdots \\[8pt]&={\frac {1}{2}}-{\frac {1}{4}}+{\frac {1}{6}}-{\frac {1}{8}}+{\frac {1}{10}}-{\frac {1}{12}}+\cdots \\[8pt]&={\frac {1}{2}}\left(1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-{\frac {1}{6}}+\cdots \right)={\frac {1}{2}}\ln(2).\end{aligned}}
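
A short computational illustration of this rearrangement (my own sketch): taking one positive term followed by two negative terms, the partial sums drift toward (1/2) ln 2 rather than ln 2.

```python
# Sketch: rearranged alternating harmonic series, pattern (+odd, -even, -even).
from math import log

total = 0.0
pos, neg = 1, 2
for _ in range(200000):
    total += 1.0 / pos       # one positive term 1/(2k-1)
    pos += 2
    total -= 1.0 / neg       # followed by two negative terms with even denominators
    neg += 2
    total -= 1.0 / neg
    neg += 2
print(total, 0.5 * log(2))
```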

 

If one asks whether (1) and (2), each of which inserts infinitely many parentheses, and

(3) \ S = 1 - (1 - 1 + 1 - \cdots) = 1 - S

use the same method of inserting parentheses, there is surely room for doubt!

Present-day mathematics takes Euler's idea of series transformation seriously:

Ordinary generating function

The transform connects the generating functions associated with the series. For the ordinary generating function, let

  f(x)=\sum_{n=0}^\infty a_n x^n

and

  g(x)=\sum_{n=0}^\infty s_n x^n

then

  {\displaystyle g(x)=(Tf)(x)={\frac {1}{1-x}}f\left(-{\frac {x}{1-x}}\right).}

Euler transform

The relationship between the ordinary generating functions is sometimes called the Euler transform. It commonly makes its appearance in one of two different ways. In one form, it is used to accelerate the convergence of an alternating series. That is, one has the identity

  \sum_{n=0}^\infty (-1)^n a_n = \sum_{n=0}^\infty (-1)^n \frac {\Delta^n a_0} {2^{n+1}}

which is obtained by substituting x=1/2 into the last formula above. The terms on the right hand side typically become much smaller, much more rapidly, thus allowing rapid numerical summation.
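
Here is a minimal sketch of that acceleration (mine, for illustration) applied to the slowly convergent series for ln 2: ten transformed terms already do far better than ten direct terms.

```python
# Sketch: Euler transform  sum (-1)^n a_n = sum (-1)^n Delta^n a_0 / 2^(n+1),
# applied to a_n = 1/(n+1), whose alternating sum is ln 2.
from math import comb, log

a = [1.0 / (n + 1) for n in range(40)]

def forward_difference(seq, n):
    """n-th forward difference of seq evaluated at index 0."""
    return sum((-1) ** (n - j) * comb(n, j) * seq[j] for j in range(n + 1))

direct = sum((-1) ** n * a[n] for n in range(10))
accelerated = sum((-1) ** n * forward_difference(a, n) / 2 ** (n + 1) for n in range(10))
print(direct, accelerated, log(2))
```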

 

and it generalizes Euler's notion of summability:

Euler summation

In the mathematics of convergent and divergent series, Euler summation is a summability method. That is, it is a method for assigning a value to a series, different from the conventional method of taking limits of partial sums. Given a series ∑an, if its Euler transform converges to a sum, then that sum is called the Euler sum of the original series. As well as being used to define values for divergent series, Euler summation can be used to speed the convergence of series.

Euler summation can be generalized into a family of methods denoted (E, q), where q ≥ 0. The (E, 1) sum is the ordinary Euler sum. All of these methods are strictly weaker than Borel summation; for q > 0 they are incomparable with Abel summation.

Definition

For some value y we may define the Euler sum (if it converges for that value of y) corresponding to a particular formal summation as:

{\displaystyle _{E_{y}}\,\sum _{j=0}^{\infty }a_{j}:=\sum _{i=0}^{\infty }{\frac {1}{(1+y)^{i+1}}}\sum _{j=0}^{i}{\binom {i}{j}}y^{j+1}a_{j}.}

If the formal sum actually converges, an Euler sum will equal it. But Euler summation is particularly used to accelerate the convergence of alternating series and sometimes it can give a useful meaning to divergent sums.

To justify the approach, notice that when the order of summation is interchanged, Euler's summation reduces to the initial series, because

  {\displaystyle y^{j+1}\sum _{i=j}^{\infty }{\binom {i}{j}}{\frac {1}{(1+y)^{i+1}}}=1.}

This is because

\sum \limits_{n=k}^{\infty} \left( \begin{array}{ccc} n \\ k \end{array} \right) y^n = \frac{y^k}{{(1-y)}^{k+1}}

Setting z = \frac{1}{1+y} , so that y = \frac{1-z}{z} ,

the expression

y^{j+1} \sum \limits_{i=j}^{\infty} \left( \begin{array}{ccc} i \\ j \end{array} \right) \frac{1}{{(1+y)}^{i+1}}

can be rewritten as

= { \left( \frac{1-z}{z} \right) }^ {j+1} \sum \limits_{i=j}^{\infty} \left( \begin{array}{ccc} i \\ j \end{array} \right) z^{i+1}

= { \left( \frac{1-z}{z} \right) }^ {j+1} \cdot z \cdot \frac{z^j}{{(1-z)}^{j+1}}

=1
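
To make the definition concrete, here is a small sketch of my own of the partial (E, y) sums for the divergent series 1 − 1 + 1 − 1 + ⋯; for any y > 0 the value comes out as 1/2.

```python
# Sketch: partial (E, y) Euler sums of 1 - 1 + 1 - 1 + ...  (a_j = (-1)^j).
from fractions import Fraction
from math import comb

def euler_sum(a, y, imax=60):
    """Truncation of the (E, y) sum as defined above."""
    y = Fraction(y)
    total = Fraction(0)
    for i in range(imax):
        inner = sum(comb(i, j) * y ** (j + 1) * a(j) for j in range(i + 1))
        total += inner / (1 + y) ** (i + 1)
    return total

print(float(euler_sum(lambda j: (-1) ** j, 1)))   # 0.5 exactly (only the i = 0 term survives)
print(float(euler_sum(lambda j: (-1) ** j, 2)))   # 0.5 up to a tiny geometric tail
```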

Generating functions

Ordinary generating functions

For a fixed n, the ordinary generating function of the sequence  {n \choose 0},\;{n \choose 1},\;{n \choose 2},\;\ldots is:

  {\displaystyle \sum _{k=0}^{\infty }{n \choose k}x^{k}=(1+x)^{n}.}

For a fixed k, the ordinary generating function of the sequence  {0 \choose k},\;{1 \choose k},\;{2 \choose k},\;\ldots is:

  \sum _{n=k}^{\infty }{n \choose k}y^{n}={\frac {y^{k}}{(1-y)^{k+1}}}.

The bivariate generating function of the binomial coefficients is:

  {\displaystyle \sum _{n=0}^{\infty }\sum _{k=0}^{n}{n \choose k}x^{k}y^{n}={\frac {1}{1-y-xy}}.}

Another bivariate generating function of the binomial coefficients, which is symmetric, is:

{\displaystyle \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }{n+k \choose k}x^{k}y^{n}={\frac {1}{1-x-y}}.}

Exponential generating function

A symmetric exponential bivariate generating function of the binomial coefficients is:

{\displaystyle \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }{n+k \choose k}{\frac {x^{k}y^{n}}{(n+k)!}}=e^{x+y}.}

This method itself cannot be improved by iterated application, as

  _{{E_{{y_{1}}}}}{}_{{E_{{y_{2}}}}}\sum =\,_{{E_{{{\frac {y_{1}y_{2}}{1+y_{1}+y_{2}}}}}}}\sum .

 

In the end, one must not forget that there remain many summation methods whose viewpoints are not necessarily compatible with one another:

Divergent series

A divergent series is a series that does not converge (in Cauchy's sense), such as  1 + 2 + 3 + 4 + \cdots  and  1 - 1 + 1 - 1 + \cdots .

In actual mathematical research, and in applications to physics and other disciplines, one often needs to operate on divergent series, so mathematicians have defined various kinds of "sum" for them, such as the Cesàro sum, the Abel sum, and the Euler sum. For convergent series these sums reproduce the ordinary value, while for certain divergent series the generalized sum still exists.

Summation methods

Cesàro sum

For a series  \sum_{n=1}^{\infty}a_n , let  s_n = a_1 + \cdots + a_n  be its partial sums and  t_n = \frac{s_1 + \cdots + s_n}{n} . If  t_n \rightarrow s , the Cesàro sum of the series is said to be  s .

Abel sum

If the power series  \sum_{n=0}^{\infty}a_n x^n  converges for  |x|<1  and  \lim_{x \rightarrow 1^- }\sum_{n=0}^{\infty}a_n x^n = s , then the Abel sum of  \sum_{n=0}^{\infty}a_n  is s.

Ramanujan summation

If the exponential generating function  \sum_{n=1}^{\infty}a_n e^{-nz}  has a nonempty region of convergence and can be analytically continued to a meromorphic function on the complex plane, then the constant (zeroth-order) coefficient of its Laurent series equals the Ramanujan sum of the series  \sum_{n=1}^{\infty}a_n .[1]

For example, we have the following Ramanujan sums:

  1 + 2 + 3 + 4 + \cdots = -\frac{1}{12}(\Re).
  1 + 1 + 1 + 1 + \cdots = -\frac{1}{2}(\Re).
  1 - 1 + 1 - 1 + \cdots = \frac{1}{2}(\Re).
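
The first two procedures are easy to imitate numerically; the sketch below (my own, with arbitrary cut-offs) applies Cesàro averaging and the Abel limit to 1 − 1 + 1 − 1 + ⋯ and gets roughly 1/2 in each case.

```python
# Sketch: Cesaro and Abel summation of 1 - 1 + 1 - 1 + ...
N = 100000
a = [(-1) ** n for n in range(N)]

# Cesaro: average of the partial sums s_n
partials, s = [], 0
for term in a:
    s += term
    partials.append(s)
print(sum(partials) / len(partials))                  # about 0.5

# Abel: sum a_n x^n for x slightly below 1
x = 0.999
print(sum(an * x ** n for n, an in enumerate(a)))     # about 1/(1 + x) = 0.5002...
```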

 

So when Euler held that \infty < -1 , with infinity \infty separating the positive from the negative numbers just as zero 0 does, was he being reasonable, or unreasonable?

 

 

 

 

 

 

 

 

 

Time Series: Generating Functions and Asymptotic Expansions: Bernoulli □○ (VIII)

派生碼訊

Zi · the Rat

Wang Zhihuan, "Climbing Stork Tower"

The white sun sinks behind the mountains; the Yellow River flows on into the sea.
To see a thousand li further, climb one more storey of the tower.

黑水智 (Black-Water Wisdom): Heaven and earth are like a bellows; switches govern opening and closing, and in them the natures of yin ䷁ and yang ䷀ appear. The great principles of Fuxi's Changes, the logical universe of the gu-xu (solitary-and-void) sayer, the realm of Boolean algebra, logic circuits, and digital design.

In Pai, in an unknown year and month, there was one

gu-xu sayer, who said:

That things exist or do not exist is not a matter of true and false. "If you can renew yourself one day, renew yourself every day, and again each day." True and false are judgments about things, and a judgment is merely apt or not apt. Hence there are in the world the gu-xu (the solitary and the void), and there is a doctrine of gu-xu. What does gu-xu mean? "A gu-xu B" says that the two cannot both be wholly true; one only looks for which is solitary and which is void. "Heaven gu-xu earth" removes their above and below; "good gu-xu evil": how could good and evil both be true at once? Thus the doctrine of gu-xu is complete!

Its method says: "thing gu-xu thing" states the negation of the thing; the gu-xu of a gu-xu is the negation of that gu-xu. To make A and B hold together is the negation of "A gu-xu B"; to force an "or" is the gu-xu of not-A with not-B. And to say "from this follows that", however firmly it is asserted, is nothing other than the gu-xu of "this" with the gu-xu of "that"; without it there is no way to settle the doubt!

If we follow what gu-xu, the Sheffer stroke, says, then:

not P : \sim P = P \mid P

P and Q : P \cdot Q = P \wedge Q = (P \mid Q) \mid (P \mid Q)

P or Q : P + Q = P \vee Q = (P \mid P) \mid (Q \mid Q)

P implies Q : P \rightarrow Q = P \mid (Q \mid Q)

(excerpted from “M♪o 之學習筆記本《子》開關︰【黑水智】數位之源”)
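
Those four identities are just the NAND (Sheffer-stroke) definitions of the Boolean connectives; a quick truth-table check (my own Python sketch) confirms them:

```python
# Sketch: exhaustive truth-table check of the Sheffer-stroke identities above.
from itertools import product

def nand(p, q):
    return not (p and q)

for p, q in product([False, True], repeat=2):
    assert (not p) == nand(p, p)
    assert (p and q) == nand(nand(p, q), nand(p, q))
    assert (p or q) == nand(nand(p, p), nand(q, q))
    assert (p <= q) == nand(p, nand(q, q))    # material implication p -> q
print("all four NAND identities hold")
```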

 

It is not that logic is too difficult and therefore hard to master. Sometimes the symbols are merely unfamiliar and the phrasing far from colloquial. Thinking, grown used to habit, settles into fixed patterns, so that certain details are overlooked and never quite joined up. Take two infinite series

\sum \limits_{n=0}^{\infty} a_n and \sum \limits_{m=0}^{\infty} b_m .

Their product

(a_0 + a_1 + a_2 + \cdots + a_n + \cdots ) \cdot (b_0 + b_1 + b_2 + \cdots + b_m + \cdots )

is, admittedly, a generalization of expanding the multiplication of finite sums. But if one chooses to express it as the

Cauchy product

In mathematics, the Cauchy product, named after the French mathematician Augustin-Louis Cauchy, is the discrete convolution of two sequences  a_{n},b_{n} :

  c_{n}=\sum _{{k=0}}^{n}a_{k}b_{{n-k}}.

This product of sequences can be viewed as an element of the semigroup ring  R[\mathbb{N} ]  of the natural numbers.

Series

A particularly important case is that of two strictly formal series (convergence is not required) with terms  a_{n},b_{n} :

  \sum _{{n=0}}^{\infty }a_{n},\qquad \sum _{{n=0}}^{\infty }b_{n},

In general, for real or complex terms, the Cauchy product is defined by the following discrete convolution:

  \left(\sum _{{n=0}}^{\infty }a_{n}\right)\cdot \left(\sum _{{n=0}}^{\infty }b_{n}\right)=\sum _{{n=0}}^{\infty }c_{n},

where  c_{n}=\sum _{{k=0}}^{n}a_{k}b_{{n-k}},\,n=0,1,2,\ldots

Here "formal" means that the series are manipulated without regard to convergence; see formal power series.

One hopes that, by analogy with the finite sums arising from an actual convolution of the two sequences, the infinite series

  \sum _{{n=0}}^{\infty }c_{n}

equals the product

  \left(\sum _{{n=0}}^{\infty }a_{n}\right)\left(\sum _{{n=0}}^{\infty }b_{n}\right)

just as one multiplies two sums with finitely many terms.

In sufficiently well-behaved cases the identity above holds. More importantly, even when the two infinite series fail to converge, their Cauchy product may still exist.

 

One must, however, keep the meaning of "convolution" firmly in mind; otherwise there is a risk of misusing mathematical language.

Definitions

The Cauchy product may apply to infinite series[1][2][3][4][5][6][7][8][9][10][11] or power series.[12][13] When people apply it to finite sequences[14] or finite series, it is by abuse of language: they actually refer to discrete convolution.

Convergence issues are discussed in the next section.

Discrete convolution

For complex-valued functions f, g defined on the set Z of integers, the discrete convolution of f and g is given by:[9]

  {\displaystyle {\begin{aligned}(f*g)[n]&\ {\stackrel {\mathrm {def} }{=}}\ \sum _{m=-\infty }^{\infty }f[m]\,g[n-m]\\&=\sum _{m=-\infty }^{\infty }f[n-m]\,g[m].\end{aligned}}} (commutativity)

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences.

Thus when g has finite support in the set  \{-M,-M+1,\dots ,M-1,M\} (representing, for instance, a finite impulse response), a finite summation may be used:[10]

  (f*g)[n]=\sum _{m=-M}^{M}f[n-m]g[m].

───

 

In fact, in the multiplication of finite sums

(a_0 + a_1 + a_2 + \cdots + a_n ) \cdot (b_0 + b_1 + b_2 + \cdots + b_m)

no order of the terms is prescribed, and the so-called n, m are merely indices; what they are named does not matter. This is fundamentally different from what the convolution of finite sequences means.

It is therefore more intuitive to borrow the picture of power series.

Cauchy product of two infinite series

Let  \textstyle \sum_{i=0}^\infty a_i and  \textstyle \sum_{j=0}^\infty b_j be two infinite series with complex terms. The Cauchy product of these two infinite series is defined by a discrete convolution as follows:

  \left(\sum_{i=0}^\infty a_i\right) \cdot \left(\sum_{j=0}^\infty b_j\right) = \sum_{k=0}^\infty c_k     where     c_k=\sum_{l=0}^k a_l b_{k-l}.

Cauchy product of two power series

Consider the following two power series

  \sum_{i=0}^\infty a_i x^i     and      \sum_{j=0}^\infty b_j x^j

with complex coefficients  \{a_{i}\} and  \{b_j\}. The Cauchy product of these two power series is defined by a discrete convolution as follows:

  \left(\sum_{i=0}^\infty a_i x^i\right) \cdot \left(\sum_{j=0}^\infty b_j x^j\right) = \sum_{k=0}^\infty c_k x^k     where     c_k=\sum_{l=0}^k a_l b_{k-l}.
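
As a concrete sketch (my own; the exponential series is just a convenient test case), multiplying the power series of e^x by itself via the convolution c_k = Σ a_l b_{k−l} reproduces the coefficients 2^k/k! of e^{2x}:

```python
# Sketch: Cauchy product of the series of e^x with itself vs the series of e^{2x}.
from math import factorial

K = 10
a = [1 / factorial(i) for i in range(K)]                                   # e^x
c = [sum(a[l] * a[k - l] for l in range(k + 1)) for k in range(K)]         # Cauchy product
expected = [2 ** k / factorial(k) for k in range(K)]                       # e^{2x}
print(c)
print(expected)
```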

 

Seen this way, the meaning of the Cauchy product becomes clear:

\left( \sum \limits_{i=0}^{\infty} a_i x^i \right) \left( \sum \limits_{j=0}^{\infty} b_j x^j \right) = \sum \limits_{k=0}^{\infty} c_k  x^k

This says that the coefficient c_k of the equal power k = l + (k-l) of x^k receives the contribution a_l from \left( \sum \limits_{i=0}^{\infty} a_i x^i \right) and the contribution b_{k-l} from \left( \sum \limits_{j=0}^{\infty} b_j x^j \right) , giving

= \sum \limits_{k=0}^{\infty} \left( \sum \limits_{l=0}^{k} a_l b_{k-l} \right) x^k

Of course one may just as well write k=(k-l)+l and recast it as

= \sum \limits_{k=0}^{\infty} \left( \sum \limits_{l=0}^{k} a_{k-l} b_l \right) x^k .

When inverting \sum \limits_{k=0}^{\infty} c_k  x^k = \left( \sum \limits_{i=0}^{\infty} a_i x^i \right) \left( \sum \limits_{j=0}^{\infty} b_j x^j \right) , take care with the index arithmetic.

The Bernoulli polynomials

B_n (x) = \sum \limits_{k=0}^{n} \left( \begin{array}{ccc} n \\ k \end{array} \right) b_{n-k} x^k

then have a generating function that can be derived as follows:

\sum \limits_{n=0}^{\infty} B_n (x) \frac{t^n}{n!}

= \sum \limits_{n=0}^{\infty}  \left( \sum \limits_{k=0}^{n} \left( \begin{array}{ccc} n \\ k \end{array} \right) b_{n-k} x^k \right) \frac{t^n}{n!}

= \sum \limits_{n=0}^{\infty} \left( \sum \limits_{k=0}^{n} \frac{n!}{(n-k)! k!} b_{n-k} t^{n-k} x^k \frac{t^k}{n!} \right)

= \sum \limits_{n=0}^{\infty} \left( \sum \limits_{k=0}^{n} b_{n-k} \frac{t^{n-k}}{(n-k)!} \cdot \frac{{(xt)}^k}{k!} \right)

= \left( \sum \limits_{i=0}^{\infty} b_i \frac{t^i}{i!} \right) \cdot \left( \sum \limits_{j=0}^{\infty} \frac{{(xt)}^j}{j!} \right)

= \frac{t}{e^t - 1} \cdot e^{xt}
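
The same Cauchy-product bookkeeping can be carried out numerically (a sketch of my own; the sample values x = 0.7, t = 0.3 are arbitrary): invert the series (e^t − 1)/t term by term to get b_n/n!, convolve with the coefficients of e^{xt}, and compare with the closed form t e^{xt}/(e^t − 1).

```python
# Sketch: series inversion for t/(e^t - 1), then a Cauchy product with e^{xt}.
from fractions import Fraction
from math import exp, factorial

N = 25
h = [Fraction(1, factorial(m + 1)) for m in range(N)]     # coefficients of (e^t - 1)/t

g = [Fraction(1)]                                          # g = 1/h, i.e. t/(e^t - 1)
for n in range(1, N):
    g.append(-sum(h[m] * g[n - m] for m in range(1, n + 1)))

print([g[n] * factorial(n) for n in range(7)])             # b_0..b_6 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42

x, t = 0.7, 0.3
c = [sum(float(g[l]) * x ** (n - l) / factorial(n - l) for l in range(n + 1)) for n in range(N)]
print(sum(c[n] * t ** n for n in range(N)), t * exp(x * t) / (exp(t) - 1))
```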

 

 

 

 

 

 

 

 

 

Time Series: Generating Functions and Asymptotic Expansions: Bernoulli □○ (VII)

Inside the formula for sums of equal powers,

\sum \limits_{k = 1}^n k^p = \frac 1 {p + 1} \sum \limits_{i = 0}^p \left({-1}\right)^i \binom {p + 1} i B_i n^{p + 1 - i}

= \frac{1}{p+1} \left( \binom {p + 1}0 B_0 n^{p+1}  - \binom {p + 1} 1 B_1 n^p + \cdots + \binom {p + 1} {2k} B_{2k}  n^{p+1-2k}  + \cdots \right)

one can hear the Bernoulli polynomials being called on to step forward!
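
Before quoting the reference material, here is a small sketch of my own that checks the power-sum formula against brute force for a few exponents; the Bernoulli numbers are generated by the usual recurrence with B_1 = −1/2, the convention used throughout these notes.

```python
# Sketch: Faulhaber-type formula sum_{k=1}^{n} k^p versus a direct sum.
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n))
    return B

def power_sum(n, p):
    """Right-hand side of the formula above."""
    B = bernoulli_numbers(p)
    return Fraction(1, p + 1) * sum(
        (-1) ** i * comb(p + 1, i) * B[i] * Fraction(n) ** (p + 1 - i)
        for i in range(p + 1)
    )

n = 50
for p in (1, 2, 3, 10):
    print(p, power_sum(n, p), sum(k ** p for k in range(1, n + 1)))
```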

Bernoulli polynomials

In mathematics, the Bernoulli polynomials occur in the study of many special functions and in particular the Riemann zeta function and the Hurwitz zeta function. This is in large part because they are an Appell sequence, i.e. a Sheffer sequence for the ordinary derivative operator. Unlike orthogonal polynomials, the Bernoulli polynomials are remarkable in that the number of crossings of the x-axis in the unit interval does not go up as the degree of the polynomials goes up. In the limit of large degree, the Bernoulli polynomials, appropriately scaled, approach the sine and cosine functions.

This article also discusses the Bernoulli polynomials and the related Euler polynomials, and the Bernoulli and Euler numbers.

Bernoulli polynomials

Representations

The Bernoulli polynomials Bn admit a variety of different representations. Which among them should be taken to be the definition may depend on one’s purposes.

Explicit formula

B_n(x) = \sum_{k=0}^n {n \choose k} b_{n-k} x^k,

for n ≥ 0, where bk are the Bernoulli numbers.

Generating functions

The generating function for the Bernoulli polynomials is

\frac{t e^{xt}}{e^t-1}= \sum_{n=0}^\infty B_n(x) \frac{t^n}{n!}.

The generating function for the Euler polynomials is

\frac{2 e^{xt}}{e^t+1}= \sum_{n=0}^\infty E_n(x) \frac{t^n}{n!}.

Representation by a differential operator

The Bernoulli polynomials are also given by

B_n(x)={D \over e^D -1} x^n

where D = d/dx is differentiation with respect to x and the fraction is expanded as a formal power series. It follows that

\int _a^x B_n (u) ~du = \frac{B_{n+1}(x) - B_{n+1}(a)}{n+1} ~.

cf. integrals below.

Representation by an integral operator

The Bernoulli polynomials are the unique polynomials determined by

  \int_x^{x+1} B_n(u)\,du = x^n.

The integral transform

  (Tf)(x) = \int_x^{x+1} f(u)\,du

on polynomials f, simply amounts to

{\displaystyle (Tf)(x)={\frac {e^{D}-1}{D}}f(x)=\sum _{n=0}^{\infty }{\frac {D^{n}}{(n+1)!}}f(x).}

This can be used to produce the inversion formulae below.

Explicit expressions for low degrees

The first few Bernoulli polynomials are:

B_0(x) = 1

B_1(x) = x - \frac{1}{2}

B_2(x) = x^2 - x + \frac{1}{6}

B_3(x) = x^3 - \frac{3}{2} x^2 + \frac{1}{2} x

B_4(x) = x^4 - 2 x^3 + x^2 - \frac{1}{30}

 

Wishing to look into the historical facts, I once thought of reading Euler's great treatise itself, but gave up because I cannot read the language:

Institutiones calculi differentialis

Institutiones calculi differentialis (Foundations of differential calculus) is a mathematical work written in 1748 by Leonhard Euler and published in 1755 that lays the groundwork for the differential calculus. It consists of a single volume containing two internal books; there are 9 chapters in book I, and 18 in book II.

W. W. Rouse Ball (1888) writes that “this is the first textbook on the differential calculus which has any claim to be both complete and accurate, and it may be said that all modern treatises on the subject are based on it.”

[Figure: title page of the Institutiones calculi differentialis]

 

Later I heard that an English translation has long existed; not having read it, I could not speak about it, and only now do I learn that the book is available online!

 

I shall wait until I have read it before saying more about the history ☆