STEM Jottings: Classical Mechanics: Rotors [V] — Circuitry (4) [Capacitance] IV · Laplace · A

A blossoming Hylocereus ("sky-measuring ruler" cactus) flower, photographed in Kona, Hawaii County

 

The heavens have Cepheid variables for measuring rods; the earth grows the drought-hardy Hylocereus, the "sky-measuring" cactus.

Humans invent symbols to order the infinite by rank:

Savor a few drops of history:

Weigh the basic scales with care:

\cdots \prec \ln(\ln(x)) \prec x^{\frac{1}{n}} \prec x \prec x^n \prec e^x \prec e^{e^x} \prec \cdots

Here n is a natural number greater than one.

Exponentials and logarithms head the whole scheme ☆

But where does the factorial sit?

Rate of growth and approximations for large n

 Plot of the natural logarithm of the factorial

As n grows, the factorial n! increases faster than all polynomials and exponential functions (but slower than double exponential functions) in n.

Most approximations for n! are based on approximating its natural logarithm

\displaystyle \ln n!=\sum _{x=1}^{n}\ln x .

The graph of the function f(n) = ln n! is shown in the figure on the right. It looks approximately linear for all reasonable values of n, but this intuition is false. We get one of the simplest approximations for ln n! by bounding the sum with an integral from above and below as follows:

\displaystyle \int _{1}^{n}\ln x\,dx\leq \sum _{x=1}^{n}\ln x\leq \int _{0}^{n}\ln(x+1)\,dx ,

which gives us the estimate

\displaystyle n\ln \left({\frac {n}{e}}\right)+1\leq \ln n!\leq (n+1)\ln \left({\frac {n+1}{e}}\right)+1.
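These bounds are easy to check numerically; a minimal sketch in Python, using `math.lgamma` to obtain ln n! without overflow:

```python
import math

# Verify  n*ln(n/e) + 1  <=  ln(n!)  <=  (n+1)*ln((n+1)/e) + 1
# for a few values of n; math.lgamma(n + 1) computes ln(n!).
for n in (1, 5, 10, 100, 1000):
    log_fact = math.lgamma(n + 1)
    lower = n * math.log(n / math.e) + 1
    upper = (n + 1) * math.log((n + 1) / math.e) + 1
    assert lower <= log_fact <= upper
```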

Hence \displaystyle \ln n!\sim n\ln n (see Big O notation). This result plays a key role in the analysis of the computational complexity of sorting algorithms (see comparison sort). From the bounds on ln n! deduced above we get that

e\left({\frac {n}{e}}\right)^{n}\leq n!\leq e\left({\frac {n+1}{e}}\right)^{n+1}.

It is sometimes practical to use weaker but simpler estimates. Using the above formula it is easily shown that for all n we have  (n/3)^{n}<n!, and for all n ≥ 6 we have  n!<(n/2)^{n}.
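These weaker bounds are also easy to verify by direct computation; a quick sanity-check sketch:

```python
import math

# (n/3)^n < n!  for all n >= 1,  and  n! < (n/2)^n  for all n >= 6.
for n in range(1, 40):
    assert (n / 3) ** n < math.factorial(n)
    if n >= 6:
        assert math.factorial(n) < (n / 2) ** n
```

Note that n = 6 is tight for the upper bound: 6! = 720 while 3^6 = 729.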

For large n we get a better estimate for the number n! using Stirling’s approximation:

  n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.

This in fact comes from an asymptotic series for the logarithm, and n factorial lies between this and the next approximation:

  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.
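The sandwich can be confirmed numerically in log space (to avoid overflow for large n); a minimal sketch:

```python
import math

# sqrt(2*pi*n)*(n/e)^n  <  n!  <  sqrt(2*pi*n)*(n/e)^n * e^(1/(12n)),
# compared via logarithms so that large n does not overflow.
for n in (1, 10, 100, 1000, 10000):
    log_fact = math.lgamma(n + 1)
    log_stirling = 0.5 * math.log(2 * math.pi * n) + n * math.log(n / math.e)
    assert log_stirling < log_fact < log_stirling + 1 / (12 * n)
```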

Another approximation for ln n! is given by Srinivasa Ramanujan (Ramanujan 1988)

\displaystyle \ln n!\approx n\ln n-n+{\frac {\ln(n(1+4n(1+2n)))}{6}}+{\frac {\ln(\pi )}{2}}

or

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}[1+1/(2n)+1/(8n^{2})]^{1/6}.

Both this and  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}} give a relative error on the order of 1/n^3, but Ramanujan’s is about four times more accurate. However, if we use two correction terms (as in Ramanujan’s approximation) the relative error will be of order 1/n^5:

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\exp \left({{\frac {1}{12n}}-{\frac {1}{360n^{3}}}}\right)
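The claimed error orders can be observed directly. The sketch below compares the one-term and two-term corrections at n = 50 (an arbitrary choice):

```python
import math

n = 50
exact = math.lgamma(n + 1)  # ln(n!)
base = 0.5 * math.log(2 * math.pi * n) + n * math.log(n / math.e)

one_term = base + 1 / (12 * n)                       # relative error ~ 1/n^3
two_terms = base + 1 / (12 * n) - 1 / (360 * n**3)   # relative error ~ 1/n^5

# Relative error of each approximation to n! itself.
err1 = abs(math.exp(one_term - exact) - 1)
err2 = abs(math.exp(two_terms - exact) - 1)
assert err2 < err1 < 1e-6
```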

【Note: x^x = e^{x \ln(x)}  \prec e^{x^2}】

 

The asymptotic principle now stands revealed!

One figure:

 

One table:

Orders of common functions

Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first.

NOTATION | NAME | EXAMPLE
O(1) | constant | Determining if a binary number is even or odd; calculating (-1)^{n}; using a constant-size lookup table
O(\log \log n) | double logarithmic | Number of comparisons spent finding an item using interpolation search in a sorted array of uniformly distributed values
O(\log n) | logarithmic | Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap
O((\log n)^{c}), c > 1 | polylogarithmic | Matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine
O(n^{c}), 0 < c < 1 | fractional power | Searching in a k-d tree
O(n) | linear | Finding an item in an unsorted list or in an unsorted array; adding two n-bit integers by ripple carry
O(n\log ^{*}n) | n log-star n | Performing triangulation of a simple polygon using Seidel’s algorithm, or the union–find algorithm. Note that \log ^{*}(n)={\begin{cases}0,&{\text{if }}n\leq 1\\1+\log ^{*}(\log n),&{\text{if }}n>1\end{cases}}
O(n\log n)=O(\log n!) | linearithmic, loglinear, or quasilinear | Performing a fast Fourier transform; fastest possible comparison sort; heapsort and merge sort
O(n^{2}) | quadratic | Multiplying two n-digit numbers by a simple algorithm; simple sorting algorithms, such as bubble sort, selection sort and insertion sort; bound on some usually faster sorting algorithms such as quicksort, Shellsort, and tree sort
O(n^{c}) | polynomial or algebraic | Tree-adjoining grammar parsing; maximum matching for bipartite graphs; finding the determinant with LU decomposition
L_{n}[\alpha ,c]=e^{(c+o(1))(\ln n)^{\alpha }(\ln \ln n)^{1-\alpha }}, 0 < \alpha < 1 | L-notation or sub-exponential | Factoring a number using the quadratic sieve or number field sieve
O(c^{n}), c > 1 | exponential | Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
O(n!) | factorial | Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with Laplace expansion; enumerating all partitions of a set

The statement f(n)=O(n!) is sometimes weakened to f(n)=O\left(n^{n}\right) to derive simpler formulas for asymptotic complexity. For any k>0 and c>0, O(n^{c}(\log n)^{k}) is a subset of O(n^{c+\varepsilon }) for any \varepsilon >0, so it may be treated as a polynomial of slightly larger degree.
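To see the subset claim concretely, here is a small sketch with the arbitrary choices c = 1, k = 3, ε = 0.5, showing the ratio n^c (log n)^k / n^{c+ε} shrinking toward 0:

```python
import math

# n^c (log n)^k / n^(c+eps) = (log n)^3 / sqrt(n) for c = 1, k = 3, eps = 0.5;
# the ratio tends to 0, so O(n^c (log n)^k) is contained in O(n^(c+eps)).
ratios = [(math.log(n) ** 3) / math.sqrt(n) for n in (10**3, 10**6, 10**9, 10**12)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing
assert ratios[-1] < 0.05
```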

 

The meaning shows itself!

─── 《Time Series: Generating Functions · Asymptotic Expansion: Orders of Infinity IV

 

Because of the Laplace transform's importance in both theory and practice, we devote a dedicated chapter to it.

Starting from "orders of infinity", we relate it to

\displaystyle F(s) = \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt

and the question of this limit's "existence", to which it is intimately bound.

We begin with what the table-compilers have to say:

thought carefully cast, weights and measures balanced, every entry known like a family treasure.

Observe the craft at work.

Table of selected Laplace transforms

 

The following table provides Laplace transforms for many common functions of a single variable.[23][24] For definitions and explanations, see the Explanatory Notes at the end of the table.

Because the Laplace transform is a linear operator,

  • The Laplace transform of a sum is the sum of Laplace transforms of each term.
\displaystyle {\mathcal {L}}\{f(t)+g(t)\}={\mathcal {L}}\{f(t)\}+{\mathcal {L}}\{g(t)\}
  • The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.
\displaystyle {\mathcal {L}}\{af(t)\}=a{\mathcal {L}}\{f(t)\}

Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly.

The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t).

The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.

Function | Time domain f(t) = \mathcal{L}^{-1}\{F(s)\} | Laplace s-domain F(s) = \mathcal{L}\{f(t)\} | Region of convergence | Reference
unit impulse | \delta(t) | 1 | all s | inspection
delayed impulse | \delta(t-\tau) | e^{-\tau s} | all s | time shift of unit impulse
unit step | u(t) | 1/s | Re(s) > 0 | integrate unit impulse
delayed unit step | u(t-\tau) | (1/s)\,e^{-\tau s} | Re(s) > 0 | time shift of unit step
ramp | t\cdot u(t) | 1/s^{2} | Re(s) > 0 | integrate unit impulse twice
nth power (for integer n) | t^{n}\cdot u(t) | n!/s^{n+1} | Re(s) > 0 (n > −1) | integrate unit step n times
qth power (for complex q) | t^{q}\cdot u(t) | \Gamma(q+1)/s^{q+1} | Re(s) > 0, Re(q) > −1 | [25][26]
nth root | t^{1/n}\cdot u(t) | (1/s^{1/n+1})\,\Gamma(1/n+1) | Re(s) > 0 | set q = 1/n above
nth power with frequency shift | t^{n}e^{-\alpha t}\cdot u(t) | n!/(s+\alpha)^{n+1} | Re(s) > −α | integrate unit step, apply frequency shift
delayed nth power with frequency shift | (t-\tau)^{n}e^{-\alpha(t-\tau)}\cdot u(t-\tau) | n!\,e^{-\tau s}/(s+\alpha)^{n+1} | Re(s) > −α | integrate unit step, apply frequency shift, apply time shift
exponential decay | e^{-\alpha t}\cdot u(t) | 1/(s+\alpha) | Re(s) > −α | frequency shift of unit step
two-sided exponential decay (only for bilateral transform) | e^{-\alpha|t|} | 2\alpha/(\alpha^{2}-s^{2}) | −α < Re(s) < α | frequency shift of unit step
exponential approach | (1-e^{-\alpha t})\cdot u(t) | \alpha/(s(s+\alpha)) | Re(s) > 0 | unit step minus exponential decay
sine | \sin(\omega t)\cdot u(t) | \omega/(s^{2}+\omega^{2}) | Re(s) > 0 | Bracewell 1978, p. 227
cosine | \cos(\omega t)\cdot u(t) | s/(s^{2}+\omega^{2}) | Re(s) > 0 | Bracewell 1978, p. 227
hyperbolic sine | \sinh(\alpha t)\cdot u(t) | \alpha/(s^{2}-\alpha^{2}) | Re(s) > |α| | Williams 1973, p. 88
hyperbolic cosine | \cosh(\alpha t)\cdot u(t) | s/(s^{2}-\alpha^{2}) | Re(s) > |α| | Williams 1973, p. 88
exponentially decaying sine wave | e^{-\alpha t}\sin(\omega t)\cdot u(t) | \omega/((s+\alpha)^{2}+\omega^{2}) | Re(s) > −α | Bracewell 1978, p. 227
exponentially decaying cosine wave | e^{-\alpha t}\cos(\omega t)\cdot u(t) | (s+\alpha)/((s+\alpha)^{2}+\omega^{2}) | Re(s) > −α | Bracewell 1978, p. 227
natural logarithm | \ln(t)\cdot u(t) | -(1/s)\left[\ln(s)+\gamma\right] | Re(s) > 0 | Williams 1973, p. 88
Bessel function of the first kind, of order n | J_{n}(\omega t)\cdot u(t) | \left({\sqrt {s^{2}+\omega ^{2}}}-s\right)^{n}/\left(\omega ^{n}{\sqrt {s^{2}+\omega ^{2}}}\right) | Re(s) > 0 (n > −1) | Williams 1973, p. 89
error function | \operatorname{erf}(t)\cdot u(t) | (1/s)\,e^{s^{2}/4}\left(1-\operatorname{erf}(s/2)\right) | Re(s) > 0 | Williams 1973, p. 89
Explanatory notes:

  • t, a real number, typically represents time,
    although it can represent any independent dimension.
  • s is the complex frequency domain parameter, and Re(s) is its real part.
  • α, β, τ, and ω are real numbers.
  • n is an integer.
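A few rows of the table can be spot-checked with a crude numerical integral. The sketch below uses the trapezoidal rule on [0, T], with T and the step count chosen ad hoc; it is a sanity check, not a serious quadrature scheme:

```python
import math

def laplace(f, s, T=60.0, n=200000):
    """Trapezoidal approximation of the Laplace integral of f over [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
assert abs(laplace(lambda t: 1.0, s) - 1 / s) < 1e-6                 # unit step -> 1/s
assert abs(laplace(lambda t: t**3, s) - 6 / s**4) < 1e-6             # t^n -> n!/s^(n+1), n = 3
assert abs(laplace(lambda t: math.exp(-t), s) - 1 / (s + 1)) < 1e-6  # e^{-t} -> 1/(s+1)
assert abs(laplace(lambda t: math.sin(3 * t), s) - 3 / (s**2 + 9)) < 1e-6  # sin(3t) -> 3/(s^2+9)
```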

 

Our thoughts thus turn to the "Taylor series":

Taylor series

Definition

The Taylor series of a real or complex-valued function f (x) that is infinitely differentiable at a real or complex number a is the power series

\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+{\frac {f'''(a)}{3!}}(x-a)^{3}+\cdots ,

which can be written in the more compact sigma notation as

\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n},

where n! denotes the factorial of n and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series.[1]
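As a tiny illustration, truncating the Maclaurin series of e^x (so a = 0 and every f^{(n)}(0) = 1) recovers e to near machine accuracy:

```python
import math

def taylor_exp(x, terms):
    """Partial sum of the Maclaurin series of e^x: sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# With 20 terms at x = 1 the truncation error is about 1/20!, i.e. ~4e-19.
assert abs(taylor_exp(1.0, 20) - math.e) < 1e-12
```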

 

And we call to mind "analytic functions":

Analytic functions

The function e^{−1/x^2} is not analytic at x = 0: its Taylor series is identically 0, although the function is not.
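The failure can be seen numerically: every Maclaurin coefficient of this function is 0, yet the function is positive away from 0. A small sketch:

```python
import math

def f(x):
    """exp(-1/x^2), extended by f(0) = 0; smooth everywhere but not analytic at 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# The Taylor series at 0 is identically zero, so it misses f entirely:
# f is nonzero for x != 0, yet flatter near 0 than any power of x.
for x in (0.25, 0.1, 0.05):
    assert 0 < f(x) < x**10
```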

 

If f (x) is given by a convergent power series in an open disc (or interval in the real line) centered at b in the complex plane, it is said to be analytic in this disc. Thus for x in this disc, f is given by a convergent power series

\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-b)^{n}.

Differentiating by x the above formula n times, then setting x = b gives:

\displaystyle {\frac {f^{(n)}(b)}{n!}}=a_{n}

and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disc centered at b if and only if its Taylor series converges to the value of the function at each point of the disc.

If f (x) is equal to its Taylor series for all x in the complex plane, it is called entire. The polynomials, exponential function ex, and the trigonometric functions sine and cosine, are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if x is far from b. That is, the Taylor series diverges at x if the distance between x and b is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.

Uses of the Taylor series for analytic functions include:

  1. The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included.
  2. Differentiation and integration of power series can be performed term by term and is hence particularly easy.
  3. An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
  4. The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
  5. Algebraic operations can be done readily on the power series representation; for instance, Euler’s formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis.
  6. Approximations using the first few terms of a Taylor series can make otherwise unsolvable problems possible for a restricted domain; this approach is often used in physics.

 

Chew over its import.

Suppose \displaystyle f(t)=\sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{n!} t^{n}

\displaystyle \because {\mathcal {L}}\{ t^{n}\cdot u(t) \} = {n! \over s^{n+1}}

\displaystyle \therefore {\mathcal {L}}\{ f(t) \cdot u(t) \}

= \sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{n!} {\mathcal {L}}\{ t^{n}\cdot u(t) \}

=\sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{s^{n+1}}
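For instance, with f(t) = e^t every derivative at 0 equals 1, and the resulting series \sum 1/s^{n+1} is geometric, summing to 1/(s − 1) — exactly the known transform of e^t u(t) for Re(s) > 1. A numerical sketch:

```python
# With f(t) = e^t, all f^(n)(0) = 1, so the termwise transform is
#   sum_{n>=0} 1/s^(n+1) = (1/s) / (1 - 1/s) = 1/(s - 1),
# matching L{e^t u(t)} = 1/(s - 1) for Re(s) > 1.
s = 3.0
series = sum(1 / s ** (n + 1) for n in range(200))
assert abs(series - 1 / (s - 1)) < 1e-12
```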

 

Let the following stand in as justification:

Relation to power series

The Laplace transform can be viewed as a continuous analogue of a power series. If a(n) is a discrete function of a positive integer n, then the power series associated to a(n) is the series

\displaystyle \sum _{n=0}^{\infty }a(n)x^{n}

where x is a real variable (see Z transform). Replacing summation over n with integration over t, a continuous version of the power series becomes

\displaystyle \int _{0}^{\infty }f(t)x^{t}\,dt

where the discrete function a(n) is replaced by the continuous one f(t).

Changing the base of the power from x to e gives

\displaystyle \int _{0}^{\infty }f(t)\left(e^{\ln {x}}\right)^{t}\,dt

For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0. Making the substitution s = ln x gives just the Laplace transform:

\displaystyle \int _{0}^{\infty }f(t)e^{-st}\,dt

In other words, the Laplace transform is a continuous analog of a power series in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by es.
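The analogy is easy to test for f(t) = 1: ∫₀^∞ x^t dt = −1/ln x, and with s = −ln x this is 1/s, i.e. the transform of the unit step. A rough sketch (left Riemann sum, with ad hoc step and cutoff):

```python
import math

# Continuous "power series" of f(t) = 1 at x = e^{-s} (so ln x = -s < 0):
#   integral over [0, inf) of x^t dt = -1/ln(x) = 1/s,  matching L{u(t)} = 1/s.
s = 2.0
x = math.exp(-s)
h, T = 1e-3, 40.0
integral = sum(x ** (k * h) for k in range(int(T / h))) * h
assert abs(integral - 1 / s) < 1e-3
```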

 

Region of convergence

If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

\displaystyle \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt

exists.

The Laplace transform converges absolutely if the integral

\displaystyle \int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not necessarily in the latter sense.

The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).[17] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b.[18] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini’s theorem and Morera’s theorem.

Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a.

In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral

\displaystyle F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.

There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.

In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.
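As a sketch of that pole criterion, take the hypothetical impulse response h(t) = e^{−t} − e^{−2t}, whose transform H(s) = 1/((s+1)(s+2)) has both poles in the left half-plane:

```python
import cmath

# Poles of H(s) = 1 / (s^2 + 3s + 2): roots of the denominator, s = -1 and s = -2.
a, b, c = 1.0, 3.0, 2.0
disc = cmath.sqrt(b * b - 4 * a * c)
poles = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# All poles have negative real part, so the ROC Re(s) > -1 contains the
# imaginary axis Re(s) = 0 and the system is BIBO stable.
assert all(p.real < 0 for p in poles)
```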

This ROC is used in knowing about the causality and stability of a system.

 

These are but annotations on the text.