STEM Notes︰Classical Mechanics︰Rotor【5】《Circuit Theory》4【Capacitance】IV‧Laplace‧C

Why does Mr. JULIUS O. SMITH III devote an entire chapter,

Sinusoids and Exponentials

to discussing the 'sinusoid'?

 

Why hoist high the banner of the 'generalized phasor'!

Generalized Complex Sinusoids

 

Because with it one can subsume many commonly used transforms!?

Importance of Generalized Complex Sinusoids

 

And so one sees the importance of the Hilbert transform ◎

Hilbert transform

In mathematics and in signal processing, the Hilbert transform is a linear operator that takes a function u(t) of a real variable and produces another function of a real variable, H(u)(t).

The Hilbert transform is important in signal processing, where it derives the analytic representation of a signal u(t). This means that the real signal u(t) is extended into the complex plane such that it satisfies the Cauchy–Riemann equations. For example, the Hilbert transform leads to the harmonic conjugate of a given function in Fourier analysis, aka harmonic analysis. Equivalently, it is an example of a singular integral operator and of a Fourier multiplier.

The Hilbert transform was originally defined for periodic functions, or equivalently for functions on the circle, in which case it is given by convolution with the Hilbert kernel. More commonly, however, the Hilbert transform refers to a convolution with the Cauchy kernel, for functions defined on the real line R (the boundary of the upper half-plane). The Hilbert transform is closely related to the Paley–Wiener theorem, another result relating holomorphic functions in the upper half-plane and Fourier transforms of functions on the real line.

The Hilbert transform is named after David Hilbert, who first introduced the operator to solve a special case of the Riemann–Hilbert problem for holomorphic functions.

The Hilbert transform, in red, of a square wave, in blue

……

Hilbert transform in signal processing

Bedrosian’s theorem

Bedrosian’s theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or

H(f_{LP}(t)f_{HP}(t))=f_{LP}(t)H(f_{HP}(t))

where f_{LP} and f_{HP} are the low- and high-pass signals respectively (Schreier & Scharf 2010, 14).

Amplitude modulated signals are modeled as the product of a bandlimited “message” waveform, u_{m}(t), and a sinusoidal “carrier”:

u(t)=u_{m}(t)\cdot \cos(\omega t+\phi )

When u_{m}(t) has no frequency content above the carrier frequency, {\frac {\omega }{2\pi }}{\text{ Hz}}, then by Bedrosian’s theorem:

H(u)(t)=u_{m}(t)\cdot \sin(\omega t+\phi ) (Bedrosian 1962)

Analytic representation

In the context of signal processing, the conjugate function interpretation of the Hilbert transform, discussed above, gives the analytic representation of a signal u(t):

u_{a}(t)=u(t)+i\cdot H(u)(t)

which is a holomorphic function in the upper half plane.

For the narrowband model (above), the analytic representation is:

{\begin{aligned}u_{a}(t)&=u_{m}(t)\cdot \cos(\omega t+\phi )+i\cdot u_{m}(t)\cdot \sin(\omega t+\phi )\\&=u_{m}(t)\cdot \left[\cos(\omega t+\phi )+i\cdot \sin(\omega t+\phi )\right]\end{aligned}}

= u_m (t) \cdot e^{i (\omega t + \phi)} (by Euler’s formula) \ (Eq.1)

This complex heterodyne operation shifts all the frequency components of u_{m}(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.

─── 《【鼎革‧革鼎】︰ RASPBIAN STRETCH 《六之 J.3‧MIR-11 》
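
To see the narrowband model in numbers, here is a small Python sketch of my own (numpy and scipy assumed; the Gaussian envelope and the 50 Hz carrier are purely illustrative choices). scipy.signal.hilbert returns the analytic signal u + i·H(u), so when the message spectrum stays below the carrier frequency its imaginary part should match u_m(t)·sin(ωt + φ), as Bedrosian's theorem predicts, and its magnitude should recover the envelope.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                  # sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
um = np.exp(-((t - 1.0) ** 2) / 0.02)        # slow "message" envelope u_m(t)
w, phi = 2 * np.pi * 50.0, 0.3               # 50 Hz carrier, far above the envelope bandwidth
u = um * np.cos(w * t + phi)

ua = hilbert(u)                              # analytic signal u_a = u + i H(u)

print(np.max(np.abs(ua.imag - um * np.sin(w * t + phi))))   # small: Bedrosian's theorem
print(np.max(np.abs(np.abs(ua) - um)))                      # |u_a| recovers the envelope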

 

Thread by thread the spider's web is knit; it guards the center, waiting for its moment.

One thread plucked sets several more astir; before this wave arrives, another surges.

 

'Spatial' translation

Shift operator

In mathematics, and in particular functional analysis, the shift operator, also known as the translation operator, is an operator that takes a function x \mapsto f(x) to its translation x \mapsto f(x + a).[1] In time series analysis, the shift operator is called the lag operator.

Shift operators are examples of linear operators, important for their simplicity and natural occurrence. The shift operator action on functions of a real variable plays an important role in harmonic analysis, for example, it appears in the definitions of almost periodic functions, positive definite functions, and convolution.[2] Shifts of sequences (functions of an integer variable) appear in diverse areas such as Hardy spaces, the theory of abelian varieties, and the theory of symbolic dynamics, for which the baker’s map is an explicit representation.

Definition

Functions of a real variable

The shift operator T^{t} (t ∈ R) takes a function f on R to its translation f_{t},

\displaystyle T^{t}f(x)=f_{t}(x)=f(x+t)~.

A practical representation of the linear operator T^{t} in terms of the plain derivative d/dx was introduced by Lagrange,

\displaystyle T^{t}=e^{t{\frac {d}{dx}}}~,

which may be interpreted operationally through its formal Taylor expansion in t; and whose action on the monomial x^{n} is evident by the binomial theorem, and hence on all series in x, and so all functions f(x) as above.[3] This, then, is a formal encoding of the Taylor expansion.
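
As a quick sanity check of Lagrange's formula (my own sketch, not part of the quoted entry; sympy assumed): for a polynomial the formal Taylor series in t terminates, so summing t^k/k! · d^k f/dx^k exactly reproduces f(x + t).

from sympy import symbols, diff, factorial, expand

x, t = symbols('x t')
f = x**3 - 2*x**2 + 5          # any polynomial works; the series terminates

# apply exp(t d/dx) as the finite Taylor sum  sum_k t^k/k! * f^(k)(x)
shifted = sum(t**k / factorial(k) * diff(f, x, k) for k in range(4))

print(expand(shifted - f.subs(x, x + t)))   # prints 0:  T^t f(x) = f(x + t)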

 

Quite natural!

But once 'time' enters the discussion, do troubles arise?

Anticausal system

An anticausal system is a hypothetical system with outputs and internal states that depend solely on future input values. Some textbooks[1] and published research literature might define an anticausal system to be one that does not depend on past input values, allowing also for the dependence on present input values.

An acausal system is a system that is not a causal system, that is one that depends on some future input values and possibly on some input values from the past or present. This is in contrast to a causal system which depends only on current and/or past input values.[2] This is often a topic of control theory and digital signal processing (DSP).

Anticausal systems are also acausal, but the converse is not always true. An acausal system that has any dependence on past input values is not anticausal.

An example of acausal signal processing is the production of an output signal that is processed from another input signal that is recorded by looking at input values both forward and backward in time from a predefined time arbitrarily denoted as the “present” time. (In reality, that “present” time input, as well as the “future” time input values, have been recorded at some time in the past, but conceptually it can be called the “present” or “future” input values in this acausal process.) This type of processing cannot be done in real time as future input values are not yet known, but is done after the input signal has been recorded and is post-processed.

Digital room correction in some sound reproduction systems relies on acausal filters.

 

Let us borrow an encyclopedia entry to tell that story:

Causal filter

In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time \displaystyle t comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function.

An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal.

Example

Each component of the causal filter output begins when its stimulus begins. The outputs of the non-causal filter begin before the stimulus begins.

The following definition is a moving (or “sliding”) average of input data \displaystyle s(x) . A constant factor of 1/2 is omitted for simplicity:

\displaystyle f(x)=\int _{x-1}^{x+1}s(\tau )\,d\tau \ =\int _{-1}^{+1}s(x+\tau )\,d\tau

where x could represent a spatial coordinate, as in image processing. But if \displaystyle x represents time \displaystyle (t) , then a moving average defined that way is non-causal (also called non-realizable), because \displaystyle f(t) depends on future inputs, such as \displaystyle s(t+1) . A realizable output is

\displaystyle f(t-1)=\int _{-2}^{0}s(t+\tau )\,d\tau =\int _{0}^{+2}s(t-\tau )\,d\tau

which is a delayed version of the non-realizable output.

Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution

\displaystyle f(t)=(h*s)(t)=\int _{-\infty }^{\infty }h(\tau )s(t-\tau )\,d\tau .

In those terms, causality requires

\displaystyle f(t)=\int _{0}^{\infty }h(\tau )s(t-\tau )\,d\tau

and general equality of these two expressions requires h(t) = 0 for all t < 0.
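
A small numerical sketch of this example (mine, numpy assumed): summing s over (t − 1, t + 1) is the non-causal filter, summing over (t − 2, t) is the realizable one, and away from the array edges the second output is exactly the first one delayed by one time unit.

import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
s = np.sin(2 * np.pi * 0.4 * t)          # an arbitrary test input s(t)
K = int(round(1.0 / dt))                 # samples per unit of time

# non-causal: integral over (t-1, t+1);  causal (realizable): integral over (t-2, t)
f_noncausal = np.array([s[max(0, n - K):n + K].sum() * dt for n in range(len(s))])
f_causal    = np.array([s[max(0, n - 2 * K):n].sum() * dt for n in range(len(s))])

n = np.arange(2 * K, len(s))
print(np.max(np.abs(f_causal[n] - f_noncausal[n - K])))   # 0: a delayed copy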

Characterization of causal filters in the frequency domain

Let h(t) be a causal filter with corresponding Fourier transform H(ω). Define the function

\displaystyle g(t)={h(t)+h^{*}(-t) \over 2}

which is non-causal. On the other hand, g(t) is Hermitian and, consequently, its Fourier transform G(ω) is real-valued. We now have the following relation

\displaystyle h(t)=2\,\Theta (t)\cdot g(t)

where Θ(t) is the Heaviside unit step function.

This means that the Fourier transforms of h(t) and g(t) are related as follows

\displaystyle H(\omega )=\left(\delta (\omega )-{i \over \pi \omega }\right)*G(\omega )=G(\omega )-i\cdot {\widehat {G}}(\omega )

where \displaystyle {\widehat {G}}(\omega ) is a Hilbert transform done in the frequency domain (rather than the time domain). The sign of \displaystyle {\widehat {G}}(\omega ) may depend on the definition of the Fourier Transform.

Taking the Hilbert transform of the above equation yields this relation between “H” and its Hilbert transform:

\displaystyle {\widehat {H}}(\omega )=iH(\omega )
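
The time-domain relation h(t) = 2Θ(t)g(t) above is easy to check numerically (a sketch of mine, numpy assumed, with h(t) = e^{−t} for t ≥ 0 as an arbitrary causal example; the half-maximum convention Θ(0) = 1/2 is used).

import numpy as np

t = np.linspace(-5.0, 5.0, 2001)          # symmetric grid that contains t = 0
h = np.where(t >= 0.0, np.exp(-t), 0.0)   # an arbitrary causal impulse response

g = (h + h[::-1]) / 2.0                   # g(t) = (h(t) + h*(-t)) / 2  (h is real here)
theta = np.where(t > 0.0, 1.0, np.where(t < 0.0, 0.0, 0.5))   # Heaviside with Θ(0) = 1/2

print(np.max(np.abs(2.0 * theta * g - h)))   # 0:  h(t) = 2 Θ(t) g(t)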

 

For the details, read this article ☆

causality

 

 

 

 

 

 

 

 

STEM Notes︰Classical Mechanics︰Rotor【5】《Circuit Theory》4【Capacitance】IV‧Laplace‧B (second half)

Green’s function

 

In mathematics, a Green’s function is the impulse response of an inhomogeneous linear differential equation defined on a domain, with specified initial conditions or boundary conditions.

Through the superposition principle for linear operator problems, the convolution of a Green’s function with an arbitrary function f (x) on that domain is the solution to the inhomogeneous differential equation for f (x). In other words, given a linear ordinary differential equation (ODE), L(solution) = source, one can first solve L(green) = δ_s, for each s, and then realize that, since the source is a sum of delta functions, the solution is a sum of Green’s functions as well, by linearity of L.

Green’s functions are named after the British mathematician George Green, who first developed the concept in the 1830s. In the modern study of linear partial differential equations, Green’s functions are studied largely from the point of view of fundamental solutions instead.

Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green’s functions take the roles of propagators.

Definition and uses

A Green’s function, G(x,s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space \displaystyle \mathbb {R} ^{n} , at a point s, is any solution of

\displaystyle LG(x,s)=\delta (s-x),   (1)

where δ is the Dirac delta function. This property of a Green’s function can be exploited to solve differential equations of the form

\displaystyle Lu(x)=f(x).   (2)

If the kernel of L is non-trivial, then the Green’s function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green’s function. Green’s functions may be categorized, by the type of boundary conditions satisfied, by a Green’s function number. Also, Green’s functions in general are distributions, not necessarily proper functions.

Green’s functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green’s function of the Hamiltonian is a key concept with important links to the concept of density of states.

As a side note, the Green’s function as used in physics is usually defined with the opposite sign instead, that is,

\displaystyle LG(x,s)=\delta (x-s).

This definition does not significantly change any of the properties of the Green’s function.

If the operator is translation invariant, that is, when L has constant coefficients with respect to x, then the Green’s function can be taken to be a convolution operator, that is,

\displaystyle G(x,s)=G(x-s).

In this case, the Green’s function is the same as the impulse response of linear time-invariant system theory.

 

What is it?

A theoretical tool, a formal solution!

Motivation

 

Loosely speaking, if such a function G can be found for the operator L, then, if we multiply the equation (1) for the Green’s function by f (s), and then integrate with respect to s, we obtain,

\displaystyle \int LG(x,s)f(s)\,ds=\int \delta (x-s)f(s)\,ds=f(x).

The right-hand side is now given by the equation (2) to be equal to L u(x), thus

\displaystyle Lu(x)=\int LG(x,s)f(s)\,ds.

Because the operator \displaystyle L=L(x) is linear and acts on the variable x alone (not on the variable of integration s), one may take the operator L outside of the integration on the right-hand side, yielding

\displaystyle Lu(x)=L\left(\int G(x,s)f(s)\,ds\right),

which suggests

\displaystyle u(x)=\int G(x,s)f(s)\,ds.   (3)

Thus, one may obtain the function u(x) through knowledge of the Green’s function in equation (1) and the source term on the right-hand side in equation (2). This process relies upon the linearity of the operator L.

In other words, the solution of equation (2), u(x), can be determined by the integration given in equation (3). Although f (x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green’s function G that satisfies equation (1). For this reason, the Green’s function is also sometimes called the fundamental solution associated to the operator L.

Not every operator L admits a Green’s function. A Green’s function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green’s function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However the method gives a theoretically exact result.

This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x−s)); and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
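
As a concrete sketch (my own example, not from the quoted entry; numpy assumed): for L = d²/dx² on [0, 1] with u(0) = u(1) = 0, a standard Green's function is G(x, s) = x(s − 1) for x ≤ s and s(x − 1) for x ≥ s. Integrating it against f(s) = sin(πs) should reproduce the exact solution u(x) = −sin(πx)/π².

import numpy as np

def G(x, s):
    # Green's function of u'' = f on [0, 1] with u(0) = u(1) = 0
    return np.where(x <= s, x * (s - 1.0), s * (x - 1.0))

s = np.linspace(0.0, 1.0, 4001)
ds = s[1] - s[0]
f = np.sin(np.pi * s)                                  # source term f(s)

x = np.linspace(0.0, 1.0, 11)
u = np.array([np.sum(G(xi, s) * f) * ds for xi in x])  # u(x) = ∫ G(x, s) f(s) ds

print(np.max(np.abs(u - (-np.sin(np.pi * x) / np.pi**2))))   # small: matches the exact solution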

 

When we meet a linear, time-invariant, causal system, its light shines forth:

Time-varying impulse response

The time-varying impulse response h(t_2, t_1) of a linear system is defined as the response of the system at time t = t_2 to a single impulse applied at time t = t_1. In other words, if the input x(t) to a linear system is

\displaystyle x(t)=\delta (t-t_{1})

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

\displaystyle y(t)|_{t=t_{2}}=h(t_{2},t_{1})

then the function h(t_2, t_1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

\displaystyle h(t_{2},t_{1})=0,t_{2}<t_{1}

The convolution integral

The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

\displaystyle y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'

If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h() is a function only of the time difference τ = t-t’ which is zero for τ<0 (namely t<t’). By redefinition of h() it is then possible to write the input-output relation equivalently in any of the ways,

\displaystyle y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function which is:

\displaystyle H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and putting s = iω yields the formula for the frequency response function:

\displaystyle H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt
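
A short numerical sketch of these relations (mine; a first-order RC low-pass with h(t) = (1/τ)e^{−t/τ} is assumed purely as an example): the convolution integral gives the step response, and integrating h(t)e^{−iωt} gives the frequency response 1/(1 + iωτ).

import numpy as np

tau, dt = 0.5, 1e-3
t = np.arange(0.0, 10.0, dt)
h = (1.0 / tau) * np.exp(-t / tau)             # impulse response of an RC low-pass

x = np.ones_like(t)                            # unit-step input
y = np.convolve(h, x)[:len(t)] * dt            # y(t) = ∫ h(τ) x(t-τ) dτ
print(np.max(np.abs(y - (1.0 - np.exp(-t / tau)))))   # ≈ 0 up to O(dt) discretization error

w = 2 * np.pi * 1.0                            # probe frequency ω (1 Hz)
H_num = np.sum(h * np.exp(-1j * w * t)) * dt   # H(iω) = ∫ h(t) e^{-iωt} dt
print(H_num, 1.0 / (1.0 + 1j * w * tau))       # close to the closed form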

 

The Laplace transform lays the reason bare:

By the CONVOLUTION property of the Laplace transform,

\displaystyle (f*g)(t)=\int _{0}^{t}f(\tau )g(t-\tau )\,d\tau

\displaystyle{\mathcal {L}}\{(f*g)(t)\} = F(s)\cdot G(s)

we obtain

Y(s) = H(s) \cdot X(s)

Suppose x(t) is the Dirac \delta (t) function, which is to say X(s) = 1.

Then Y(s) = H(s), and therefore y(t) = {\mathcal {L}}^{-1} \{ H(s) \} = h(t).
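
The same argument can be replayed in sympy (a sketch, assuming sympy is available; the system h(t) = e^{−2t}u(t) is just an illustrative choice): the convolution property turns the time-domain convolution into H(s)·X(s), and a Dirac-delta input returns h(t) itself.

from sympy import (symbols, exp, integrate, laplace_transform,
                   inverse_laplace_transform, simplify)

t, tau = symbols('t tau', positive=True)
s = symbols('s')

h = exp(-2 * t)                                   # impulse response h(t) = e^{-2t}, t >= 0
H = laplace_transform(h, t, s, noconds=True)      # H(s) = 1/(s + 2)

# unit-step input: X(s) = 1/s, so Y(s) = H(s) X(s) = 1/(s (s + 2))
y_from_s = inverse_laplace_transform(H / s, s, t)
y_from_conv = integrate(h.subs(t, tau), (tau, 0, t))   # ∫_0^t h(τ) x(t-τ) dτ with x ≡ 1
print(simplify(y_from_s - y_from_conv))                # 0: both routes agree

# Dirac-delta input: X(s) = 1, so y(t) = L^{-1}{H(s)} = h(t)
print(inverse_laplace_transform(H, s, t))              # e^{-2t} (for t > 0)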

 

One picture carries more meaning than a thousand words:

 

A road of ten thousand miles begins beneath one's feet ◎

 

 

 

 

 

 

 

 

STEM Notes︰Classical Mechanics︰Rotor【5】《Circuit Theory》4【Capacitance】IV‧Laplace‧B

Continuing from the previous post: note that a 'linear system' has not only the

Zero state response

In electrical circuit theory, the zero state response (ZSR), also known as the forced response, is the behavior or response of a circuit with an initial state of zero. The ZSR results only from the external inputs or driving functions of the circuit and not from the initial state. The ZSR is also called the forced or driven response of the circuit.

The total response of the circuit is the superposition of the ZSR and the ZIR, or Zero Input Response. The ZIR results only from the initial state of the circuit and not from any external drive. The ZIR is also called the natural response, and the resonant frequencies of the ZIR are called the natural frequencies. Given a description of a system in the s-domain, the zero-input response can be described as Y(s)=Init(s)/a(s) where a(s) and Init(s) are system-specific.

 

but also the ZIR, the Zero Input Response! So the earlier inference was not rigorous enough.

Recall that 'causality' says:

the output of the system does not depend on 'future' inputs.

This really need not have anything to do with whether or not the system is linear!!

Perhaps an example can clear up the confusion?

Zero state response and zero input response in integrator and differentiator circuits

One example of zero state response being used is in integrator and differentiator circuits. By examining a simple integrator circuit it can be demonstrated that when a function is put into a linear time-invariant (LTI) system, an output can be characterized by a superposition or sum of the zero input response and the zero state response.

A system can be represented as

\displaystyle f(t)\ \longrightarrow \ [\text{system}]\ \longrightarrow \ y(t)=y(t_{0})+\int _{t_{0}}^{t}f(\tau )d\tau

with the input \displaystyle f(t) on the left and the output \displaystyle y(t) on the right.

The output \displaystyle y(t) can be separated into a zero input and a zero state solution with

\displaystyle y(t)=\underbrace {y(t_{0})} _{Zero-input\ response}+\underbrace {\int _{t_{0}}^{t}f(\tau )d\tau } _{Zero-state\ response}.

The contributions of \displaystyle y(t_{0}) and \displaystyle f(t) to output \displaystyle y(t) are additive and each contribution \displaystyle y(t_{0}) and \displaystyle \int _{t_{0}}^{t}f(\tau )d\tau vanishes with vanishing \displaystyle y(t_{0}) and \displaystyle f(t).

This behavior constitutes a linear system. A linear system has an output that is a sum of distinct zero-input and zero-state components, each varying linearly with the initial state of the system and the input of the system respectively.

The zero input response and zero state response are independent of each other and therefore each component can be computed independently of the other.

Zero state response in integrator and differentiator circuits

The Zero State Response \displaystyle \int _{t_{0}}^{t}f(\tau )d\tau represents the system output \displaystyle y(t) when \displaystyle y(t_{0})=0.

When there is no influence from internal voltages or currents due to previously charged components

\displaystyle y(t)=\int _{t_{0}}^{t}f(\tau )d\tau .

Zero state response varies with the system input, and under zero-state conditions we could say that two independent inputs result in two independent outputs:

\displaystyle f_{1}(t)\ \longrightarrow \ [\text{system}]\ \longrightarrow \ y_{1}(t)

and

\displaystyle f_{2}(t)\ \longrightarrow \ [\text{system}]\ \longrightarrow \ y_{2}(t).

Because of linearity we can then apply the principles of superposition to achieve

\displaystyle Kf_{1}(t)+Kf_{2}(t)\ \longrightarrow \ [\text{system}]\ \longrightarrow \ Ky_{1}(t)+Ky_{2}(t).

Verifications of zero state response in integrator and differentiator circuits

To arrive at the general equation

Simple Integrator Circuit

The circuit to the right acts as a simple integrator circuit and will be used to verify the equation \displaystyle y(t)=\int _{t_{0}}^{t}f(\tau )d\tau as the zero state response of an integrator circuit.

Capacitors have the current-voltage relation \displaystyle i(t)=C{\frac {dv}{dt}} where C is the capacitance, measured in farads, of the capacitor.

By manipulating the above equation the capacitor can be shown to effectively integrate the current through it. The resulting equation also demonstrates the zero state and zero input responses to the integrator circuit.

First, by integrating both sides of the above equation

\displaystyle \int _{a}^{b}i(t)dt=\int _{a}^{b}C{\frac {dv}{dt}}dt.

Second, by integrating the right side

\displaystyle \int _{a}^{b}i(t)dt=C[v(b)-v(a)].

Third, distribute \displaystyle C and add \displaystyle Cv(a) to both sides to get

\displaystyle Cv(b)=Cv(a)+\int _{a}^{b}i(t)dt.

Fourth, divide by \displaystyle C to achieve

\displaystyle v(b)=v(a)+{\frac {1}{C}}\int _{a}^{b}i(t)dt.

By substituting \displaystyle t for \displaystyle b and \displaystyle t_{0} for \displaystyle a , and by using the dummy variable \displaystyle \tau as the variable of integration, the general equation

\displaystyle v(t)=v(t_{0})+{\frac {1}{C}}\int _{t_{0}}^{t}i(\tau )d\tau

is found.

To arrive at a circuit-specific example

The general equation can then be used to further demonstrate this verification by using the conditions of the simple integrator circuit above.

By using the capacitance of 1 farad as shown in the integrator circuit above

\displaystyle v(t)=v(t_{0})+\int _{t_{0}}^{t}i(\tau )d\tau ,

which is the equation containing the zero input and zero state response seen above.
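
A brief numerical sketch of this decomposition (mine, numpy assumed): with C = 1 F, an initial voltage v(t₀) = 2 V and a drive current i(t) = cos(t), the zero-input part is the constant v(t₀), the zero-state part is the running integral of i, and their sum matches the analytic answer v(t₀) + sin(t).

import numpy as np

dt = 1e-4
t = np.arange(0.0, 5.0, dt)
i = np.cos(t)                                   # drive current, with C = 1 F
v0 = 2.0                                        # initial capacitor voltage v(t0), t0 = 0

zir = np.full_like(t, v0)                       # zero-input response: just v(t0)
zsr = np.concatenate(([0.0], np.cumsum((i[1:] + i[:-1]) / 2.0) * dt))   # ∫_0^t i(τ) dτ
v = zir + zsr                                   # total response by superposition

print(np.max(np.abs(v - (v0 + np.sin(t)))))     # small discretization error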

To verify zero state linearity

To verify its zero-state linearity, set the voltage across the capacitor at time 0 equal to 0, or \displaystyle v(t_{0})=0 , meaning that there is no initial voltage. This eliminates the first term, forming the equation

\displaystyle v(t)=\int _{t_{0}}^{t}i(\tau )d\tau .

In accordance with the methods of linear time-invariant systems, by putting two different inputs into the integrator circuit, \displaystyle i_{1}(t) and \displaystyle i_{2}(t) , the two different outputs

\displaystyle v_{1}(t)=\int _{t_{0}}^{t}i_{1}(\tau )d\tau

and

\displaystyle v_{2}(t)=\int _{t_{0}}^{t}i_{2}(\tau )d\tau

are found respectively.

By using the superposition principle the inputs \displaystyle i_{1}(t) and \displaystyle i_{2}(t) can be combined to get a new input

\displaystyle i_{3}(t)=K_{1}i_{1}(t)+K_{2}i_{2}(t)

and a new output

\displaystyle v_{3}(t)=\int _{t_{0}}^{t}(K_{1}i_{1}(\tau )+K_{2}i_{2}(\tau ))d\tau .

By integrating the right side of

\displaystyle v_{3}(t)=K_{1}\int _{t_{0}}^{t}i_{1}(\tau )d\tau +K_{2}\int _{t_{0}}^{t}i_{2}(\tau )d\tau ,

\displaystyle v_{3}(t)=K_{1}v_{1}(t)+K_{2}v_{2}(t)

is found, which implies the system is linear at zero state, \displaystyle v(t_{0})=0.

This entire verification example could also have been done with a voltage source in place of the current source and an inductor in place of the capacitor. We would have then been solving for a current instead of a voltage.

 

If at this point one rereads the 'Linear system'

Linear system

A linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

Definition

A general deterministic system can be described by an operator,  \displaystyle H , that maps an input, \displaystyle x(t) , as a function of \displaystyle t to an output, \displaystyle y(t) , a type of black box description. Linear systems satisfy the property of superposition. Given two valid inputs

\displaystyle x_{1}(t)
\displaystyle x_{2}(t)

as well as their respective outputs

\displaystyle y_{1}(t)=H\left\{x_{1}(t)\right\}
\displaystyle y_{2}(t)=H\left\{x_{2}(t)\right\}

then a linear system must satisfy

\displaystyle \alpha y_{1}(t)+\beta y_{2}(t)=H\left\{\alpha x_{1}(t)+\beta x_{2}(t)\right\}

for any scalar values \displaystyle \alpha and \displaystyle \beta .

The system is then defined by the equation \displaystyle H(x(t))=y(t) , where \displaystyle y(t) is some arbitrary function of time, and \displaystyle x(t) is the system state. Given \displaystyle y(t) and \displaystyle H , \displaystyle x(t) can be solved for.

For example, a simple harmonic oscillator obeys the differential equation:

\displaystyle m{\frac {d^{2}x}{dt^{2}}}=-kx .

If

\displaystyle H(x(t))=m{\frac {d^{2}(x(t))}{dt^{2}}}+kx(t) ,

then \displaystyle H is a linear operator. Letting \displaystyle y(t)=0 , we can rewrite the differential equation as \displaystyle H(x(t))=y(t) , which shows that a simple harmonic oscillator is a linear system.
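
A two-line sympy check of this linearity (my sketch, sympy assumed): apply H to a scaled sum of two arbitrary functions and subtract the scaled sum of the individual responses; the difference is identically zero.

from sympy import symbols, Function, diff, simplify

t, m, k, a, b = symbols('t m k a b')
x1, x2 = Function('x1')(t), Function('x2')(t)

H = lambda x: m * diff(x, t, 2) + k * x          # H(x) = m x'' + k x

print(simplify(H(a * x1 + b * x2) - (a * H(x1) + b * H(x2))))   # 0: superposition holds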

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function \displaystyle x(t) in terms of unit impulses or frequency components.

Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).

Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.

A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.

Time-varying impulse response

The time-varying impulse response h(t_2, t_1) of a linear system is defined as the response of the system at time t = t_2 to a single impulse applied at time t = t_1. In other words, if the input x(t) to a linear system is

\displaystyle x(t)=\delta (t-t_{1})

where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is

\displaystyle y(t)|_{t=t_{2}}=h(t_{2},t_{1})

then the function h(t_2, t_1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:

\displaystyle h(t_{2},t_{1})=0,t_{2}<t_{1}

The convolution integral

The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:

\displaystyle y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'

If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h() is a function only of the time difference τ = t-t’ which is zero for τ<0 (namely t<t’). By redefinition of h() it is then possible to write the input-output relation equivalently in any of the ways,

\displaystyle y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function which is:

\displaystyle H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and putting s = iω yields the formula for the frequency response function:

\displaystyle H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt

 

article, and thinks about how the 'causal relation' is brought in, will the point not become clear of itself??

 

 

 

 

 

 

 

 

STEM Notes︰Classical Mechanics︰Rotor【5】《Circuit Theory》4【Capacitance】IV‧Laplace‧B (first half)

In the post 《吃一節 TCPIP!!中》 we noted that the word 'topology' comes from the Greek for 'the study of place', and that the subject began with Euler and the seven bridges of Königsberg. This branch of mathematics studies 'connectedness', 'continuity', and 'boundaries'. It does not classify things by their 'shape'; rather, for a given object it analyses which points are connected, which regions are continuous, and which boundaries separate inside from outside. If we look at the 'continuity' of a 'function' from the viewpoint of 'topology', then |f(x) - f(x_0)| < \varepsilon is a 'neighborhood' of f(x_0), and |x-x_0| < \delta is a 'neighborhood' of x_0. Continuity of a function at 'a point' then says that for every 'prescribed neighborhood' of the value at that point there 'corresponds' a 'real interval' (another way of saying neighborhood) which 'the function' maps into that 'prescribed neighborhood'.

However, the 'continuity' of a function at 'some point' does not 'guarantee' that the 'slope exists' at 'that point', that is, 'differentiability'. For instance,

f(x) = \begin{cases}x & \mbox{if }x \ge 0, \\ 0 &\mbox{if }x < 0\end{cases}

When x > 0 the 'slope' is f^{'}(x) = \frac{df(x)}{dx} = 1, and when x < 0 the 'slope' is 0, yet at x = 0 the 'slope' does not exist! This forces us to study how a function behaves in the 'neighborhood' of 'every point', and so mathematics entered the 'analytic' era. The term 'analytic' means that 'such a function' can be expanded as a 'Taylor series' in a 'neighborhood' of x = x_0,

T(x) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x-x_0)^{n}

Thus an 'analytic function' is 'infinitely differentiable' at 'every point' of its domain. A function that is 'infinitely differentiable' is also called a 'smooth function'. Yet 'differentiability' is not the same thing as 'continuity', so 'continuous differentiability' was defined as well: if the derivatives of a function from 'order one up to order N' all 'exist' and are 'continuous', we say it is a function of class C^{N}. For example,

f(x) = \begin{cases}x^2\sin{(\tfrac{1}{x})} & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0\end{cases}

has a 'first derivative' that exists but is not 'continuous' at x = 0, so it belongs only to class C^{0} and not to class C^{1}.

Although a 'smooth function' belongs to class C^{\infty}, it need not be an 'analytic function'; for instance,

f(x) = \begin{cases}e^{-\frac{1}{1-x^2}} & \mbox{ if } |x| < 1, \\ 0 &\mbox{ otherwise }\end{cases}

is 'smooth', yet at x = \pm 1 it cannot be expanded as a 'Taylor series', so it is not 'analytic'.

Even though people feel that 'continuity' and 'nearness', and 'derivative' and 'smoothness', are linked to one another, once the guidance of 'intuition' is lost the 'concepts' become ever more 'complicated', and so 'calculus' drifted away from ordinary people's 'understanding', as if locked inside a 'Tower of Babel' of 'analysis' and 'limits'! Not to mention that there are functions that are 'very useful' and yet 'very strange'. For example, the 'unit step' function, also called the 'Heaviside step function', can be defined as

H(x) = \begin{cases} 0, & x < 0 \\ \frac{1}{2}, & x = 0 \\ 1, & x > 0 \end{cases}

It is 'discontinuous' at x = 0, and it can be 'resolved' as the limit

H(x)=\lim \limits_{k \rightarrow \infty}\frac{1}{2}(1+\tanh kx)=\lim \limits_{k \rightarrow \infty}\frac{1}{1+\mathrm{e}^{-2kx}}

Its 'derivative' is \frac{dH(x)}{dx} = \delta(x), where this 'Dirac \delta(x) function' is defined by

\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0 \end{cases}

and satisfies

\int_{-\infty}^\infty \delta(x) \, dx = 1

Just trying to 'analyse' this is enough to make one's head swell: can the 'limit' be 'interchanged' with 'differentiation' and 'integration', and under 'what conditions'? And if we further add the summation of an 'infinite series', as in

\operatorname{III}_T(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{k=-\infty}^{\infty} \delta(t - k T) = \frac{1}{T}\operatorname{III}\left(\frac{t}{T}\right)

then it truly is a case of 'oh my goodness'!!

【Accompanying figures: a mug morphing into a torus; a Möbius strip; a trefoil knot; a 2-D bump function; the C^0 function f(x) = x for x ≥ 0, 0 for x < 0; a rational sequence with two accumulation points; Cantor's diagonal argument; the function x^2 \sin(1/x) and its derivative f'(x) = -\cos(1/x) + 2x\sin(1/x) for x ≠ 0, 0 at x = 0; the mollifier e^{-1/(1-x^2)} on |x| < 1; the non-analytic smooth function e^{-1/x} for x > 0, 0 for x ≤ 0; a step function; the cumulative distribution and density of the Dirac δ function (unit impulse) together with its approximation \delta_{a}(x) = \frac{1}{a \sqrt{\pi}} e^{- x^2 / a^2} as a \rightarrow 0; the Dirac comb.】

─── 《【SONIC Π】電路學之補充《四》無窮小算術‧中

 

The depth of 'intuitive concepts' must not be taken lightly!

For instance, consider 'linear time-invariant system theory':

Linear time-invariant theory

Linear time-invariant theory, commonly known as LTI system theory, comes from applied mathematics and has direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. It investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are commonly measured and tracked as they move through time (e.g., an acoustic waveform), but in applications like image processing and field theory, the LTI systems also have trajectories in spatial dimensions. Thus, these systems are also called linear translation-invariant to give the theory the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. A good example of LTI systems are electrical circuits that can be made up of resistors, capacitors, and inductors.[1]

Overview

The defining properties of any LTI system are linearity and time invariance.

  • Linearity means that the relationship between the input and the output of the system is a linear map: If input \displaystyle x_{1}(t) produces response \displaystyle y_{1}(t), and input \displaystyle x_{2}(t) produces response \displaystyle y_{2}(t), then the scaled and summed input \displaystyle a_{1}x_{1}(t)+a_{2}x_{2}(t) produces the scaled and summed response \displaystyle a_{1}y_{1}(t)+a_{2}y_{2}(t) where \displaystyle a_{1} and \displaystyle a_{2} are real scalars. It follows that this can be extended to an arbitrary number of terms, and so for real numbers \displaystyle c_{1},c_{2},\ldots ,c_{k} ,
Input   \displaystyle \sum _{k}c_{k}\,x_{k}(t) produces output   \displaystyle \sum _{k}c_{k}\,y_{k}(t).
In particular,
Input  \displaystyle \int _{-\infty }^{\infty }c_{\omega }\,x_{\omega }(t)\,\operatorname {d} \omega produces output  \displaystyle \int _{-\infty }^{\infty }c_{\omega }\,y_{\omega }(t)\,\operatorname {d} \omega \,   (Eq.1)
where \displaystyle c_{\omega } and \displaystyle x_{\omega } are scalars and inputs that vary over a continuum indexed by \displaystyle \omega . Thus if an input function can be represented by a continuum of input functions, combined “linearly”, as shown, then the corresponding output function can be represented by the corresponding continuum of output functions, scaled and summed in the same way.
  • Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input \displaystyle x(t) is \displaystyle y(t) , then the output due to input \displaystyle x(t-T) is \displaystyle y(t-T) . Hence, the system is time invariant because the output does not depend on the particular time the input is applied.

The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system’s impulse response. The output of the system is simply the convolution of the input to the system with the system’s impulse response. This method of analysis is often called the time domain point-of-view. The same result is true of discrete-time linear shift-invariant systems in which signals are discrete-time samples, and convolution is defined on sequences.

Relationship between the time domain and the frequency domain

Equivalently, any LTI system can be characterized in the frequency domain by the system’s transfer function, which is the Laplace transform of the system’s impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.

For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform \displaystyle Ae^{st} for some complex amplitude \displaystyle A and complex frequency \displaystyle s , the output will be some complex constant times the input, say \displaystyle Be^{st} for some new complex amplitude \displaystyle B . The ratio \displaystyle B/A is the transfer function at frequency \displaystyle s .

Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input.
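
A numerical sketch of this eigenfunction property (mine; the first-order low-pass h(t) = e^{−t} for t ≥ 0, with H(s) = 1/(s + 1), is an assumed example): feed the complex exponential e^{iωt} through the convolution and, once the transient has died out, the output divided by the input settles to the constant H(iω).

import numpy as np

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t)                                  # causal impulse response, H(s) = 1/(s + 1)

w = 2.0 * np.pi                                 # test frequency ω
x = np.exp(1j * w * t)                          # complex exponential input A e^{iωt}, A = 1
y = np.convolve(h, x)[:len(t)] * dt             # LTI output by convolution

print(y[-1] / x[-1], 1.0 / (1.0 + 1j * w))      # steady-state ratio B/A ≈ H(iω)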

LTI system theory is good at describing many important systems. Most LTI systems are considered “easy” to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear homogeneous differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.

Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals.

A linear system that is not time-invariant can be solved using other approaches such as the Green function method. The same method must be used when the initial conditions of the problem are not null.

 

How, then, should its 'causal relation' be treated?

Causality

 

A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is

\displaystyle h(t)=0\quad \forall t<0,

where \displaystyle h(t) is the impulse response. It is not possible in general to determine causality from the Laplace transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined.

 

Will supplementing it with the 'definition of a causal system' make it understood??

Causal system

In control theory, a causal system (also known as a physical or nonanticipative system) is a system where the output depends on past and current inputs but not future inputs—i.e., the output \displaystyle y(t_{0}) depends on only the input \displaystyle x(t) for values of \displaystyle t\leq t_{0} .

The idea that the output of a function at any time depends only on past and present values of input is defined by the property commonly referred to as causality. A system that has some dependence on input values from the future (in addition to possible dependence on past or current input values) is termed a non-causal or acausal system, and a system that depends solely on future input values is an anticausal system. Note that some authors have defined an anticausal system as one that depends solely on future and present input values or, more simply, as a system that does not depend on past input values.

Classically, nature or physical reality has been considered to be a causal system. Physics involving special relativity or general relativity requires more careful definitions of causality, as described elaborately in Causality (physics).

The causality of systems also plays an important role in digital signal processing, where filters are constructed so that they are causal, sometimes by altering a non-causal formulation to remove the lack of causality so that it is realizable. For more information, see causal filter.

For a causal system, the impulse response of the system must use only the present and past values of the input to determine the output. This requirement is a necessary and sufficient condition for a system to be causal, regardless of linearity. Note that similar rules apply to either discrete or continuous cases. By this definition of requiring no future input values, systems must be causal to process signals in real time.[1]

Mathematical definitions

Definition 1: A system mapping \displaystyle x to \displaystyle y is causal if and only if, for any pair of input signals \displaystyle x_{1}(t) and \displaystyle x_{2}(t) such that

\displaystyle x_{1}(t)=x_{2}(t),\quad \forall \ t<t_{0},

the corresponding outputs satisfy

\displaystyle y_{1}(t)=y_{2}(t),\quad \forall \ t<t_{0}.

Definition 2: Suppose \displaystyle h(t) is the impulse response of any system \displaystyle H described by a linear constant coefficient differential equation. The system \displaystyle H is causal if and only if

\displaystyle h(t)=0,\quad \forall \ t<0

otherwise it is non-causal.

 

And let us borrow the 'unit step function'

Heaviside step function

The Heaviside step function, using the half-maximum convention

The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a discontinuous function, named after Oliver Heaviside (1850–1925), whose value is zero for negative arguments and one for positive arguments. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.

The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Oliver Heaviside, who developed the operational calculus as a tool in the analysis of telegraphic communications, represented the function as 1.

The simplest definition of the Heaviside function is as the derivative of the ramp function:

\displaystyle H(x):={\frac {d}{dx}}\max\{x,0\}

The Heaviside function can also be defined as the integral of the Dirac delta function: H′ = δ. This is sometimes written as

\displaystyle H(x):=\int _{-\infty }^{x}{\delta (s)}\,ds

although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See constant random variable.)

In operational calculus, useful answers seldom depend on which value is used for H(0), since H is mostly used as a distribution. However, the choice may have some important consequences in functional analysis and game theory, where more general forms of continuity are considered. Some common choices can be seen below.

 

to lay out a little groundwork.

First, for a 'linear system':

【No cause, no effect】

Suppose \hat{L} denotes the 'linear operator', x(t) is the 'input', and y(t) is the 'output', y(t) = \hat{L} \ x(t) . Then

\hat{L} \ (x(t) - x(t)) = \hat{L} \ 0 = \hat{L} \ x(t) - \hat{L} \ x(t) = y(t) - y(t) = 0

【The causal relation】

Next, from the definition of the 'unit step function'

\displaystyle H(t)={\begin{cases}0,&t<0\\1,&t \ge 0\end{cases}}

we obtain

\displaystyle H(-t)={\begin{cases}1,&t<0\\0,&t \ge 0\end{cases}}

Therefore x(t) \cdot H(-(t-t_0)) can be expressed as

\displaystyle x(t)\cdot H(-(t-t_0)) = {\begin{cases}x(t),&t<t_0\\0,&t \ge t_0\end{cases}}

If x_1 (t), \ x_2 (t) are any two 'inputs' of the 'linear system', and y_1(t), \ y_2 (t) are the corresponding 'outputs', satisfying

x_1 (t) \cdot H(-(t-t_0)) = x_2 (t) \cdot H(-(t-t_0))

then for t < t_0,

\hat{L} \ \left[ x_1 (t) \cdot H(-(t-t_0)) - x_2 (t) \cdot H(-(t-t_0)) \right]

= \hat{L} \ \left[ x_1 (t) \cdot 1 - x_2 (t) \cdot 1 \right]

= y_1 (t) - y_2(t) = 0

while for t \ge t_0,

\hat{L} \ \left[ x_1 (t) \cdot H(-(t-t_0)) - x_2 (t) \cdot H(-(t-t_0)) \right]

= \hat{L} \ \left[ x_1 (t) \cdot 0 - x_2 (t) \cdot 0 \right]

=\hat{L} \ (x_1 (t) \cdot 0) - \hat{L} \ (x_2 (t) \cdot 0)

= 0 - 0 = 0

 

Now then: is this reasoning correct? ☻

 

 

 

 

 

 

 

STEM Notes︰Classical Mechanics︰Rotor【5】《Circuit Theory》4【Capacitance】IV‧Laplace‧A

A night-blooming cereus (量天尺) flower in bloom, photographed in Kona, Hawaii County

 

The heavens have Cepheid variables to serve as yardsticks; the earth grows the drought-hardy night-blooming cereus.

Humankind created symbols to order the infinite and rank its degrees:

Taste a few drops of history:

Ponder the basic scales with care:

\cdots \prec \ln(\ln(x)) \prec x^{\frac{1}{n}} \prec x \prec x^n \prec e^x \prec e^{e^x} \prec \cdots

Here n is a natural number greater than one.

Exponentials and logarithms hold the main threads together ☆

But do we know where the factorial takes its seat??

Rate of growth and approximations for large n

 Plot of the natural logarithm of the factorial

As n grows, the factorial n! increases faster than all polynomials and exponential functions (but slower than double exponential functions) in n.

Most approximations for n! are based on approximating its natural logarithm

{\displaystyle \ln n!=\sum _{x=1}^{n}\ln x.}

The graph of the function f(n) = ln n! is shown in the figure on the right. It looks approximately linear for all reasonable values of n, but this intuition is false. We get one of the simplest approximations for ln n! by bounding the sum with an integral from above and below as follows:

{\displaystyle \int _{1}^{n}\ln x\,dx\leq \sum _{x=1}^{n}\ln x\leq \int _{0}^{n}\ln(x+1)\,dx}

which gives us the estimate

{\displaystyle n\ln \left({\frac {n}{e}}\right)+1\leq \ln n!\leq (n+1)\ln \left({\frac {n+1}{e}}\right)+1.}

Hence {\displaystyle \ln n!\sim n\ln n} (see Big O notation). This result plays a key role in the analysis of the computational complexity of sorting algorithms (see comparison sort). From the bounds on ln n! deduced above we get that

e\left({\frac {n}{e}}\right)^{n}\leq n!\leq e\left({\frac {n+1}{e}}\right)^{n+1}.

It is sometimes practical to use weaker but simpler estimates. Using the above formula it is easily shown that for all n we have  (n/3)^{n}<n!, and for all n ≥ 6 we have  n!<(n/2)^{n}.

For large n we get a better estimate for the number n! using Stirling’s approximation:

  n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.

This in fact comes from an asymptotic series for the logarithm, and n factorial lies between this and the next approximation:

  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.

Another approximation for ln n! is given by Srinivasa Ramanujan (Ramanujan 1988)

{\displaystyle \ln n!\approx n\ln n-n+{\frac {\ln(n(1+4n(1+2n)))}{6}}+{\frac {\ln(\pi )}{2}}}

or

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}[1+1/(2n)+1/(8n^{2})]^{1/6}.

Both this and  {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}} give a relative error on the order of 1/n^3, but Ramanujan’s is about four times more accurate. However, if we use two correction terms (as in Ramanujan’s approximation) the relative error will be of order 1/n^5:

n!\approx {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\exp \left({{\frac {1}{12n}}-{\frac {1}{360n^{3}}}}\right)

【Note: x^x = e^{x \ln(x)} \prec e^{x^2}】
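
A quick numerical taste of these error orders (my sketch, Python standard library only): compare Stirling's formula, Stirling with the e^{1/(12n)} correction, and Ramanujan's approximation against the exact factorial.

import math

def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def stirling_corrected(n):
    return stirling(n) * math.exp(1.0 / (12 * n))

def ramanujan(n):
    # Ramanujan's approximation for ln n!, exponentiated
    return math.exp(n * math.log(n) - n
                    + math.log(n * (1 + 4 * n * (1 + 2 * n))) / 6
                    + math.log(math.pi) / 2)

for n in (5, 10, 20, 50):
    exact = float(math.factorial(n))
    for name, approx in (("Stirling", stirling(n)),
                         ("Stirling + 1/(12n)", stirling_corrected(n)),
                         ("Ramanujan", ramanujan(n))):
        print(n, name, abs(approx - exact) / exact)   # relative errors shrink as promised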

 

The principle of asymptotics is now on full display!!

One picture,

and one table,

Orders of common functions

Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first.

Each entry below gives the notation, its name, and an example.

  • O(1) (constant): Determining if a binary number is even or odd; calculating  (-1)^{n}; using a constant-size lookup table
  • O(\log \log n) (double logarithmic): Number of comparisons spent finding an item using interpolation search in a sorted array of uniformly distributed values
  • O(\log n) (logarithmic): Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap
  • {\displaystyle O((\log n)^{c})}, c > 1 (polylogarithmic): Matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine
  • O(n^{c}), 0 < c < 1 (fractional power): Searching in a k-d tree
  • O(n) (linear): Finding an item in an unsorted list or in an unsorted array; adding two n-bit integers by ripple carry
  • {\displaystyle O(n\log ^{*}n)} (n log-star n): Performing triangulation of a simple polygon using Seidel’s algorithm, or the union–find algorithm. Note that \log ^{*}(n)={\begin{cases}0,&{\text{if }}n\leq 1\\1+\log ^{*}(\log n),&{\text{if }}n>1\end{cases}}
  • {\displaystyle O(n\log n)=O(\log n!)} (linearithmic, loglinear, or quasilinear): Performing a fast Fourier transform; fastest possible comparison sort; heapsort and merge sort
  • O(n^{2}) (quadratic): Multiplying two n-digit numbers by a simple algorithm; simple sorting algorithms, such as bubble sort, selection sort and insertion sort; bound on some usually faster sorting algorithms such as quicksort, Shellsort, and tree sort
  • O(n^{c}) (polynomial or algebraic): Tree-adjoining grammar parsing; maximum matching for bipartite graphs; finding the determinant with LU decomposition
  • {\displaystyle L_{n}[\alpha ,c]=e^{(c+o(1))(\ln n)^{\alpha }(\ln \ln n)^{1-\alpha }}}, 0 < \alpha < 1 (L-notation or sub-exponential): Factoring a number using the quadratic sieve or number field sieve
  • O(c^{n}), c > 1 (exponential): Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search
  • O(n!) (factorial): Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the determinant with Laplace expansion; enumerating all partitions of a set

The statement {\displaystyle f(n)=O(n!)} is sometimes weakened to f(n)=O\left(n^{n}\right) to derive simpler formulas for asymptotic complexity. For any  k>0 and  c>0, O(n^{c}(\log n)^{k}) is a subset of  O(n^{c+\varepsilon }) for any  \varepsilon >0, so it may be considered as a polynomial of some higher order.

 

and the meaning shows itself!!??

─── 《時間序列︰生成函數‧漸近展開︰無限大等級 IV

 

Because of the theoretical and practical importance of the Laplace transform, we open a separate chapter to give it a close-up.

We begin by invoking the 'orders of infinity', for they are closely tied to the 'existence' of the limit

\displaystyle F(s) = \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt

that defines the transform.

Let us start with what the compilers of the tables have to say:

they cast their thought carefully, weigh what matters most, and know every entry like a family treasure.

Watch for the craft in it.

Table of selected Laplace transforms

 

The following table provides Laplace transforms for many common functions of a single variable.[23][24] For definitions and explanations, see the Explanatory Notes at the end of the table.

Because the Laplace transform is a linear operator,

  • The Laplace transform of a sum is the sum of Laplace transforms of each term.
\displaystyle {\mathcal {L}}\{f(t)+g(t)\}={\mathcal {L}}\{f(t)\}+{\mathcal {L}}\{g(t)\}
  • The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.
\displaystyle {\mathcal {L}}\{af(t)\}=a{\mathcal {L}}\{f(t)\}

Using this linearity, and various trigonometric, hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly.

The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t).

The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.

Each entry below lists the function name, its time-domain form f(t) = {\mathcal {L}}^{-1}\{F(s)\}, its Laplace s-domain form F(s) = {\mathcal {L}}\{f(t)\}, the region of convergence (ROC), and a reference.

  • unit impulse: f(t)=\delta (t), F(s)=1, ROC: all s (inspection)
  • delayed impulse: f(t)=\delta (t-\tau ), F(s)=e^{-\tau s} (time shift of unit impulse)
  • unit step: f(t)=u(t), F(s)={1 \over s}, ROC: Re(s) > 0 (integrate unit impulse)
  • delayed unit step: f(t)=u(t-\tau ), F(s)={\frac {1}{s}}e^{-\tau s}, ROC: Re(s) > 0 (time shift of unit step)
  • ramp: f(t)=t\cdot u(t), F(s)={\frac {1}{s^{2}}}, ROC: Re(s) > 0 (integrate unit impulse twice)
  • nth power (for integer n): f(t)=t^{n}\cdot u(t), F(s)={n! \over s^{n+1}}, ROC: Re(s) > 0, n > −1 (integrate unit step n times)
  • qth power (for complex q): f(t)=t^{q}\cdot u(t), F(s)={\Gamma (q+1) \over s^{q+1}}, ROC: Re(s) > 0, Re(q) > −1 [25][26]
  • nth root: f(t)={\sqrt[{n}]{t}}\cdot u(t), F(s)={1 \over s^{{\frac {1}{n}}+1}}\Gamma \left({\frac {1}{n}}+1\right), ROC: Re(s) > 0 (set q = 1/n above)
  • nth power with frequency shift: f(t)=t^{n}e^{-\alpha t}\cdot u(t), F(s)={\frac {n!}{(s+\alpha )^{n+1}}}, ROC: Re(s) > −α (integrate unit step, apply frequency shift)
  • delayed nth power with frequency shift: f(t)=(t-\tau )^{n}e^{-\alpha (t-\tau )}\cdot u(t-\tau ), F(s)={\frac {n!\cdot e^{-\tau s}}{(s+\alpha )^{n+1}}}, ROC: Re(s) > −α (integrate unit step, apply frequency shift, apply time shift)
  • exponential decay: f(t)=e^{-\alpha t}\cdot u(t), F(s)={1 \over s+\alpha }, ROC: Re(s) > −α (frequency shift of unit step)
  • two-sided exponential decay (only for bilateral transform): f(t)=e^{-\alpha |t|}, F(s)={2\alpha \over \alpha ^{2}-s^{2}}, ROC: −α < Re(s) < α (frequency shift of unit step)
  • exponential approach: f(t)=(1-e^{-\alpha t})\cdot u(t), F(s)={\frac {\alpha }{s(s+\alpha )}}, ROC: Re(s) > 0 (unit step minus exponential decay)
  • sine: f(t)=\sin(\omega t)\cdot u(t), F(s)={\omega \over s^{2}+\omega ^{2}}, ROC: Re(s) > 0 (Bracewell 1978, p. 227)
  • cosine: f(t)=\cos(\omega t)\cdot u(t), F(s)={s \over s^{2}+\omega ^{2}}, ROC: Re(s) > 0 (Bracewell 1978, p. 227)
  • hyperbolic sine: f(t)=\sinh(\alpha t)\cdot u(t), F(s)={\alpha \over s^{2}-\alpha ^{2}}, ROC: Re(s) > |α| (Williams 1973, p. 88)
  • hyperbolic cosine: f(t)=\cosh(\alpha t)\cdot u(t), F(s)={s \over s^{2}-\alpha ^{2}}, ROC: Re(s) > |α| (Williams 1973, p. 88)
  • exponentially decaying sine wave: f(t)=e^{-\alpha t}\sin(\omega t)\cdot u(t), F(s)={\omega \over (s+\alpha )^{2}+\omega ^{2}}, ROC: Re(s) > −α (Bracewell 1978, p. 227)
  • exponentially decaying cosine wave: f(t)=e^{-\alpha t}\cos(\omega t)\cdot u(t), F(s)={s+\alpha \over (s+\alpha )^{2}+\omega ^{2}}, ROC: Re(s) > −α (Bracewell 1978, p. 227)
  • natural logarithm: f(t)=\ln(t)\cdot u(t), F(s)=-{1 \over s}\,\left[\ln(s)+\gamma \right], ROC: Re(s) > 0 (Williams 1973, p. 88)
  • Bessel function of the first kind, of order n: f(t)=J_{n}(\omega t)\cdot u(t), F(s)={\frac {\left({\sqrt {s^{2}+\omega ^{2}}}-s\right)^{n}}{\omega ^{n}{\sqrt {s^{2}+\omega ^{2}}}}}, ROC: Re(s) > 0, n > −1 (Williams 1973, p. 89)
  • Error function: f(t)=\operatorname {erf} (t)\cdot u(t), F(s)={\frac {1}{s}}e^{(1/4)s^{2}}\left(1-\operatorname {erf} {\frac {s}{2}}\right), ROC: Re(s) > 0 (Williams 1973, p. 89)
Explanatory notes:

  • t, a real number, typically represents time,
    although it can represent any independent dimension.
  • s is the complex frequency domain parameter, and Re(s) is its real part.
  • α, β, τ, and ω are real numbers.
  • n is an integer.
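
A handful of the rows above can be reproduced mechanically (a sketch of mine, sympy assumed); sympy's laplace_transform also returns the abscissa of convergence a, meaning the transform is valid for Re(s) > a.

from sympy import symbols, laplace_transform, exp, sin, cos, S

t = symbols('t', positive=True)
s = symbols('s')
a, w = symbols('alpha omega', positive=True)

# unit step, ramp, t^3, exponential decay, sine, cosine, decaying sine
for f in (S.One, t, t**3, exp(-a * t), sin(w * t), cos(w * t), exp(-a * t) * sin(w * t)):
    F, plane, cond = laplace_transform(f, t, s)
    print(f, '->', F, '  valid for Re(s) >', plane)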

 

And so our thoughts link up with the 'Taylor series'

Taylor series

Definition

The Taylor series of a real or complex-valued function f (x) that is infinitely differentiable at a real or complex number a is the power series

\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+{\frac {f'''(a)}{3!}}(x-a)^{3}+\cdots ,

which can be written in the more compact sigma notation as

\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n},

where n! denotes the factorial of n and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series.[1]

 

and call to mind the 'analytic function'

Analytic functions

The function e^{−1/x^2} is not analytic at x = 0: the Taylor series is identically 0, although the function is not.

 

If f (x) is given by a convergent power series in an open disc (or interval in the real line) centered at b in the complex plane, it is said to be analytic in this disc. Thus for x in this disc, f is given by a convergent power series

\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-b)^{n}.

Differentiating by x the above formula n times, then setting x = b gives:

\displaystyle {\frac {f^{(n)}(b)}{n!}}=a_{n}

and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disc centered at b if and only if its Taylor series converges to the value of the function at each point of the disc.

If f (x) is equal to its Taylor series for all x in the complex plane, it is called entire. The polynomials, the exponential function e^x, and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if x is far from b. That is, the Taylor series diverges at x if the distance between x and b is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.

Uses of the Taylor series for analytic functions include:

  1. The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included.
  2. Differentiation and integration of power series can be performed term by term and is hence particularly easy.
  3. An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
  4. The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
  5. Algebraic operations can be done readily on the power series representation; for instance, Euler’s formula follows from Taylor series expansions for trigonometric and exponential functions. This result is of fundamental importance in such fields as harmonic analysis.
  6. Approximations using the first few terms of a Taylor series can make otherwise unsolvable problems possible for a restricted domain; this approach is often used in physics.

 

and chew over their import.

Suppose \displaystyle f(t)=\sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{n!} t^{n} . Then

\displaystyle \because {\mathcal {L}}\{ t^{n}\cdot u(t) \} = {n! \over s^{n+1}}

\displaystyle \therefore {\mathcal {L}}\{ f(t) \cdot u(t) \}

= \sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{n!} {\mathcal {L}}\{ t^{n}\cdot u(t) \}

=\sum _{n=0}^{\infty }  \frac {f^{(n)}(0)}{s^{n+1}}
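
For instance (my own sketch, sympy assumed), taking f(t) = e^{−at} gives f^{(n)}(0) = (−a)^n, so the series Σ (−a)^n/s^{n+1} is geometric and sums to 1/(s + a), which is exactly the tabulated transform of e^{−at}; a truncated numerical sum at a = 1, s = 3 already agrees.

from sympy import symbols, laplace_transform, exp, Rational

t = symbols('t', positive=True)
s = symbols('s')
a = symbols('a', positive=True)

direct = laplace_transform(exp(-a * t), t, s, noconds=True)    # 1/(s + a)
print(direct)

# partial sum of  Σ f^(n)(0)/s^(n+1)  for f(t) = e^{-a t}, i.e. Σ (-a)^n / s^(n+1),
# evaluated at a = 1, s = 3; it converges to 1/(3 + 1) = 1/4
partial = sum((-1)**n * Rational(1, 3)**(n + 1) for n in range(60))
print(float(partial), float(direct.subs({a: 1, s: 3})))        # both 0.25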

 

Let the following serve, for now, as

Relation to power series

The Laplace transform can be viewed as a continuous analogue of a power series. If a(n) is a discrete function of a positive integer n, then the power series associated to a(n) is the series

\displaystyle \sum _{n=0}^{\infty }a(n)x^{n}

where x is a real variable (see Z transform). Replacing summation over n with integration over t, a continuous version of the power series becomes

\displaystyle \int _{0}^{\infty }f(t)x^{t}\,dt

where the discrete function a(n) is replaced by the continuous one f(t).

Changing the base of the power from x to e gives

\displaystyle \int _{0}^{\infty }f(t)\left(e^{\ln {x}}\right)^{t}\,dt

For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0. Making the substitution s = −ln x gives just the Laplace transform:

\displaystyle \int _{0}^{\infty }f(t)e^{-st}\,dt

In other words, the Laplace transform is a continuous analog of a power series in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by es.
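
A tiny numeric illustration of this substitution (mine, numpy assumed): with f(t) ≡ 1 and x = 1/2, the integral ∫₀^∞ x^t dt equals −1/ln x, which is 1/s under s = −ln x, i.e. the Laplace transform of the unit step evaluated at that s.

import numpy as np

x = 0.5                                  # fixed base with 0 < x < 1, so ln x < 0
s = -np.log(x)                           # the substitution s = -ln x

dt = 1e-3
t = np.arange(0.0, 60.0, dt)
integral = np.sum(x**t) * dt             # ∫_0^∞ 1 · x^t dt, truncated at t = 60

print(integral, 1.0 / s)                 # both ≈ 1/ln 2 ≈ 1.4427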

 

Region of convergence

If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

\displaystyle \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt

exists.

The Laplace transform converges absolutely if the integral

\displaystyle \int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.

The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).[17] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b.[18] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini’s theorem and Morera’s theorem.

Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a.

In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral

\displaystyle F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.

There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.

In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.

This ROC is used in determining the causality and stability of a system.
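
sympy reports this abscissa of convergence alongside each transform, which makes the half-plane ROC tangible (a sketch with illustrative functions of my own choosing, sympy assumed):

from sympy import symbols, laplace_transform, exp, sin, S

t = symbols('t', positive=True)
s = symbols('s')

for f in (exp(2 * t),            # grows like e^{2t}:  ROC Re(s) > 2
          exp(-3 * t) * sin(t),  # decays:             ROC Re(s) > -3
          S.One):                # unit step:          ROC Re(s) > 0
    F, a, cond = laplace_transform(f, t, s)
    print(f, '->', F, '  converges for Re(s) >', a)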

 

annotations to the text, nothing more.