STEM Notes: Classical Mechanics: Rotor [V] — Circuit Theory, Part 4 [Capacitance] IV · Laplace · B (I)

In the piece 《吃一節 TCPIP!!中》 we mentioned that the word "topology" comes from the Greek for "study of place", and that the subject began with Euler's problem of the seven bridges of Königsberg. This branch of mathematics studies "connectedness", "continuity", and "boundary". It classifies things not by their "shape"; rather, it analyzes which points of an object are connected, which regions are continuous, and which boundaries separate inside from outside. Viewed from the standpoint of "topology", the "continuity" of a "function" reads like this: |f(x) - f(x_0)| < \varepsilon is a "neighborhood" of f(x_0), and |x-x_0| < \delta is a "neighborhood" of x_0. Continuity of a function at a "point" then says that for every "prescribed neighborhood" of the image there is a corresponding "real interval" (another name for a neighborhood) that "the function" maps into that "prescribed neighborhood".

Yet the "continuity" of a function at "some point" does not "guarantee" that a "slope exists" there, that is, its "differentiability". For example,

f(x) = \begin{cases}x & \mbox{if }x \ge 0, \\ 0 &\mbox{if }x < 0\end{cases}

When x > 0 the "slope" is f^{'}(x) = \frac{df(x)}{dx} = 1, and when x < 0 the "slope" is 0; but at x = 0 the "slope" does not exist! This forces us to study how a function behaves on a "neighborhood" of "every point", and so mathematics stepped into the era of the "analytic". A function of this kind is called "analytic" when, on a "neighborhood" of x = x_0, it can be expanded in a "Taylor series"

T(x) = \sum \limits_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x-x_0)^{n}

Thus an "analytic function" is "infinitely differentiable" at "every point" of its domain. A function that is "infinitely differentiable" is also called a "smooth function". "Differentiability", however, is not the same thing as "continuity", so "continuous differentiability" is defined as well: if the derivatives of a function from "first order up to order N" all "exist" and are "continuous", we call it a function of class C^{N}. For example,

f(x) = \begin{cases}x^2\sin{(\tfrac{1}{x})} & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0\end{cases}

has a "first derivative" everywhere, but that derivative is not "continuous" at x = 0; hence the function belongs only to class C^{0}, not to class C^{1}.
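Both phenomena above can be probed numerically. A minimal sketch (sample points and tolerances are my own choices): the one-sided difference quotients of f(x) = max(x, 0) disagree at 0, while g(x) = x^2 \sin(1/x) does have g'(0) = 0 even though g'(x) keeps swinging between about -1 and +1 arbitrarily close to 0.

```python
import math

# Example 1: f(x) = max(x, 0) is continuous at 0 but has no slope there.
def ramp(x):
    return x if x >= 0 else 0.0

h = 1e-8
right_slope = (ramp(h) - ramp(0.0)) / h      # tends to 1
left_slope = (ramp(-h) - ramp(0.0)) / (-h)   # tends to 0

# Example 2: g(x) = x^2 sin(1/x), g(0) = 0, has g'(0) = 0 ...
def g(x):
    return x * x * math.sin(1.0 / x) if x != 0.0 else 0.0

quotient = (g(1e-6) - g(0.0)) / 1e-6         # |quotient| <= 1e-6

# ... yet g'(x) = 2x sin(1/x) - cos(1/x) has no limit as x -> 0:
def gprime(x):
    return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

a = 1.0 / (2.0 * math.pi * 1e6)        # g'(a) is close to -1
b = 1.0 / (math.pi * (2e6 + 1.0))      # g'(b) is close to +1
print(right_slope, left_slope, quotient, gprime(a), gprime(b))
```

The two sample points a and b sit on consecutive "crests" of cos(1/x), which is why the derivative values refuse to settle down.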

Although a "smooth function" belongs to class C^{\infty}, it need not be an "analytic function". For example,

f(x) = \begin{cases}e^{-\frac{1}{1-x^2}} & \mbox{ if } |x| < 1, \\ 0 &\mbox{ otherwise }\end{cases}

is "smooth", yet at x = \pm 1 it cannot be expanded in a "Taylor series", so it is not "analytic".
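The one-sided analogue f(x) = e^{-1/x} for x > 0 (and 0 otherwise) shows the same failure at a single point and is easy to check numerically; this is a sketch, not part of the original text. Every derivative of f vanishes as x → 0⁺, so its Maclaurin series is identically zero, yet f itself is positive for x > 0: smooth, but not analytic at 0.

```python
import math

def f(x):
    # smooth everywhere, yet not analytic at 0
    return math.exp(-1.0 / x) if x > 0.0 else 0.0

# Approaching 0 from the right, f decays faster than any power of x,
# so f and its difference quotient are numerically zero there:
near = f(1e-3)                       # exp(-1000) underflows to 0.0
quotient = (f(1e-3) - f(0.0)) / 1e-3

# The Maclaurin series of f is therefore identically 0, but f is not:
print(near, quotient, f(1.0))        # f(1) = e^{-1}, clearly nonzero
```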

Even though people sense that "continuity" and "nearness", and likewise "derivative" and "smoothness", are connected, once the guidance of "intuition" is lost the "concepts" grow ever more "complicated". "Calculus" thus drifted beyond ordinary "understanding", as if locked inside a "Tower of Babel" of "analysis" and "limits"! And that is before mentioning certain "very useful" yet "very strange" functions. For example, the "unit step" function, also known as the "Heaviside step function", can be defined as

H(x) = \begin{cases} 0, & x < 0 \\ \frac{1}{2}, & x = 0 \\ 1, & x > 0 \end{cases}

It is "discontinuous" at x = 0, yet it can be "analyzed" as

H(x)=\lim \limits_{k \rightarrow \infty}\frac{1}{2}(1+\tanh kx)=\lim \limits_{k \rightarrow \infty}\frac{1}{1+\mathrm{e}^{-2kx}}
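This limit can be watched happening. A small sketch (the k values are arbitrary choices): the logistic curve 1/(1 + e^{-2kx}) passes through exactly 1/2 at x = 0 for every k, and sharpens toward the step as k grows.

```python
import math

def h_k(x, k):
    # smooth approximation of the Heaviside step: 1 / (1 + e^{-2kx})
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))

for k in (1, 10, 1000):
    print(k, h_k(-0.01, k), h_k(0.0, k), h_k(0.01, k))
# at x = 0 every curve passes through exactly 1/2; away from 0 the
# values approach 0 (on the left) and 1 (on the right) as k increases
```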

Its "derivative" is \frac{dH(x)}{dx} = \delta(x), and this "Dirac delta function" \delta(x) is defined as

\delta(x) = \begin{cases} +\infty, & x = 0 \\ 0, & x \ne 0 \end{cases}

and satisfies

\int_{-\infty}^\infty \delta(x) \, dx = 1
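One standard way to make these two requirements concrete is a "nascent delta" family such as the Gaussians \delta_{a}(x) = \frac{1}{a \sqrt{\pi}} e^{- x^2 / a^2}. A numerical sketch (grid and widths are my own choices): the area stays 1 for every a while the peak height blows up as a shrinks.

```python
import numpy as np

def delta_a(x, a):
    # Gaussian nascent delta: (1 / (a sqrt(pi))) * exp(-x^2 / a^2)
    return np.exp(-((x / a) ** 2)) / (a * np.sqrt(np.pi))

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
for a in (1.0, 0.1, 0.01):
    total = delta_a(x, a).sum() * dx      # Riemann sum of the integral
    peak = delta_a(np.array([0.0]), a)[0] # peak height 1 / (a sqrt(pi))
    print(a, total, peak)
# the area stays ~1 for every a, while the peak grows without bound
```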

Merely trying to "analyze" all this is enough to make one's head spin: may the "limit" be "interchanged" with the "derivative" or the "integral", and under "what conditions"? And if on top of that we add the summation of an "infinite series", as in the Dirac comb

\operatorname{III}_T(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{k=-\infty}^{\infty} \delta(t - k T) = \frac{1}{T}\operatorname{III}\left(\frac{t}{T}\right)

then it truly is a case of "oh my goodness"!!

[Figure: a coffee mug continuously deforming into a torus]

[Figure: Möbius strip]

[Figure: trefoil knot]

[Figure: a two-dimensional bump function]

[Figure: the continuous but non-differentiable f(x) = \begin{cases}x & \mbox{if }x \ge 0, \\ 0 &\mbox{if }x < 0\end{cases}]

[Figure: a rational sequence with two accumulation points]

[Figure: Cantor's diagonal argument]

[Figure: f(x) = \begin{cases}x^2\sin{(\tfrac{1}{x})} & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0\end{cases}, whose derivative f'(x) = \begin{cases}-\mathord{\cos(\tfrac{1}{x})} + 2x\sin(\tfrac{1}{x}) & \mbox{if }x \neq 0, \\ 0 &\mbox{if }x = 0\end{cases} is discontinuous at 0]

[Figure: the mollifier f(x) = \begin{cases}e^{-\frac{1}{1-x^2}} & \mbox{ if } |x| < 1, \\ 0 &\mbox{ otherwise }\end{cases}]

[Figure: the non-analytic smooth function f(x) := \begin{cases}e^{-\frac{1}{x}} & x > 0, \\ 0 & x \leq 0 \end{cases}]

[Figure: a step function]

[Figure: the Heaviside step H(x) as the CDF of the Dirac distribution]

[Figure: the Dirac δ function (unit impulse) as the "PDF" of the Dirac distribution]

[Figure: Gaussian approximations \delta_{a}(x) = \frac{1}{a \sqrt{\pi}} e^{- x^2 / a^2} of the δ function as a \rightarrow 0]

[Figure: the Dirac comb]

─── from 《【SONIC Π】電路學之補充《四》無窮小算術‧中》

 

The depth of "intuitive concepts" must not be taken lightly!

For instance, how should we deal with "linear time-invariant system theory"

Linear time-invariant theory

Linear time-invariant theory, commonly known as LTI system theory, comes from applied mathematics and has direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. It investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are commonly measured and tracked as they move through time (e.g., an acoustic waveform), but in applications like image processing and field theory, the LTI systems also have trajectories in spatial dimensions. Thus, these systems are also called linear translation-invariant to give the theory the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. Good examples of LTI systems are electrical circuits that can be made up of resistors, capacitors, and inductors.[1]

Overview

The defining properties of any LTI system are linearity and time invariance.

  • Linearity means that the relationship between the input and the output of the system is a linear map: If input \displaystyle x_{1}(t) produces response \displaystyle y_{1}(t), and input \displaystyle x_{2}(t) produces response \displaystyle y_{2}(t), then the scaled and summed input \displaystyle a_{1}x_{1}(t)+a_{2}x_{2}(t) produces the scaled and summed response \displaystyle a_{1}y_{1}(t)+a_{2}y_{2}(t) where \displaystyle a_{1} and \displaystyle a_{2} are real scalars. It follows that this can be extended to an arbitrary number of terms, and so for real numbers \displaystyle c_{1},c_{2},\ldots ,c_{k} ,
Input   \displaystyle \sum _{k}c_{k}\,x_{k}(t) produces output   \displaystyle \sum _{k}c_{k}\,y_{k}(t).
In particular,
Input  \displaystyle \int _{-\infty }^{\infty }c_{\omega }\,x_{\omega }(t)\,\operatorname {d} \omega produces output  \displaystyle \int _{-\infty }^{\infty }c_{\omega }\,y_{\omega }(t)\,\operatorname {d} \omega \,   (Eq.1)
where \displaystyle c_{\omega } and \displaystyle x_{\omega } are scalars and inputs that vary over a continuum indexed by \displaystyle \omega . Thus if an input function can be represented by a continuum of input functions, combined “linearly”, as shown, then the corresponding output function can be represented by the corresponding continuum of output functions, scaled and summed in the same way.
  • Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input \displaystyle x(t) is \displaystyle y(t) , then the output due to input \displaystyle x(t-T) is \displaystyle y(t-T) . Hence, the system is time invariant because the output does not depend on the particular time the input is applied.

The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system’s impulse response. The output of the system is simply the convolution of the input to the system with the system’s impulse response. This method of analysis is often called the time domain point-of-view. The same result is true of discrete-time linear shift-invariant systems in which signals are discrete-time samples, and convolution is defined on sequences.
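In discrete time this "the impulse response characterizes everything" statement is easy to demonstrate. A sketch (the 3-tap impulse response is an arbitrary assumption):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])        # an assumed impulse response

def lti(x):
    # the system output is just the convolution of the input with h
    return np.convolve(x, h)

# feeding in a unit impulse returns the impulse response itself
impulse = np.array([1.0, 0.0, 0.0, 0.0])
print(lti(impulse))                  # begins 0.5, 0.3, 0.2

# time invariance: delaying the input delays the output identically
x = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = lti(x)
x_delayed = np.concatenate(([0.0], x))
print(np.allclose(lti(x_delayed)[1:], y))   # True
```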

Relationship between the time domain and the frequency domain

Equivalently, any LTI system can be characterized in the frequency domain by the system’s transfer function, which is the Laplace transform of the system’s impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
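The "convolution in time equals multiplication in frequency" statement can be checked directly with the DFT, provided both signals are zero-padded to the full output length. A sketch with random data (the lengths and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # arbitrary input signal
h = rng.standard_normal(16)          # arbitrary impulse response

# time domain: direct linear convolution
y_time = np.convolve(x, h)

# frequency domain: multiply the transforms, zero-padded to full length
n = len(x) + len(h) - 1
y_freq = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.max(np.abs(y_time - y_freq)))   # agreement to rounding error
```

The zero-padding matters: without it, multiplying DFTs computes circular rather than linear convolution.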

For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform \displaystyle Ae^{st} for some complex amplitude \displaystyle A and complex frequency \displaystyle s , the output will be some complex constant times the input, say \displaystyle Be^{st} for some new complex amplitude \displaystyle B . The ratio \displaystyle B/A is the transfer function at frequency \displaystyle s .

Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input.
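A sketch of the sinusoid-in, sinusoid-out property (the sample rate, tone frequency, and moving-average filter are my own assumptions): after filtering, the dominant spectral line is still at the input frequency.

```python
import numpy as np

fs = 1000.0                          # sample rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)   # a 50 Hz sinusoid

h = np.ones(5) / 5.0                 # a 5-tap moving average (an LTI filter)
y = np.convolve(x, h, mode="same")

def peak_hz(sig):
    # frequency bin with the largest magnitude in the spectrum
    spectrum = np.abs(np.fft.rfft(sig))
    return np.fft.rfftfreq(len(sig), 1.0 / fs)[np.argmax(spectrum)]

print(peak_hz(x), peak_hz(y))        # the dominant frequency is unchanged
```

The filter changes amplitude and phase at 50 Hz, but it cannot create energy at frequencies absent from the input.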

LTI system theory is good at describing many important systems. Most LTI systems are considered “easy” to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear homogeneous differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.
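For instance, an RC low-pass filter obeys the constant-coefficient equation RC · dv/dt + v = v_in. A forward-Euler sketch (the component values are arbitrary assumptions) reproduces the familiar 1 - e^{-t/RC} step response:

```python
import numpy as np

R, C = 1.0e3, 1.0e-6        # 1 kOhm and 1 uF (assumed values), tau = 1 ms
tau = R * C
dt = tau / 1000.0
steps = 5000                # simulate five time constants

v = 0.0
vs = []
for _ in range(steps):
    v += dt * (1.0 - v) / tau       # RC * dv/dt = v_in - v, unit step v_in
    vs.append(v)

t = dt * np.arange(1, steps + 1)
exact = 1.0 - np.exp(-t / tau)
print(np.max(np.abs(np.asarray(vs) - exact)))   # small Euler error
```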

Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals.

A linear system that is not time-invariant can be solved using other approaches such as the Green function method. The same method must be used when the initial conditions of the problem are not null.

 

and its "causal relations"?

Causality

 

A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is

\displaystyle h(t)=0\quad \forall t<0,

where \displaystyle h(t) is the impulse response. It is not possible in general to determine causality from the Laplace transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined.

 

Will supplementing it with the "definition of a causal system" settle the matter??

Causal system

In control theory, a causal system (also known as a physical or nonanticipative system) is a system where the output depends on past and current inputs but not future inputs—i.e., the output \displaystyle y(t_{0}) depends on only the input \displaystyle x(t) for values of \displaystyle t\leq t_{0} .

The idea that the output of a function at any time depends only on past and present values of input is defined by the property commonly referred to as causality. A system that has some dependence on input values from the future (in addition to possible dependence on past or current input values) is termed a non-causal or acausal system, and a system that depends solely on future input values is an anticausal system. Note that some authors have defined an anticausal system as one that depends solely on future and present input values or, more simply, as a system that does not depend on past input values.

Classically, nature or physical reality has been considered to be a causal system. Physics involving special relativity or general relativity requires more careful definitions of causality, as described elaborately in Causality (physics).

The causality of systems also plays an important role in digital signal processing, where filters are constructed so that they are causal, sometimes by altering a non-causal formulation to remove the lack of causality so that it is realizable. For more information, see causal filter.

For a causal system, the impulse response of the system must use only the present and past values of the input to determine the output. This requirement is a necessary and sufficient condition for a system to be causal, regardless of linearity. Note that similar rules apply to either discrete or continuous cases. By this definition of requiring no future input values, systems must be causal to process signals in real time.[1]

Mathematical definitions

Definition 1: A system mapping \displaystyle x to \displaystyle y is causal if and only if, for any pair of input signals \displaystyle x_{1}(t) and \displaystyle x_{2}(t) such that

\displaystyle x_{1}(t)=x_{2}(t),\quad \forall \ t<t_{0},

the corresponding outputs satisfy

\displaystyle y_{1}(t)=y_{2}(t),\quad \forall \ t<t_{0}.

Definition 2: Suppose \displaystyle h(t) is the impulse response of any system \displaystyle H described by a linear constant coefficient differential equation. The system \displaystyle H is causal if and only if

\displaystyle h(t)=0,\quad \forall \ t<0

otherwise it is non-causal.
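Definition 1 can be exercised on discrete toy systems (both filters below are my own choices): a filter using only present and past samples passes the test, while one that peeks a single sample ahead fails it.

```python
import numpy as np

n0 = 5
rng = np.random.default_rng(1)
x1 = rng.standard_normal(10)
x2 = x1.copy()
x2[n0:] += 1.0                     # the two inputs agree only for n < n0

def causal(x):
    # y[n] = x[n] + 0.5 x[n-1]: present and past samples only
    y = x.copy()
    y[1:] += 0.5 * x[:-1]
    return y

def acausal(x):
    # y[n] = x[n+1]: needs a future sample
    return np.append(x[1:], 0.0)

print(np.array_equal(causal(x1)[:n0], causal(x2)[:n0]))    # True
print(np.array_equal(acausal(x1)[:n0], acausal(x2)[:n0]))  # False
```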

 

Let us also draw on the "unit step function"

Heaviside step function

[Figure: The Heaviside step function, using the half-maximum convention]

The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a discontinuous function named after Oliver Heaviside (1850–1925), whose value is zero for negative argument and one for positive argument. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one.

The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Oliver Heaviside, who developed the operational calculus as a tool in the analysis of telegraphic communications, represented the function as 1.

The simplest definition of the Heaviside function is as the derivative of the ramp function:

\displaystyle H(x):={\frac {d}{dx}}\max\{x,0\}

The Heaviside function can also be defined as the integral of the Dirac delta function: H′ = δ. This is sometimes written as

\displaystyle H(x):=\int _{-\infty }^{x}{\delta (s)}\,ds

although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See constant random variable.)
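Numerically, the running integral of a narrow nascent delta does trace out the step. A sketch (the Gaussian family and the width a = 0.01 are my own choices):

```python
import numpy as np

a = 0.01
x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
delta_approx = np.exp(-((x / a) ** 2)) / (a * np.sqrt(np.pi))

# cumulative integral of the narrow Gaussian approximates H(x)
H_approx = np.cumsum(delta_approx) * dx
print(H_approx[x < -0.1].max(), H_approx[x > 0.1].min())   # ~0 and ~1
```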

In operational calculus, useful answers seldom depend on which value is used for H(0), since H is mostly used as a distribution. However, the choice may have some important consequences in functional analysis and game theory, where more general forms of continuity are considered. Some common choices can be seen below.

 

to sketch out the groundwork.

First, consider a "linear system":

【No cause, no effect】

Suppose \hat{L} stands for a "linear operator", x(t) is the "input", and y(t) is the "output", y(t) = \hat{L} \ x(t). Then

\hat{L} \ (x(t) - x(t)) = \hat{L} \ 0 = \hat{L} \ x(t) - \hat{L} \ x(t) = y(t) - y(t) = 0

【Causality】

Next, from the definition of the "unit step function"

\displaystyle H(t)={\begin{cases}0,&t \le 0\\1,&t > 0\end{cases}}

we obtain

\displaystyle H(-t)={\begin{cases}1,&t<0\\0,&t \ge 0\end{cases}}

Hence x(t) \cdot H(-(t-t_0)) represents the input truncated at t_0:

\displaystyle x(t) \cdot H(-(t-t_0))={\begin{cases}x(t),&t<t_0\\0,&t \ge t_0\end{cases}}
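In discrete form this truncation is one line (a sketch; the signal and cut-off are arbitrary):

```python
import numpy as np

def H(t):
    # unit step that is zero at 0, as in the truncation x(t)*H(-(t - t0))
    return np.where(t > 0, 1.0, 0.0)

t = np.arange(10, dtype=float)
x = t ** 2                          # an arbitrary input signal
t0 = 4.0

truncated = x * H(-(t - t0))        # keeps x(t) for t < t0, zero afterwards
print(truncated)
```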

Now let x_1 (t), \ x_2 (t) be any two "inputs" to the "linear system", and y_1(t), \ y_2 (t) the corresponding "outputs", satisfying

x_1 (t) \cdot H(-(t-t_0)) = x_2 (t) \cdot H(-(t-t_0))

For t < t_0,

\hat{L} \ \left[ x_1 (t) \cdot H(-(t-t_0)) - x_2 (t) \cdot H(-(t-t_0)) \right]

= \hat{L} \ \left[ x_1 (t) \cdot 1 - x_2 (t) \cdot 1 \right]

= y_1 (t) - y_2(t) = 0

For t \ge t_0,

\hat{L} \ \left[ x_1 (t) \cdot H(-(t-t_0)) - x_2 (t) \cdot H(-(t-t_0)) \right]

= \hat{L} \ \left[ x_1 (t) \cdot 0 - x_2 (t) \cdot 0 \right]

=\hat{L} \ (x_1 (t) \cdot 0) - \hat{L} \ (x_2 (t) \cdot 0)

= 0 - 0 = 0

 

The question: is this line of reasoning correct? ☻
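As a numerical probe of this question (my own sketch, not part of the original argument): the step from \hat{L} \ [x_i (t) \cdot H(-(t-t_0))] to \hat{L} \ [x_i (t) \cdot 1] substitutes the value of H inside the operator, which is only safe if \hat{L} acts locally in time. A linear, time-invariant but non-causal operator, the discrete "advance" y[n] = x[n+1], breaks the conclusion for t < t_0:

```python
import numpy as np

def L(x):
    # advance operator y[n] = x[n+1]: linear, time-invariant, NOT causal
    return np.append(x[1:], 0.0)

n0 = 5
n = np.arange(10)
x1 = np.zeros(10)
x2 = np.where(n >= n0, 1.0, 0.0)    # x1 and x2 agree for all n < n0

y1, y2 = L(x1), L(x2)
# the reasoning above would give y1[n] = y2[n] for n < n0, and yet:
print(y1[n0 - 1], y2[n0 - 1])       # 0.0 versus 1.0
```

So linearity alone does not yield "equal pasts give equal outputs"; causality, h(t) = 0 for t < 0, is exactly the extra ingredient the excerpts above single out.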