STEM Notes: Classical Mechanics: Rotor (5), Circuit Theory (4), Capacitance IV ‧ Laplace ‧ D

Let us begin by observing the differentiation property of the Laplace transform.

Laplace Transform

Properties

First Derivative

The first derivative property of the Laplace Transform states

\mathcal{L}\left\{ \frac{df(t)}{dt} \right\} = sF(s) - f(0^{-})

To prove this we start with the definition of the Laplace Transform and integrate by parts:

\mathcal{L}\left\{ \frac{df(t)}{dt} \right\} = \int_{0^{-}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt = \left[ f(t)\, e^{-st} \right]_{0^{-}}^{\infty} + s \int_{0^{-}}^{\infty} f(t)\, e^{-st}\, dt = \left( \lim_{t \to \infty} f(t)\, e^{-st} \right) - f(0^{-})\, e^{-s \cdot 0^{-}} + s \int_{0^{-}}^{\infty} f(t)\, e^{-st}\, dt

The first term in the brackets goes to zero (as long as f(t) doesn’t grow faster than an exponential, which was a condition for existence of the transform).  In the next term, the exponential goes to one, leaving -f(0^{-}).  The last term is simply the definition of the Laplace Transform multiplied by s.  So the theorem is proved.

There are two significant things to note about this property:

  • We have taken a derivative in the time domain, and turned it into an algebraic equation in the Laplace domain.  This means that we can take differential equations in time, and turn them into algebraic equations in the Laplace domain.  We can solve the algebraic equations, and then convert back into the time domain (this is called the Inverse Laplace Transform, and is described later); a short worked sketch follows this list.
  • The initial conditions are taken at t=0^{-}.  This means that we only need to know the initial conditions before our input starts.  This is often much easier than finding them at t=0^{+}.
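
To make the first point concrete, here is a minimal sympy sketch; the example ODE y′ + 2y = u(t) and the initial value y(0⁻) = 1 are assumed purely for illustration. The derivative rule replaces d/dt by s, the resulting algebraic equation is solved for Y(s), and the inverse transform brings the answer back to the time domain.

```python
import sympy as sp

t, s = sp.symbols('t s')
Y = sp.Symbol('Y')           # the unknown transform Y(s)
y0 = 1                       # assumed initial condition y(0^-) = 1

# Example ODE: dy/dt + 2*y = u(t) (unit step).  The derivative rule
# L{dy/dt} = s*Y(s) - y(0^-) turns it into an algebraic equation in s:
alg_eq = sp.Eq(s*Y - y0 + 2*Y, 1/s)
Ysol = sp.solve(alg_eq, Y)[0]                  # equals (s + 1)/(s*(s + 2))

# Convert back to the time domain (the Inverse Laplace Transform)
y = sp.inverse_laplace_transform(Ysol, s, t)
print(sp.simplify(y))        # y(t) = (1 + exp(-2*t))/2 for t > 0
```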

Second Derivative

Similarly, for the second derivative we can show:

\mathcal{L}\left\{ \frac{d^{2}f(t)}{dt^{2}} \right\} = s^{2}F(s) - s\, f(0^{-}) - f'(0^{-})

where

f'(0^{-}) = \left. \frac{df(t)}{dt} \right|_{t = 0^{-}}

Nth order Derivative

For the nth derivative:

\mathcal{L}\left\{ \frac{d^{n}f(t)}{dt^{n}} \right\} = s^{n}F(s) - s^{n-1}f(0^{-}) - s^{n-2}f'(0^{-}) - \cdots - f^{(n-1)}(0^{-})

or

\mathcal{L}\left\{ \frac{d^{n}f(t)}{dt^{n}} \right\} = s^{n}F(s) - \sum_{i=1}^{n} s^{n-i} f^{(i-1)}(0^{-})

where

f^{(i-1)}(0^{-}) = \left. \frac{d^{\,i-1}f(t)}{dt^{\,i-1}} \right|_{t = 0^{-}}

Key Concept: The differentiation property of the Laplace Transform

We will use the differentiation property widely.  It is repeated below (for first, second and nth order derivatives):

\mathcal{L}\left\{ \frac{df(t)}{dt} \right\} = sF(s) - f(0^{-})

\mathcal{L}\left\{ \frac{d^{2}f(t)}{dt^{2}} \right\} = s^{2}F(s) - s\, f(0^{-}) - f'(0^{-})

\mathcal{L}\left\{ \frac{d^{n}f(t)}{dt^{n}} \right\} = s^{n}F(s) - \sum_{i=1}^{n} s^{n-i} f^{(i-1)}(0^{-})

The 'differentiation property' above contains the initial-condition terms

\left. \frac{d^{n}}{dt^{n}} f(t) \right|_{t = 0^{-}}

If they are all zero, we have the so-called 'zero state'; no wonder this is also called the 'initial rest' condition. What corresponds to it is the Zero State Response:

Zero state response and zero input response in integrator and differentiator circuits

One example of zero state response being used is in integrator and differentiator circuits. By examining a simple integrator circuit it can be demonstrated that when a function is put into a linear time-invariant (LTI) system, an output can be characterized by a superposition or sum of the zero input response and the zero state response.

A system can be represented as

f(t) \;\longrightarrow\; \text{(system)} \;\longrightarrow\; y(t) = y(t_{0}) + \int_{t_{0}}^{t} f(\tau)\, d\tau

with the input f(t) on the left and the output y(t) on the right.

The output y(t) can be separated into a zero input and a zero state solution with

y(t) = \underbrace{y(t_{0})}_{\text{zero-input response}} + \underbrace{\int_{t_{0}}^{t} f(\tau)\, d\tau}_{\text{zero-state response}}.

The contributions of y(t_{0}) and f(t) to output y(t) are additive and each contribution, y(t_{0}) and \int_{t_{0}}^{t} f(\tau)\, d\tau , vanishes with vanishing y(t_{0}) and f(t).

This behavior constitutes a linear system. A linear system has an output that is a sum of distinct zero-input and zero-state components, each varying linearly, with the initial state of the system and the input of the system respectively.

The zero input response and zero state response are independent of each other and therefore each component can be computed independently of the other.
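
As a minimal sympy sketch of this independence for the integrator above, the two pieces can be computed separately and then added; the initial state y(t0) = 3 and the input f(τ) = sin τ are assumed values chosen only for illustration.

```python
import sympy as sp

t, tau = sp.symbols('t tau')
t0 = 0
y_t0 = 3                                      # assumed initial state y(t0)
f = sp.sin(tau)                               # assumed input f(tau)

zero_input = y_t0                             # output with the input set to zero
zero_state = sp.integrate(f, (tau, t0, t))    # output with zero initial state
print(sp.simplify(zero_input + zero_state))   # 4 - cos(t): the total output y(t)
```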

 

Conversely, the Zero Input Response, as the name suggests, is the response generated solely by the initial-condition terms

\left. \frac{d^{n}}{dt^{n}} f(t) \right|_{t = 0^{-}}

when there is no 'external input'.

Let us now set this against the 'integration property' of the Laplace transform.

Integration

The integration theorem states that

\mathcal{L}\left\{ \int_{0^{-}}^{t} f(\tau)\, d\tau \right\} = \frac{F(s)}{s}

We prove it by starting with the definition of the Laplace Transform and integrating by parts:

\mathcal{L}\left\{ \int_{0^{-}}^{t} f(\tau)\, d\tau \right\} = \int_{0^{-}}^{\infty} \left( \int_{0^{-}}^{t} f(\tau)\, d\tau \right) e^{-st}\, dt = \left[ -\frac{e^{-st}}{s} \int_{0^{-}}^{t} f(\tau)\, d\tau \right]_{0^{-}}^{\infty} + \frac{1}{s} \int_{0^{-}}^{\infty} f(t)\, e^{-st}\, dt

The first term in the brackets goes to zero if f(t) grows more slowly than an exponential (one of our requirements for existence of the Laplace Transform), and the second term goes to zero because the limits on the integral are equal.  So the theorem is proven.

Example: Find Laplace Transform of Step and Ramp using Integration Property

Given that the Laplace Transform of the impulse δ(t) is Δ(s)=1, find the Laplace Transform of the step and ramp.

Solution:
We know that the unit step is the integral of the impulse,

\gamma(t) = \int_{0^{-}}^{t} \delta(\tau)\, d\tau

so that, by the integration property,

\Gamma(s) = \frac{\Delta(s)}{s} = \frac{1}{s}

Likewise, the unit ramp is the integral of the unit step, so

R(s) = \frac{\Gamma(s)}{s} = \frac{1}{s^{2}}
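
A quick sympy check of both results, with Heaviside(t) standing in for the unit step γ(t) and t·Heaviside(t) for the unit ramp:

```python
import sympy as sp

t, s = sp.symbols('t s')

step = sp.Heaviside(t)                                   # unit step gamma(t)
ramp = t * sp.Heaviside(t)                               # unit ramp
print(sp.laplace_transform(step, t, s, noconds=True))    # 1/s
print(sp.laplace_transform(ramp, t, s, noconds=True))    # 1/s**2
```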

 

With both properties in hand, we can appreciate the 'Fundamental Theorem of Calculus' all the more, can we not?

But then, how should we understand the 'before ( 0^{-} ), at ( 0 ), and after ( 0^{+} )' of the Laplace transform??

Initial Value Theorem

The initial value theorem states

f(0^{+}) = \lim_{s \to \infty} sF(s)

To show this, we first start with the Derivative Rule:

\mathcal{L}\left\{ \frac{df(t)}{dt} \right\} = \int_{0^{-}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt = sF(s) - f(0^{-})

We then invoke the definition of the Laplace Transform, and split the integral into two parts (on the infinitesimal interval from 0^{-} to 0^{+} the exponential equals one):

sF(s) - f(0^{-}) = \int_{0^{-}}^{0^{+}} \frac{df(t)}{dt}\, dt + \int_{0^{+}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt = \left( f(0^{+}) - f(0^{-}) \right) + \int_{0^{+}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt

We take the limit as s→∞:

\lim_{s \to \infty} \left[ sF(s) - f(0^{-}) \right] = \lim_{s \to \infty} \left[ \left( f(0^{+}) - f(0^{-}) \right) + \int_{0^{+}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt \right]
Several simplifications are in order.  In the left hand expression, we can take the second term out of the limit, since it doesn’t depend on ‘s.’  In the right hand expression, we can take the first term out of the limit for the same reason, and if we substitute infinity for ‘s’ in the second term, the exponential term goes to zero:

\lim_{s \to \infty} sF(s) - f(0^{-}) = f(0^{+}) - f(0^{-})

The two f(0^{-}) terms cancel each other, and we are left with the Initial Value Theorem:

f(0^{+}) = \lim_{s \to \infty} sF(s)

This theorem only works if F(s) is a strictly proper fraction in which the numerator polynomial is of lower order than the denominator polynomial. In other words, it will work for F(s)=1/(s+1) but not F(s)=s/(s+1).
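
A small sympy check of the theorem on an assumed example, f(t) = e^{-3t}, whose transform 1/(s+3) is strictly proper:

```python
import sympy as sp

t, s = sp.symbols('t s')

f = sp.exp(-3*t)                                   # assumed example, f(0+) = 1
F = sp.laplace_transform(f, t, s, noconds=True)    # 1/(s + 3), strictly proper
print(sp.limit(s*F, s, sp.oo))                     # 1, matching f(0+)
```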

………

Final Value Theorem

The final value theorem states that, if a final value of a function exists, then

f(\infty) = \lim_{t \to \infty} f(t) = \lim_{s \to 0} sF(s)

However, we can only use the final value if the value exists (functions like sine, cosine and the ramp function don’t have final values).  To prove the final value theorem, we start as we did for the initial value theorem, with the Laplace Transform of the derivative,

\mathcal{L}\left\{ \frac{df(t)}{dt} \right\} = \int_{0^{-}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt = sF(s) - f(0^{-})

We let s→0,

\lim_{s \to 0} \int_{0^{-}}^{\infty} \frac{df(t)}{dt}\, e^{-st}\, dt = \lim_{s \to 0} \left[ sF(s) - f(0^{-}) \right]

As s→0 the exponential term disappears from the integral.  Also, we can take f(0^{-}) out of the limit (since it doesn’t depend on s):

\lim_{s \to 0} \int_{0^{-}}^{\infty} \frac{df(t)}{dt}\, dt = \lim_{s \to 0} sF(s) - f(0^{-})

We can evaluate the integral:

\lim_{s \to 0} \left[ f(\infty) - f(0^{-}) \right] = \lim_{s \to 0} sF(s) - f(0^{-})

Neither term on the left depends on s, so we can remove the limit and simplify, resulting in the final value theorem:

f(\infty) = \lim_{s \to 0} sF(s)
Examples of functions for which this theorem can’t be used are increasing exponentials (like e^{at} where a is a positive number) that go to infinity as t increases, and oscillating functions like sine and cosine that don’t have a final value.
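
And a matching sympy check of the final value theorem, on an assumed example that does settle, f(t) = 1 − e^{-2t}:

```python
import sympy as sp

t, s = sp.symbols('t s')

f = 1 - sp.exp(-2*t)                               # assumed example, settles to 1
F = sp.laplace_transform(f, t, s, noconds=True)    # 1/s - 1/(s + 2)
print(sp.limit(s*F, s, 0))                         # 1
print(sp.limit(f, t, sp.oo))                       # 1, the same final value
```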

© Copyright 2005 to 2015 Erik Cheever    This page may be freely used for educational purposes.

 

For this, it is best to first know where the 'Dirac delta function' comes from!!

Dirac delta function

(Figure caption: the Dirac delta function as the limit, in the sense of distributions, of the sequence of zero-centered normal distributions \delta_{a}(x) = \frac{1}{\left|a\right| \sqrt{\pi}}\, \mathrm{e}^{-(x/a)^{2}} as a \to 0 .)

In mathematics, the Dirac delta function (δ function) is a generalized function or distribution introduced by the physicist Paul Dirac. It is used to model the density of an idealized point mass or point charge as a function equal to zero everywhere except for zero and whose integral over the entire real line is equal to one.[1][2][3] As there is no function that has these properties, the computations made by the theoretical physicists appeared to mathematicians as nonsense until the introduction of distributions by Laurent Schwartz to formalize and validate the computations. Thus, the Dirac delta function is a linear functional that maps every function to its value at zero.[4][5] The Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1, is a discrete analog of Dirac delta function.

In engineering and signal processing, the delta function, also known as the unit impulse symbol,[6] may be regarded through its Laplace transform, as coming from the boundary values of a complex analytic function of a complex variable. The formal rules obeyed by this function are part of the operational calculus, a standard tool kit of physics and engineering. In many applications, the Dirac delta is regarded as a kind of limit (a weak limit) of a sequence of functions having a tall spike at the origin. The approximating functions of the sequence are thus “approximate” or “nascent” delta functions.

 

And then go deeper into its 'motivation'!?

Motivation and overview

The graph of the delta function is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point.

For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a delta function. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball by only considering the total impulse of the collision without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).

To be specific, suppose that a billiard ball is at rest. At time t=0 it is struck by another ball, imparting it with a momentum P, in \text{kg m}/\text{s} . The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P\delta(t) . (The units of \delta(t) are s^{-1} .)

To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval \Delta t . That is,

\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t<\Delta t,\\0&{\text{otherwise}}.\end{cases}}

Then the momentum at any time t is found by integration:

\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t>\Delta t\\Pt/\Delta t&0<t<\Delta t\\0&{\text{otherwise.}}\end{cases}}

Now, the model situation of an instantaneous transfer of momentum requires taking the limit as \Delta t\to 0 , giving

\displaystyle p(t)={\begin{cases}P&t>0\\0&t\leq 0.\end{cases}}

Here the functions F_{\Delta t} are thought of as useful approximations to the idea of instantaneous transfer of momentum.

The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of ordinary calculus) \lim_{\Delta t\to 0}F_{\Delta t} is zero everywhere but a single point, where it is infinite. To make proper sense of the delta function, we should instead insist that the property

\int_{-\infty}^{\infty}F_{\Delta t}(t)\,dt=P,

which holds for all \Delta t>0 , should continue to hold in the limit. So, in the equation F(t)=P\delta(t)=\lim_{\Delta t\to 0}F_{\Delta t}(t) , it is understood that the limit is always taken outside the integral.

In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
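
A sympy sketch of such a weak limit, using the zero-centered Gaussians δ_a(x) from the figure caption above and an assumed test function g(x) = cos x; the smeared integral tends to g(0) as a → 0:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
a = sp.Symbol('a', positive=True)

# Nascent delta: a zero-centered Gaussian that narrows as a -> 0
delta_a = sp.exp(-(x/a)**2) / (a*sp.sqrt(sp.pi))

g = sp.cos(x)                                            # assumed test function
smeared = sp.integrate(delta_a*g, (x, -sp.oo, sp.oo))    # exp(-a**2/4)
print(sp.limit(smeared, a, 0, '+'))                      # 1, i.e. g(0)
```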

Despite its name, the delta function is not truly a function, at least not a usual one with range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.

 

Perhaps this is because it has the property of an 'even function'.

Properties

Scaling and symmetry

The delta function satisfies the following scaling property for a non-zero scalar α:[30]

\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}

and so

\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.    
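
The scaling property can be checked with a nascent (Gaussian) delta and an assumed positive scale factor α; with this particular family the value 1/α is obtained exactly for every width a, not only in the limit:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
a, alpha = sp.symbols('a alpha', positive=True)

delta_a = sp.exp(-(x/a)**2) / (a*sp.sqrt(sp.pi))      # nascent (Gaussian) delta
# integral of delta_a(alpha*x) over the real line
print(sp.integrate(delta_a.subs(x, alpha*x), (x, -sp.oo, sp.oo)))   # 1/alpha
```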

In particular, the delta function is an even distribution, in the sense that

\displaystyle \delta (-x)=\delta (x)

which is homogeneous of degree −1.

 

At the same time, a 'continuous function' f(x) also satisfies

f(x) = \frac{f(x^{-}) + f(x^{+})}{2} .

For these reasons, it seems better after all to work with

\displaystyle \Delta (t)={\begin{cases}0,&t \le - \delta t \\\frac{1}{2 \delta t},& - \delta t < t < \delta t \\0,&t \ge \delta t \end{cases}}

\lim \limits_{\delta t \to 0} \Delta (t) = \delta (t)

together with

\displaystyle h (t)={\begin{cases}0,&t \le - \delta t \\ \frac{1}{2} + \frac{1}{2 \delta t} t,& - \delta t < t < \delta t \\1,&t \ge \delta t \end{cases}}

\lim \limits_{\delta t \to 0} \frac{d \ h(t)}{dt}  = \delta (t)

 

which are comparatively clearer and easier to understand, are they not?!
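
As a quick check that these two definitions behave as intended, here is a sympy sketch with an assumed test function g(t) = e^{-t^{2}}: the pulse Δ(t) sifts out g(0) as δt → 0, and on its middle segment h(t) differentiates back to Δ(t).

```python
import sympy as sp

t = sp.Symbol('t', real=True)
dt = sp.Symbol('delta_t', positive=True)

g = sp.exp(-t**2)                                 # assumed test function

# integral of Delta(t)*g(t): the pulse has height 1/(2*delta_t) on (-delta_t, delta_t)
sifted = sp.integrate(g/(2*dt), (t, -dt, dt))     # sqrt(pi)*erf(delta_t)/(2*delta_t)
print(sp.limit(sifted, dt, 0, '+'))               # 1, i.e. g(0)

# on the middle segment, h(t) = 1/2 + t/(2*delta_t), so dh/dt = 1/(2*delta_t) = Delta(t)
print(sp.diff(sp.Rational(1, 2) + t/(2*dt), t))   # 1/(2*delta_t)
```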