STEM Notes: Classical Mechanics: Rotors (V) · Circuit Theory (IV) · Capacitors IV · Laplace · D · Prologue

The fundamental theorem of calculus describes the relationship between the two main operations of calculus: differentiation and integration.

The first part of the theorem, called the first fundamental theorem of calculus, states that the indefinite integral is the inverse operation of differentiation. The importance of this part lies in its guarantee that every continuous function has an antiderivative.

The second part, called the second fundamental theorem of calculus or the Newton–Leibniz formula, states that a definite integral can be computed using any one of the infinitely many antiderivatives. This part has many practical applications, because it greatly simplifies the computation of definite integrals.[1]

A special form of the theorem was first proved and published by James Gregory (1638–1675).[2] The general form of the theorem was then proved by Isaac Barrow.

The fundamental theorem of calculus says that the sum of the infinitesimal changes of a quantity over an interval of time equals the net change of that quantity.

We begin with an example. Suppose an object moves along a straight line, with position x(t), where t is time; x(t) means that x is a function of t. The derivative of this function equals the infinitesimal change dx in position divided by the infinitesimal change dt in time (the derivative is, of course, itself a function of time). We define velocity as the change in position divided by the change in time. In Leibniz notation:

\frac{dx}{dt} = v(t).

Rearranging, we get

dx = v(t)\,dt.

By the above reasoning, the change in x, namely \Delta x, is the sum of the infinitesimal changes dx. It also equals the sum of the infinitesimal products of the derivative and time. This infinite sum is the integral; hence, differentiating a function and then integrating it gives back the original function. We may reasonably infer that the operation works in reverse as well: integrating first and then differentiating also gives back the original function.
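To make this concrete, here is a minimal SymPy sketch (the position function x(t) = t^3 + 2t is an arbitrary illustrative choice): differentiating to get v(t) and then summing the infinitesimal changes v(\tau)\,d\tau recovers the net change \Delta x.

```python
# A symbolic check that integrating the derivative recovers the net change.
import sympy as sp

t, tau = sp.symbols('t tau')
x = t**3 + 2*t                          # position x(t), an arbitrary choice
v = sp.diff(x, t)                       # velocity v(t) = dx/dt

# Sum of the infinitesimal changes v(tau) d(tau) from 0 to t:
net_change = sp.integrate(v.subs(t, tau), (tau, 0, t))
print(sp.simplify(net_change - (x - x.subs(t, 0))))   # -> 0, i.e. Delta x
```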

History

James Gregory first published a geometric proof of a basic form of the theorem.[3][4][5] Isaac Barrow proved the general form of the theorem.[6] Barrow's student Newton brought the surrounding theory of calculus to completion, and Leibniz systematized it, introducing the calculus notation still in use today.

Formal statements

The fundamental theorem of calculus has two parts: the first concerns the derivative of an antiderivative, while the second describes the relationship between antiderivatives and definite integrals.

Part One / First fundamental theorem

Let a, b \in \mathbb{R}, and let f : [a,b] \longrightarrow \mathbb{R} be a Riemann-integrable function. Define

F : \begin{cases} [a,b] & \longrightarrow \mathbb{R} \\ x & \longmapsto \int_a^x f(t)\,dt \end{cases}

If f is continuous on [a,b], then F is differentiable on [a,b], and for every x \in [a,b],

F'(x) = f(x).

Part Two / Second fundamental theorem

Let a, b \in \mathbb{R}, a < b, and let f, F : [a,b] \longrightarrow \mathbb{R} satisfy F'(x) = f(x) for every x \in [a,b].

Then, if f is Riemann integrable (for example, if f is continuous), we have

\int_a^b f(t)\,dt = F(b) - F(a)

[Figure: the Newton–Leibniz formula (animation)]

─── Excerpted from the Wikipedia entry “Fundamental theorem of calculus”
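As a hedged check of the two parts just stated, a short SymPy sketch (taking f(t) = \cos t on [a, b] = [0, \pi/2] purely for illustration):

```python
# Part I: F(x) = int_a^x f(t) dt satisfies F'(x) = f(x).
# Part II: int_a^b f(t) dt = F(b) - F(a).
import sympy as sp

t, x = sp.symbols('t x')
f = sp.cos(t)
a, b = 0, sp.pi / 2

F = sp.integrate(f, (t, a, x))                    # F(x) = sin(x)
print(sp.simplify(sp.diff(F, x) - f.subs(t, x)))  # -> 0  (Part I)

lhs = sp.integrate(f, (t, a, b))                  # definite integral = 1
rhs = F.subs(x, b) - F.subs(x, a)                 # F(b) - F(a)
print(sp.simplify(lhs - rhs))                     # -> 0  (Part II)
```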

 

From the standpoint of functional analysis, calculus is the study of two linear operators: the differential operator \frac{\mathrm{d}}{\mathrm{d}t} and the indefinite-integral operator \int_0^t.

 

We know that differentiation \frac{d}{dt} is a linear operator:

\frac{d}{dt} \left[ a \cdot f(t) + b \cdot g(t) \right] = a \cdot \frac{d \ f(t)}{dt} + b \cdot \frac{d \ g(t)}{dt}

We also know that integration is a linear operator:

\int \limits_{0}^{t} \left[ a \cdot f(\tau) + b \cdot g(\tau) \right] \ d{\tau} = a \cdot \int\limits_{0}^{t} f(\tau) \ d{\tau} + b \cdot \int\limits_{0}^{t} g(\tau) \ d{\tau}
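Both identities can be checked symbolically; a sketch with SymPy, keeping f, g, a, b fully symbolic:

```python
# Linearity of d/dt and of the definite integral, checked symbolically.
import sympy as sp

t, tau, a, b = sp.symbols('t tau a b')
f, g = sp.Function('f'), sp.Function('g')

# d/dt [a f + b g] = a f' + b g'
lhs_d = sp.diff(a*f(t) + b*g(t), t)
rhs_d = a*sp.diff(f(t), t) + b*sp.diff(g(t), t)
print(sp.simplify(lhs_d - rhs_d))        # -> 0

# int_0^t [a f + b g] = a int_0^t f + b int_0^t g
lhs_i = sp.integrate(a*f(tau) + b*g(tau), (tau, 0, t))
rhs_i = a*sp.integrate(f(tau), (tau, 0, t)) + b*sp.integrate(g(tau), (tau, 0, t))
print(sp.simplify(lhs_i - rhs_i))        # -> 0
```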

 

Integration is even called the antiderivative:

Antiderivative

In calculus, an antiderivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration) and its opposite operation is called differentiation, which is the process of finding a derivative.

Antiderivatives are related to definite integrals through the fundamental theorem of calculus: the definite integral of a function over an interval is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.

The discrete equivalent of the notion of antiderivative is antidifference.
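A small SymPy illustration of antidifferentiation (a sketch; f(t) = \cos t is an arbitrary choice). Note that `integrate` returns one antiderivative and silently drops the constant of integration, which foreshadows the point made next:

```python
# integrate() returns a single antiderivative, with no "+ C".
import sympy as sp

t = sp.symbols('t')
f = sp.cos(t)

F = sp.integrate(f, t)                  # F(t) = sin(t)
print(sp.diff(F, t) - f)                # -> 0, so F' = f

# The dropped constant is why integration is not a two-sided inverse
# of differentiation: differentiating f + 5 and integrating back
# loses the 5.
print(sp.integrate(sp.diff(f + 5, t), t) - (f + 5))   # -> -5
```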

 

Yet the fundamental theorem of calculus shows that \int_0^t \neq \left( \frac{\mathrm{d}}{\mathrm{d}t} \right)^{-1}!

For example:

Suppose \frac{d\,y(t)}{dt} = x(t) and y(0) = y_0. Then

y(t) = y_0 + \int_0^t x(\tau)\,d\tau

Hence the formal derivation

\hat{L}\,y(t) = x(t) \ \Longrightarrow \ y(t) = \hat{L}^{-1}\,x(t)

can hardly be called rigorous, can it?

This y_0 is the so-called initial condition!!
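A sketch of this with SymPy's `dsolve` (the drive x(t) = \sin t is illustrative): the initial condition y(0) = y_0 is exactly the extra datum that the bare inverse-operator notation \hat{L}^{-1} hides.

```python
# Solving dy/dt = x(t) with the initial condition y(0) = y0.
import sympy as sp

t, y0 = sp.symbols('t y0')
y = sp.Function('y')
x = sp.sin(t)                                         # illustrative input

sol = sp.dsolve(sp.Eq(y(t).diff(t), x), y(t), ics={y(0): y0})
print(sol)    # Eq(y(t), y0 + 1 - cos(t)), i.e. y0 + int_0^t sin(tau) d(tau)
```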

Perhaps it is precisely because

y_1(t) = y_0 + \int_0^t x_1(\tau)\,d\tau

y_2(t) = y_0 + \int_0^t x_2(\tau)\,d\tau

 

\therefore y_1(t) - y_2(t) = \int_0^t \left[ x_1(\tau) - x_2(\tau) \right] d\tau

that misunderstandings arise so easily??

That is why the distinction between the ZSR and the ZIR was emphasized just now!!

Zero state response

In electrical circuit theory, the zero state response (ZSR), also known as the forced response is the behavior or response of a circuit with initial state of zero. The ZSR results only from the external inputs or driving functions of the circuit and not from the initial state. The ZSR is also called the forced or driven response of the circuit.

The total response of the circuit is the superposition of the ZSR and the ZIR, or Zero Input Response. The ZIR results only from the initial state of the circuit and not from any external drive. The ZIR is also called the natural response, and the resonant frequencies of the ZIR are called the natural frequencies. Given a description of a system in the s-domain, the zero-input response can be described as Y(s) = Init(s)/a(s), where a(s) and Init(s) are system-specific.
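A hedged sketch of the ZIR/ZSR decomposition for a first-order system y' + y = x(t) (think of an RC circuit with RC = 1 and a unit DC drive; all values are illustrative):

```python
# Total response = ZIR (zero input, nonzero initial state)
#                + ZSR (zero initial state, nonzero input).
import sympy as sp

t, y0 = sp.symbols('t y0')
y = sp.Function('y')
x = sp.Integer(1)                       # unit DC drive for t >= 0

ode = sp.Eq(y(t).diff(t) + y(t), x)

zir = sp.dsolve(sp.Eq(y(t).diff(t) + y(t), 0), y(t), ics={y(0): y0}).rhs
zsr = sp.dsolve(ode, y(t), ics={y(0): 0}).rhs
total = sp.dsolve(ode, y(t), ics={y(0): y0}).rhs

print(zir, zsr)                          # y0*exp(-t), 1 - exp(-t)
print(sp.simplify(total - (zir + zsr)))  # -> 0: superposition holds
```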

 

In fact, this 'initial value' is precisely what an initial value problem (IVP) is about:

Initial value problem

In mathematics, the field of differential equations, an initial value problem (also called the Cauchy problem by some authors) is an ordinary differential equation together with a specified value, called the initial condition, of the unknown function at a given point in the domain of the solution. In physics or other sciences, modeling a system frequently amounts to solving an initial value problem; in this context, the differential initial value is an equation that is an evolution equation specifying how, given initial conditions, the system will evolve with time.

Definition

An initial value problem is a differential equation

y'(t) = f(t, y(t)) with f \colon \Omega \subset \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n, where \Omega is an open set of \mathbb{R} \times \mathbb{R}^n,

together with a point in the domain of f

(t_0, y_0) \in \Omega,

called the initial condition.

A solution to an initial value problem is a function y that is a solution to the differential equation and satisfies

y(t_0) = y_0.

In higher dimensions, the differential equation is replaced with a family of equations y_i'(t) = f_i(t, y_1(t), y_2(t), \dotsc), and y(t) is viewed as the vector (y_1(t), \dotsc, y_n(t)). More generally, the unknown function y can take values on infinite dimensional spaces, such as Banach spaces or spaces of distributions.

Initial value problems are extended to higher orders by treating the derivatives in the same way as an independent function, e.g. y''(t) = f(t, y(t), y'(t)).

Existence and uniqueness of solutions

For a large class of initial value problems, the existence and uniqueness of a solution can be established.

The Picard–Lindelöf theorem guarantees a unique solution on some interval containing t0 if ƒ is continuous on a region containing t0 and y0 and satisfies the Lipschitz condition on the variable y. The proof of this theorem proceeds by reformulating the problem as an equivalent integral equation. The integral can be considered an operator which maps one function into another, such that the solution is a fixed point of the operator. The Banach fixed point theorem is then invoked to show that there exists a unique fixed point, which is the solution of the initial value problem.

An older proof of the Picard–Lindelöf theorem constructs a sequence of functions which converge to the solution of the integral equation, and thus, the solution of the initial value problem. Such a construction is sometimes called “Picard’s method” or “the method of successive approximations”. This version is essentially a special case of the Banach fixed point theorem.

Hiroshi Okamura obtained a necessary and sufficient condition for the solution of an initial value problem to be unique. This condition has to do with the existence of a Lyapunov function for the system.

In some situations, the function ƒ is not of class C1, or even Lipschitz, so the usual result guaranteeing the local existence of a unique solution does not apply. The Peano existence theorem however proves that even for ƒ merely continuous, solutions are guaranteed to exist locally in time; the problem is that there is no guarantee of uniqueness. The result may be found in Coddington & Levinson (1955, Theorem 1.3) or Robinson (2001, Theorem 2.6). An even more general result is the Carathéodory existence theorem, which proves existence for some discontinuous functions ƒ.
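"Picard's method" mentioned above is easy to run symbolically; a sketch for the classic IVP y' = y, y(0) = 1, whose iterates are the Taylor partial sums of e^t:

```python
# Successive approximations: y_{n+1}(t) = 1 + int_0^t y_n(tau) d(tau).
import sympy as sp

t, tau = sp.symbols('t tau')
y = sp.Integer(1)                       # y_0(t) = 1, the initial guess

for _ in range(5):
    y = 1 + sp.integrate(y.subs(t, tau), (tau, 0, t))

print(sp.expand(y))                     # 1 + t + t**2/2 + ... + t**5/120
# The iterates match the Taylor polynomial of exp(t):
print(sp.expand(sp.series(sp.exp(t), t, 0, 6).removeO() - y))   # -> 0
```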

 

The problem is an old one, arising from Newton's second law of motion

\vec{F}(t) = m \cdot \vec{a}(t) = m \cdot \frac{d^2}{dt^2}\, \vec{r}(t)

which requires fixing the 'state of motion' made up of \vec{r}(0) together with \vec{v}(0) = \frac{d}{dt} \vec{r}(0)??!!
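A sketch in SymPy (a body of unit mass under a constant force -g, purely illustrative): the second-order IVP needs both r(0) and r'(0) before `dsolve` can return a definite trajectory.

```python
# Newton's second law as an IVP: r'' = -g with r(0) = r0, r'(0) = v0.
import sympy as sp

t, g, r0, v0 = sp.symbols('t g r0 v0')
r = sp.Function('r')

ode = sp.Eq(r(t).diff(t, 2), -g)        # m = 1, constant force
sol = sp.dsolve(ode, r(t),
                ics={r(0): r0, r(t).diff(t).subs(t, 0): v0})
print(sol)                              # Eq(r(t), -g*t**2/2 + v0*t + r0)
```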

With this in mind, one should get to know the

Laplace Transform

Introduction

The definition of the Laplace Transform that we will use is called a “one-sided” (or unilateral) Laplace Transform and is given by:

F(s) = \mathcal{L}\left\{ f(t) \right\} = \int_{0^-}^{\infty} f(t)\, e^{-st}\, dt

The Laplace Transform seems, at first, to be a fairly abstract and esoteric concept.  In practice, it allows one to (more) easily solve a huge variety of problems that involve linear systems, particularly differential equations.  It allows for compact representation of systems (via the “Transfer Function”), it simplifies evaluation of the convolution integral, and it turns problems involving differential equations into algebraic problems.  It almost magically simplifies problems that otherwise are very difficult to solve.

There are a few things to note about the Laplace Transform.

  • The function f(t), which is a function of time, is transformed to a function F(s).  The function F(s) is a function of the Laplace variable, “s.”  We call this a Laplace domain function.  So the Laplace Transform takes a time domain function, f(t), and converts it into a Laplace domain function, F(s).
  • We use a lowercase letter for the function in the time domain, and an uppercase letter in the Laplace domain.
  • We say that F(s) is the Laplace Transform of f(t), F(s) = \mathcal{L}\left\{ f(t) \right\},

    or that f(t) is the inverse Laplace Transform of F(s), f(t) = \mathcal{L}^{-1}\left\{ F(s) \right\},

    or that f(t) and F(s) are a Laplace Transform pair, f(t) \Leftrightarrow F(s).
  • For our purposes the time variable, t, and time domain functions will always be real-valued.  The Laplace variable, s, and Laplace domain functions are complex.
  • Since the integral goes from 0⁻ to ∞, the time variable, t, must not occur in the Laplace domain result (if it does, you made a mistake).  Note that none of the Laplace Transforms in a standard transform table have the time variable, t, in them.
  • The lower limit on the integral is written as 0⁻.  This indicates that the lower limit of the integral is from just before t=0 (t=0⁻ indicates an infinitesimally small time before zero).  This is a fine point, but you will see that it is very important in two respects:
    • It lets us deal with the impulse function, δ(t).  If you don’t know anything about the impulse function yet, don’t worry, we’ll discuss it in some detail later.
    • It lets us consider the initial conditions of a system at t=0⁻.   These are often much simpler to find than the initial conditions at t=0⁺ (which are needed by some other techniques used to solve differential equations).
  • Since the lower limit is zero, we will only be interested in the behavior of functions (and systems) for t≥0.
  • You will sometimes see discussed the “two-sided” (or bilateral) transform (with the lower limit written as -∞) or a one-sided transform with the lower limit written as 0⁺.  We will not use these forms and will not discuss them further.
  • Since the upper limit of the integral is ∞, we must ask ourselves if the Laplace Transform, F(s), even exists.  It turns out that the transform exists as long as f(t) doesn’t grow faster than an exponential function.  This includes all functions of interest to us, so we will not concern ourselves with existence.

……

© Copyright 2005 to 2015 Erik Cheever    This page may be freely used for educational purposes.

 

And that is why we care so much, at the present instant 0, about the 'before, 0^{-}' and the 'after, 0^{+}'☆★
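To close, a minimal SymPy sketch of why the 0⁻ convention pays off (the first-order example y' + y = 0 with y(0⁻) = y_0 is illustrative): the one-sided transform turns the derivative into s Y(s) - y(0⁻), so the initial state enters as plain algebra.

```python
# The one-sided Laplace transform and the derivative rule
#   L{y'} = s*Y(s) - y(0^-).
import sympy as sp

t, s, y0 = sp.symbols('t s y0', positive=True)

# Transform of exp(-t) for t >= 0:
print(sp.laplace_transform(sp.exp(-t), t, s, noconds=True))   # 1/(s + 1)

# Solve y' + y = 0, y(0^-) = y0 in the s-domain by hand:
#   s*Y(s) - y0 + Y(s) = 0   =>   Y(s) = y0/(s + 1)
Y = y0 / (s + 1)
print(sp.inverse_laplace_transform(Y, s, t))   # y0*exp(-t)*Heaviside(t)
```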