STEM 隨筆︰古典力學︰轉子【五】《電路學》 五【電感】 I

易經《說卦傳》上《第十一章》講『震為雷為龍』,『震』又是『動』的意思,所謂『帝出乎震』是說『春雷』振奮大地,也許正是張衡在候風地動儀上用『龍吐丸』的原因。據聞《南瞻部洲志》載

日有踆烏,月有蟾蜍。羿請不死之藥於西王母,嫦娥竊之以奔月。蟾蜍本乃嫦娥之茶寵,食餘茶而化為仙獸,亦得仙奔月,是為月魄。初,月魄為三足,然其日食靈芝,夜食月桂,歷三千年而修成四足。後有吳剛者,為帝懲治而伐桂,斧起,樹創而瞬時癒之,歷八十一天始落一枝,月魄不勝其煩,遂銜桂枝而下界,有緣者可得其侍奉,謂之折桂也。

不知是否是因為『震位東方』而『兌在西方』為『』,所以才用『月魄蟾蜍』銜之。加之以篆文與鳥獸之形,儼然是個『神器』。


俗話說︰時過境遷。這個『時境原則』在閱讀『歷史』的文獻時非常重要。人們很容易不自覺的把『字詞』的『此時』意義強加於『彼時』,以至於發生了『誤讀』現象。

范曄文中之『中有都柱,傍行八道,施關發機』是講候風地動儀的內部構造,故為『要點』。『都』字的造字本義是『有關卡城門把守的大城市』。『關』字是指『將門閂插進左右兩栓孔,緊閉大門』。『機』字是『事物發生的樞紐』。

在此我們將范曄之文,與『測知地震』有關的整理如下︰

一、『精銅鑄成,員徑八尺』,漢代一尺約現今 23.09 公分,可知直徑八尺的『酒尊』,真是『又大又重』。

二、『外有八龍,首銜銅丸,下有蟾蜍,張口承之』,平日沒地震時『銅丸』銜於『龍首』。

三、『其牙機巧制,皆隱在尊中,覆蓋周密無際。』,『密封的』很好,『牙巧』之『機制』隱於其中。

四、『如有地動,尊則振龍機發吐丸,而蟾蜍銜之。』,遇到地震時,『尊則振』同時『龍機發吐丸』,而且『蟾蜍銜之』。也就是說,地震會讓『尊』振動,且觸發了『牙機巧制』。

─── 《【SONIC Π】聲波之傳播原理︰拾遺篇《一》候風地動儀‧上》

 

若無那事,又無那文

南朝劉宋時范曄所著《後漢書‧卷五十九‧張衡列傳第四十九》裡之『一百九十六』個字︰

陽嘉元年,復造候風地動儀。以精銅鑄成,員徑八尺,合蓋隆起,形似酒尊,飾以篆文山龜鳥獸之形。中有都柱,傍行八道,施關發機。外有八龍,首銜銅丸,下有蟾蜍,張口承之。其牙機巧制,皆隱在尊中,覆蓋周密無際。如有地動,尊則振龍機發吐丸,而蟾蜍銜之。振聲激揚,伺者因此覺知。雖一龍發機,而七首不動,尋其方面,乃知震之所在。驗之以事,合契若神。自書典所記,未之有也。嘗一龍機發而地不覺動,京師學者咸怪其無征,後數日驛至,果地震隴西,於是皆服其妙。自此以後,乃令史官記地動所從方起。

 

空有註釋,還原恐無望矣。

也許剛過『血月‧火星衝』之時,恰好談到

電感

電感(Inductance)是閉合迴路的一種屬性,即當通過閉合迴路的電流改變時,會出現電動勢來抵抗電流的改變。如果這種現象出現在自身迴路中,那麼這種電感稱為自感(self-inductance),是閉合迴路自己本身的屬性。假設一個閉合迴路的電流改變,由於感應作用在另外一個閉合迴路中產生電動勢,這種電感稱為互感(mutual inductance)。電感以方程式表達為

\displaystyle {\mathcal {E}}=-L{\mathrm {d} i \over \mathrm {d} t} 

其中, \displaystyle {\mathcal {E}} 是電動勢, \displaystyle L 是電感, \displaystyle i 是電流, \displaystyle t 是時間。
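代一個假設的數值例子,便於體會定義式與單位(數值為筆者虛擬):設 L = 10 mH,電流以每秒 100 安培的速率增加,則

\displaystyle {\mathcal {E}}=-L{\mathrm {d} i \over \mathrm {d} t}=-(10\ \mathrm{mH})\times(100\ \mathrm{A/s})=-1\ \mathrm{V}

感應電動勢為 −1 V,其方向正是抵抗電流之增加。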

術語「電感」是1886年由奧利弗·赫維賽德命名[1]。通常自感是以字母「L」標記,這可能是為了紀念物理學家海因里希·冷次的貢獻[2][3]。互感是以字母「M」標記,是其英文(Mutual Inductance)的第一個字母。採用國際單位制,電感的單位是亨利(henry),標記為「H」,是為紀念美國科學家約瑟·亨利而命名。1 H = 1 Wb/A。

電感器是專門用在電路裏實現電感的電路元件。螺線管是一種簡單的電感器,指的是多重捲繞的導線(稱為「線圈」),內部可以是空心的,或者有一個金屬芯。螺線管的電感是自感。變壓器是兩個耦合的線圈形成的電感器,由於具有互感屬性,是一種基本磁路元件。在電路圖中,電感的電路符號多半以 L 開頭,例如 L01、L02、L100、L201 等。

概述

應用馬克士威方程組,可以計算出電感。很多重要案例,經過簡化程序後,可以得到解析解。當涉及高頻率電流和伴隨的集膚效應時,經過解析拉普拉斯方程式,可以得到面電流密度與磁場。假設導體是纖細導線,自感仍舊跟導線半徑、內部電流分布有關。假若導線半徑遠小於其它長度尺寸,則這電流分布可以近似為常數(在導線的表面或體積內部)。

自感

流動於閉合迴路的含時電流所產生的含時磁通量,會促使閉合迴路本身出現感應電動勢。

如上圖所示,流動於閉合迴路的含時電流 \displaystyle i(t) 所產生的含時磁通量 \displaystyle \Phi (i) ,根據法拉第電磁感應定律,會促使閉合迴路本身出現感應電動勢 \displaystyle {\mathcal {E}} :

\displaystyle {\mathcal {E}}=-N{{\mathrm {d} \Phi } \over \mathrm {d} t}=-N{{\mathrm {d} \Phi } \over \mathrm {d} i}\ {\mathrm {d} i \over \mathrm {d} t} 

其中, \displaystyle N 是閉合迴路的捲繞匝數。

設定電感 \displaystyle L 為

\displaystyle L=N{\frac {\mathrm {d} \Phi }{\mathrm {d} i}} 

則感應電動勢與含時電流之間的關係為

\displaystyle {\mathcal {E}}=-L{\mathrm {d} i \over \mathrm {d} t} 

由此可知,一個典型的電感元件中,在其幾何與物理特性都固定的狀況下,產生的電壓 \displaystyle v 為:

\displaystyle v=L{{\mathrm {d} i} \over \mathrm {d} t} 

電感的作用是抵抗電流的變化,但是這種作用與電阻阻礙電流的流動是有區別的。電阻阻礙電流流動的特徵是消耗電能,而電感則純粹是抵抗電流的變化。當電流增加時電感抵抗電流的增加;當電流減小時電感抵抗電流的減小。電感抵抗電流變化的過程並不消耗電能,當電流增加時它會將能量以磁場的形式暫時儲存起來,等到電流減小時它又會將磁場的能量釋放出來,其效應就是抵抗電流的變化。
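這段「以磁場形式暫存能量」的說法,可補一個標準的推導步驟(筆者補記):電感兩端瞬時功率為 p = v i = L i (di/dt),自零電流積分到 I,即得儲存的磁場能量

\displaystyle W=\int_{0}^{I}L\,i\,\mathrm{d}i={\frac {1}{2}}LI^{2}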

互感

圖上方,閉合迴路 1 的含時電流 \displaystyle i_{1}(t) 所產生的含時磁通量,會促使閉合迴路 2 出現感應電動勢 \displaystyle {\mathcal {E}}_{2} 。圖下方,閉合迴路 2 的含時電流 \displaystyle i_{2}(t) 所產生的含時磁通量,會促使閉合迴路1出現感應電動勢 \displaystyle {\mathcal {E}}_{1} 。

如右圖所示,流動於閉合迴路 1 的含時電流 \displaystyle i_{1}(t) ,會產生磁通量 \displaystyle \Phi _{2}(t) 穿過閉合迴路 2 ,促使閉合迴路 2 出現感應電動勢 \displaystyle {\mathcal {E}}_{2} 。穿過閉合迴路2的磁通量和流動於閉合迴路1的含時電流,有線性關係,稱為互感 \displaystyle M_{21} ,以方程式表達為

\displaystyle \Phi _{2}=M_{21}i_{1} 

計算互感,可使用紐曼公式Neumann formula):

  • \displaystyle M_{21}={\frac {\mu _{0}}{4\pi }}\oint _{\mathbb {C} _{1}}\oint _{\mathbb {C} _{2}}{\frac {\mathrm {d} {\boldsymbol {\ell }}_{1}\cdot \mathrm {d} {\boldsymbol {\ell }}_{2}}{|\mathbf {X} _{2}-\mathbf {X} _{1}|}}

其中, \displaystyle \mu _{0} 是磁常數\displaystyle \mathbb {C} _{1} 是閉合迴路 1 , \displaystyle \mathbb {C} _{2} 是閉合迴路 2 , \displaystyle \mathbf {X} _{1} 是微小線元素 \displaystyle \mathrm {d} {\boldsymbol {\ell }}_{1} 的位置, \displaystyle \mathbf {X} _{2} 是微小線元素 \displaystyle \mathrm {d} {\boldsymbol {\ell }}_{2} 的位置。

由此公式可見,兩個線圈之間互感相同: \displaystyle M_{12}=M_{21} ,且互感是由兩個線圈的形狀、尺寸和相對位置而確定。
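紐曼公式本身就是一個可以直接數值驗算的雙重線積分。下面以兩個共軸圓線圈為例,把每個迴路離散成 n 段後,對雙重積分作黎曼和(Python 示意草稿;幾何參數與函式名稱皆為筆者假設):

```python
# 數值驗算紐曼公式的草稿:兩個共軸圓線圈的互感 M21
import numpy as np

MU0 = 4e-7 * np.pi  # 磁常數 (H/m)

def loop_points(radius, z, n):
    """回傳圓形迴路上 n 個取樣點,及對應的線元素向量 dl。"""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.stack([radius * np.cos(theta),
                    radius * np.sin(theta),
                    np.full(n, z)], axis=1)
    # 線元素:相鄰取樣點之差(封閉迴路,最後一點接回第一點)
    dl = np.roll(pts, -1, axis=0) - pts
    return pts, dl

def mutual_inductance(r1, r2, dz, n=400):
    """以離散化的雙重線積分估計 M21 = (mu0/4pi) ∮∮ dl1·dl2 / |X2-X1|。"""
    p1, dl1 = loop_points(r1, 0.0, n)
    p2, dl2 = loop_points(r2, dz, n)
    total = 0.0
    for i in range(n):
        rvec = p2 - p1[i]                       # X2 - X1
        dist = np.linalg.norm(rvec, axis=1)
        total += np.sum(dl2 @ dl1[i] / dist)    # dl1·dl2 / 距離
    return MU0 / (4.0 * np.pi) * total

# 例:半徑 0.1 m 的兩線圈,軸向相距 0.05 m;數量級約 1e-7 H
print(mutual_inductance(0.1, 0.1, 0.05))
```

交換兩迴路角色重算,可驗證文中所說的對稱性 M12 = M21。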

心緒為之『震動』,不知那『Magnetic monopole』找到了沒?誰人折桂乎??

磁單極子

絕對無法從磁棒製備出磁單極子。假設將磁棒一切為二,則不會發生一半是指北極,另一半是指南極的狀況,而會是切開的每一個部分都有其自己的指北極與指南極。

理論物理學中,磁單極子是假設的僅帶有北極或南極的單一磁極的基本粒子(類似於只帶負電荷的電子),它們的磁感線分布類似於點電荷的電場線分布。更專業地說,這種粒子是一種帶有一個單位「磁荷」(類比於電荷)的粒子。科學界之所以對磁單極子如此感興趣,是因為磁單極子在粒子物理學當中的重要性,尚未被實驗證實的大一統理論超弦理論都預測了它的存在。這種物質的存在性在科學界時有紛爭,截至2018年,尚未發現以基本粒子形式存在的磁單極子。可以說是21世紀物理學界重要的研究主題之一。一般觀測到的磁雙極可能是由兩個相反方向且非常難分割的磁單極子組成 。[來源請求]

按照目前已被證實的物理學理論,磁現象是由運動電荷產生的,尚未證實磁單極子的存在。

非孤立的磁單極准粒子確實存在於某些凝聚體物質系統中,人工磁單極子已經被德國的一組研究者成功地製造出來。[1]但它們並非假設的基本粒子。

歷史

1269年,彼德勒斯·佩雷格林納斯在一封書信裏提到,磁石必會有兩極,「南極」與「北極」。[2]19世紀早期,安德烈-馬里·安培將這論述提升為假說[3]:19

目前的庫侖定律只是針對電的定律,實際上當時,查爾斯·庫侖也提出了磁的庫侖定律,認為兩個磁荷間受到的力,與兩磁荷所帶磁荷量的乘積成正比,與兩個磁荷間的距離的平方成反比。但是,後來,隨著安培定律等的發現,人們逐漸意識到,磁現象是由運動電荷產生的,沒有獨立的磁荷,因此,磁的庫侖定律就被拋棄了。[來源請求]

英國物理學家保羅·狄拉克在1931年給出磁荷的量子理論。[4]他的論文闡明,假若在宇宙裏有任何磁荷存在,則所有在宇宙裏的電荷量必須量子化。這條件稱為「狄拉克量子化條件」。物理學者做實驗發現,電荷量的基本單位為基本電荷,這事實與磁單極子的存在相符合,但並未證實磁單極子的存在。[5]:362
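文中的「狄拉克量子化條件」,以高斯單位制的常見寫法補記如下(e 為電荷,g 為磁荷,n 為整數;此為筆者所補的標準形式):

\displaystyle \frac{e\,g}{\hbar c}={\frac {n}{2}}

故只要宇宙中存在一個磁單極子,電荷量便必須是基本單位的整數倍。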

自此之後,許多物理學家開始了尋找磁單極子的工作。通過種種方式尋找磁單極子,包括使用粒子加速器人工製造磁單極子,均無收穫。1975年,美國的科學家利用高空氣球來探測地球大氣層外的宇宙輻射時偶然發現了一條軌跡,當時科學家們分析認為這條軌跡便是磁單極子所留下的軌跡。1982年2月14日,在美國史丹福大學物理系做研究的布拉斯·卡布雷拉宣稱他利用超導線圈發現了磁單極子,然而事後他在重複他先前的實驗時卻未得到先前探測到的磁單極子,最終未能證實磁單極子的存在。內森·塞伯格(Nathan Seiberg)和愛德華·維騰兩位美國物理學家於1994年首次證明出磁單極子存在理論上的可能性。

概念

如果將帶有磁性的金屬棒截斷為二,新得到的兩根磁棒則會「自動地」產生新的磁場,重新編排磁場的北極、南極;原先的南北兩極在截斷磁棒後會變成四極,各磁棒一南一北。如果繼續截下去,磁場也同時會繼續改變磁場的分布,每段磁棒總是會有相應的南北兩極。不少科學家因此認為磁極在宇宙中總是南北兩極互補分離、成對地出現,對磁單極子的存在質疑。也有理論認為,磁單極子不是以基本粒子的形式存在,而是以自旋冰(spin ice)等奇異的凝聚體物質系統中的出射粒子的形式存在[6]。

馬克士威方程組

馬克士威的電磁學方程組將電場、磁場及電荷的運動聯繫在一起。標準的馬克士威方程式中只描述了電荷,而假定不存在「磁荷」。除了這一點不同以外,馬克士威方程式在電場和磁場的互換中具有對稱性。事實上,如果假定所有的電荷都為零(因此電流也為零),則可以寫出具有完全對稱性的馬克士威方程式,這實際上就是得出電磁波方程式的方法。

當然,還有另一種方法來寫出具有完全對稱性的馬克士威方程組,那就是允許與電荷相似的「磁荷」的存在。這樣方程組中就會出現「磁荷密度」ρm這個變量,於是方程組中也就又會出現「磁流密度」jm這個變量。

但如果磁荷實際上不存在,或者它不在宇宙中任何地方出現,那麼方程組中的這些新變量就都為 0,延伸後的馬克士威方程組就自然退化為通常的電磁學方程組,例如 ∇⋅B = 0(這裡 ∇⋅ 代表散度,而 B 是磁場)。
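順著上文,允許 ρm 與 jm 之後,對稱化的馬克士威方程組可寫為(SI 制的一種常見寫法,筆者補記):

\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho _{e}}{\varepsilon _{0}}},\qquad \nabla \cdot \mathbf {B} =\mu _{0}\rho _{m}

\displaystyle \nabla \times \mathbf {E} =-\mu _{0}\mathbf {j} _{m}-{\frac {\partial \mathbf {B} }{\partial t}},\qquad \nabla \times \mathbf {B} =\mu _{0}\mathbf {j} _{e}+\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}

令 ρm = 0、jm = 0,即退化為通常的形式。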

左圖:靜電荷和靜磁荷所產生的場。右圖:以速度 v 運動時,電荷激起 B 場,而磁荷激起 E 場。帶荷粒子移動的方向即是電流和磁流的方向。

右圖:電偶極矩 d 的電偶極所產生的 E 場。左下圖:數學上由兩個磁單極子組成的磁偶極矩 m 的磁偶極所產生的 B 場。右下圖:存在於真實物質之中的自然的磁偶極矩 m 的磁偶極(不由磁單極子組成)所產生的 B 場。

 

還是它果真由不可思議的『電子』這樣的『點粒子 Spin 』所緣生的耶??!!

自旋

量子力學中,自旋英語:Spin)是粒子所具有的內稟性質,其運算規則類似於古典力學角動量,並因此產生一個磁場。雖然有時會與古典力學中的自轉(例如行星公轉時同時進行的自轉)相類比,但實際上本質是迥異的。古典概念中的自轉,是物體對於其質心旋轉,比如地球每日的自轉是順著一個通過地心的極軸所作的轉動。

首先對基本粒子提出自轉與相應角動量概念的,是1925年的拉爾夫·克羅尼希、喬治·烏倫貝克與山繆·古德斯密特三人。他們在處理電子的磁場理論時,把電子想像為一個帶電的球體,自轉因而產生磁場。後來在量子力學中,透過理論以及實驗驗證發現基本粒子可視為是不可分割的點粒子,所以物體自轉無法直接套用到自旋角動量上來,因此僅能將自旋視為一種內稟性質,為粒子與生俱來帶有的一種角動量,並且其量值是量子化的,無法被改變(但自旋角動量的指向可以透過操作來改變)。

自旋對原子尺度的系統格外重要,諸如單一原子質子電子甚至是光子,都帶有正半奇數(1/2、3/2等等)或含零正整數(0、1、2)的自旋;半整數自旋的粒子被稱為費米子(如電子),整數的則稱為玻色子(如光子)。複合粒子也帶有自旋,其由組成粒子(可能是基本粒子)之自旋透過加法所得;例如質子的自旋可以從夸克自旋得到。

概論

自旋角動量是系統的一個可觀測量,它在空間中的三個分量和軌道角動量一樣滿足相同的對易關係。每個粒子都具有特有的自旋。粒子自旋角動量遵從角動量的普遍規律 \displaystyle p={\sqrt {J(J+1)}}\,\hbar ,其中 J 為自旋角動量量子數,J = 0、1/2、1、3/2、……。自旋為半奇數的粒子稱為費米子,服從費米-狄拉克統計;自旋為0或整數的粒子稱為玻色子,服從玻色-愛因斯坦統計。複合粒子的自旋是其內部各組成部分之間相對軌道角動量和各組成部分自旋的向量和,即按量子力學中角動量相加法則求和。已發現的粒子中,自旋為整數的,最大自旋為4;自旋為半奇數的,最大自旋為3/2。
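以電子為例代入上式:J = 1/2,自旋角動量的大小為

\displaystyle p={\sqrt {{\tfrac {1}{2}}\left({\tfrac {1}{2}}+1\right)}}\,\hbar ={\frac {\sqrt {3}}{2}}\,\hbar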

自旋是微觀粒子的一種性質,沒有古典對應,是一種全新的內稟自由度。自旋為半奇數的物質粒子服從包立不相容原理

發展史

自旋的發現,首先出現在鹼金屬元素的發射光譜課題中。於1924年,包立首先引入他稱為是「雙值量子自由度」(two-valued quantum degree of freedom),與最外殼層的電子有關。這使他可以形式化地表述包立不相容原理,即沒有兩個電子可以在同一時間共享相同的量子態

包立的「自由度」的物理解釋最初是未知的。拉爾夫·克勒尼希,朗德的一位助手,於1925年初提出它是由電子的自轉產生的。當包立聽到這個想法時,他予以嚴厲的批駁,他指出為了產生足夠的角動量,電子的假想表面必須以超過光速運動。這將違反相對論。很大程度上由於包立的批評,克勒尼希決定不發表他的想法。

當年秋天,兩個年輕的荷蘭物理學家產生了同樣的想法,他們是烏倫貝克和撒穆爾·古德施密特。在保羅·埃倫費斯特的建議下,他們以一個小篇幅發表了他們的結果。它得到了正面的反應,特別是在雷沃林·托馬斯消除了實驗結果與烏倫貝克和古德施密特的(以及克勒尼希未發表的)計算之間相差兩倍的矛盾係數之後。這個矛盾是由於電子指向的切向結構必須納入計算,附加到它的位置上;以數學語言來說,需要一個纖維叢描述。切向叢效應是相加性的和相對論性的(比如在c趨近於無限時它消失了);在沒有考慮切向空間朝向時其值只有一半,而且符號相反。因此這個複合效應與後來的相差了一個係數2(參見:湯瑪斯進動)。

儘管他最初反對這個想法,包立還是在1927年形式化了自旋理論,運用了埃爾文·薛丁格沃納·海森堡發現的現代量子力學理論。他開拓性地使用包立矩陣作為一個自旋算子的群表述,並且引入了一個二元旋量波函數。

包立的自旋理論是非相對論性的。然而,在1928年,保羅·狄拉克發表了狄拉克方程式,描述了相對論性的電子。在狄拉克方程式中,一個四元旋量(所謂的「狄拉克旋量」)被用於電子波函數。在1940年,包立證明了「自旋統計定理」,它表述了費米子具有半整數自旋,玻色子具有整數自旋。

 

 

 

 

 

 

 

 

 

STEM 隨筆︰古典力學︰轉子【五】《電路學》四【電容】IV‧Laplace‧E

派生碼訊

卯 兔

老子《道德經》第四十章

反者道之動,
弱者道之用。
天下萬物生于有,
有生于無。

易 ︰《 易 》易曰︰復:亨。 出入無疾,朋來無咎。 反復其道,七日來復,利有攸往 。彖曰:復亨﹔剛反,動而以順行,是以出入無疾,朋來無咎。 反復其道,七日來復,天行也。 利有攸往,剛長也。 復其見天地之心乎?象曰:雷在地中,復﹔先王以至日閉關,商旅不行,后不省方。

派︰《 λ 運算︰概念導引之《補充》※真假祇是個選擇?? 》文中講︰作者不知義大利羅馬的『真理之口』將會如何來決定『何謂是真 ?』而『什麼又是假』的呢??

又為什麼『真』與『假』的 λ表達式是

TRUE =_{df} (\lambda x. ( \lambda y. x))
FALSE =_{df} (\lambda x .( \lambda y. y))

的呢?如果我們將『運算』看成『黑箱』,用『實驗』的方法來『研究』輸入輸出的『關係』,這一組有兩個輸入端的黑箱,對於任意的輸入『二元組』pair  (u, v),有︰

(((\lambda x. ( \lambda y. x)) u) v) = u
(((\lambda x. ( \lambda y. y)) u) v) = v

,於是將結論歸結成︰貼『真』標籤的箱子的『作用』是『選擇』輸入的『第一項』將之輸出;而貼『假』標籤的箱子的『作用』是『選擇』輸入的『第二項』將之輸出。
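這個『黑箱實驗』可以直接用 Python 的 lambda 重演(筆者的示意草稿):

```python
# 以 Python 的 lambda 改寫上述 λ 表達式,觀察其對二元組 (u, v) 的選擇作用
TRUE  = lambda x: lambda y: x   # (λx.(λy.x))
FALSE = lambda x: lambda y: y   # (λx.(λy.y))

u, v = "第一項", "第二項"
print(TRUE(u)(v))    # 第一項:TRUE 選擇輸入的第一項
print(FALSE(u)(v))   # 第二項:FALSE 選擇輸入的第二項

# 邱奇自然數 0 = (λf.(λx.x)) 與 FALSE 結構相同:
ZERO = lambda f: lambda x: x
print(ZERO(u)(v))    # 第二項:單從行為上,與 FALSE 無從區別
```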

假使一位『軟體』工程師在函式『除錯』時,可能會採取在那個『函式』內『輸出』看看得到的『輸入』參數值是否正確?

於是將結論歸結成︰『真』標識符的函式『作用』是『選擇』輸入參數的『第一項』;『假』標識符的函式『作用』是『選擇』輸入參數的『第二項』。

那麼對一個已經打開的『白箱』,又知道作用的『函式』,怎麼會概念上『一頭霧水』的呢??如果細思一個邱奇自然數『 0 』, 0 =_{df} (\lambda f. ( \lambda x. x)),這跟『假』的 λ表達式有什麼不一樣的呢?那難道我們能說『0』就是『假』的嗎?在《布林代數》中的『0』與『1』其實是未定義的『兩態』基元概念── 就像歐式幾何學裡的『點』、『線』和『面』是『基本』概念一樣 ──,因此不管說它是『電壓高低』或者講它是『電流有無』的『數位設計』可以應用布林代數。要是我們將『0』『1』與『真』『假』概念連繫起來看,『布林邏輯』就是『真假』是什麼的『系統化』之概念內涵開展,它的『整體內容』呈現『兩態邏輯』的『方方面面』,縱使至於『孤虛』NAND 一個邏輯概念就足夠了,對於『 0 與 1 』概念本身還是『三緘其口』。……

『孤虛者』有言︰

物有無者,非真假也。苟日新,日日新,又日新。真假者,物之論也。論也者,當或不當而已矣。故世有孤虛者,言有孤虛論。

可以『中行獨復,以從道也。』,不至『迷復,凶』矣!

試問彼此井通,『彼』之『出』為『此』之『入』;『此』之『出』為『彼』之『入』。若以『此』觀『出入』者,實乃『彼』之『入出』也。故知所謂『出入』,相對『己我』所定之『名義』,存立論之所也。因而推知『有無』者『天地』之『然或不然』;『真假』者『理則』之『當或不當』。倘將『有無』匹配『真假』,終有『正反』兩說,『正言正說』── 真有,假無 ── 以及『正言若反』── 真無,假有 ── ,各站其『立場』者耶!!

生︰《 網 》網上說︰

Design How-To
 

Logic 101 – Part 2 – Positive vs Negative Logic

Clive Maxfield
11/21/2006 04:00 AM EST

The terms positive logic and negative logic refer to two conventions that dictate the relationship between logical values and the physical voltages used to represent them. Unfortunately, although the core concepts are relatively simple, fully comprehending all of the implications associated with these conventions requires an exercise in lateral thinking sufficient to make even the strongest amongst us break down and weep!

Before plunging into the fray, it is important to understand that logic 0 and logic 1 are always equivalent to the Boolean logic concepts of False and True, respectively (unless you’re really taking a walk on the wild side, in which case all bets are off). The reason these terms are used interchangeably is that digital functions can be considered to represent either logical or arithmetic operations (Fig 1).


1. Logical versus arithmetic views of a digital function.
 

Having said this, it is generally preferable to employ a single consistent format to cover both cases, and it is easier to view logical operations in terms of “0s” and “1s” than it is to view arithmetic operations in terms of “Fs” and “Ts”. The key point to remember as we go forward is that logic 0 and logic 1 are logical concepts that have no direct relationship to any physical values.

Physical-to-abstract mapping (NMOS logic)
OK, let’s gird up our loins and meander our way through the morass one step at a time. The process of relating logical values to physical voltages begins by defining the frames of reference to be used. One absolute frame of reference is provided by truth tables, which are always associated with specific functions (Fig 2).


2. Absolute relationships between truth tables and functions.
 

Another absolute frame of reference is found in the physical world, where specific voltage levels applied to the inputs of a digital function cause corresponding voltage responses on the outputs. These relationships can also be represented in truth table form. Consider a logic gate constructed using only NMOS transistors (Fig 3).


3. The physical mapping of an NMOS logic gate.
 

With NMOS transistors connected as shown in Fig 3, an input connected to the more negative Vss turns that transistor OFF, and an input connected to the more positive Vdd turns that transistor ON. The final step is to define the mapping between the physical and abstract worlds; either 0v is mapped to False and +ve is mapped to True, or vice versa (Fig 4).


4. The physical to abstract mapping of an NMOS logic gate.
 

Using the positive logic convention, the more positive potential is considered to represent True and the more negative potential is considered to represent False (hence, positive logic is also known as positive-true). By comparison, using the negative logic convention, the more negative potential is considered to represent True and the more positive potential is considered to represent False (hence, negative logic is also known as negative-true). Thus, this circuit may be considered to be performing either a NAND function in positive logic or a NOR function in negative logic. (Are we having fun yet?)

─── 《M♪O 之學習筆記本《卯》基件︰【䷗】正言若反》
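上引 Maxfield 文中「同一電路,正邏輯讀作 NAND、負邏輯讀作 NOR」的論點,可用一張小小的真值表程式驗證(Python 示意草稿;電壓表按上文 Fig 3 的 NMOS 閘行為,為筆者所假設):

```python
# 同一張「電壓表」在正/負邏輯慣例下的兩種讀法
VOLTAGE_TABLE = {            # (a, b) -> y:NMOS 閘的物理行為
    ('0v', '0v'): '+ve',
    ('0v', '+ve'): '+ve',
    ('+ve', '0v'): '+ve',
    ('+ve', '+ve'): '0v',
}

def read_as(convention):
    """依正/負邏輯慣例,把電壓表翻譯成布林真值表。"""
    true_level = '+ve' if convention == 'positive' else '0v'
    to_bool = lambda level: level == true_level
    return {(to_bool(a), to_bool(b)): to_bool(y)
            for (a, b), y in VOLTAGE_TABLE.items()}

print(read_as('positive'))  # 只有 (True, True) -> False:NAND
print(read_as('negative'))  # 只有 (False, False) -> True:NOR
```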

 

『純』者,白賁无咎乎?用之於『數學』,思維奔馳於『抽象世界』中;用之於『物理』,想象悠遊於『宇宙自然』裡!

不知就『純邏輯』而言,它們是否都算『應用』耶?

『孤虛者』借物寓意,『數學』不必是『物理』也!?

所以在科學以及工程實務領域,鮮少談及

『兩邊‧拉普拉斯變換』哩?!

Two-sided Laplace transform

In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability‘s moment generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, and the ordinary or one-sided Laplace transform. If ƒ(t) is a real or complex valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral

\displaystyle {\mathcal {B}}\{f\}(s)=F(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.

The integral is most commonly understood as an improper integral, which converges if and only if each of the integrals

\displaystyle \int _{0}^{\infty }e^{-st}f(t)\,dt,\quad \int _{-\infty }^{0}e^{-st}f(t)\,dt

exists. There seems to be no generally accepted notation for the two-sided transform; the \displaystyle {\mathcal {B}} used here recalls “bilateral”. The two-sided transform used by some authors is

\displaystyle {\mathcal {T}}\{f\}(s)=s{\mathcal {B}}\{f\}(s)=sF(s)=s\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.

In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function.

In science and engineering applications, the argument t often represents time (in seconds), and the function ƒ(t) often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters, that work like a mathematical operator, but with a restriction. They have to be causal, which means that the output in a given time t cannot depend on an output which is a higher value of t. In population ecology, the argument t often represents spatial displacement in a dispersal kernel.

When working with functions of time, ƒ(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components.

Relationship to other integral transforms

If u is the Heaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the Laplace transform \displaystyle {\mathcal {L}} may be defined in terms of the two-sided Laplace transform by

\displaystyle {\mathcal {L}}\{f\}={\mathcal {B}}\{fu\}.

On the other hand, we also have

\displaystyle {\mathcal {B}}\{f\}={\mathcal {L}}\{f\}+{\mathcal {L}}\{f\circ m\}\circ m,

where \displaystyle m:\mathbb {R} \to \mathbb {R} is the function that multiplies by minus one ( \displaystyle m(x):=-x\quad \forall x\in \mathbb {R} ), so either version of the Laplace transform can be defined in terms of the other.

The Mellin transform may be defined in terms of the two-sided Laplace transform by

\displaystyle {\mathcal {M}}\{f\}={\mathcal {B}}\{f\circ \exp \circ m\},

with \displaystyle m as above, and conversely we can get the two-sided transform from the Mellin transform by

\displaystyle {\mathcal {B}}\{f\}={\mathcal {M}}\{f\circ m\circ \log \}.

The Fourier transform may also be defined in terms of the two-sided Laplace transform; here instead of having the same image with differing originals, we have the same original but different images. We may define the Fourier transform as

\displaystyle {\mathcal {F}}\{f(t)\}=F(s=i\omega )=F(\omega ).

Note that definitions of the Fourier transform differ, and in particular

\displaystyle {\mathcal {F}}\{f(t)\}=F(s=i\omega )={\frac {1}{\sqrt {2\pi }}}{\mathcal {B}}\{f(t)\}(s)

is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as

\displaystyle {\mathcal {B}}\{f(t)\}(s)={\mathcal {F}}\{f(t)\}(-is).

The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a strip \displaystyle a<\Im (s)<b which may not include the real axis.

The moment-generating function of a continuous probability density function ƒ(x) can be expressed as \displaystyle {\mathcal {B}}\{f\}(-s) .

……

Properties

It has basically the same properties as the unilateral transform, with an important difference:

Properties of the unilateral Laplace transform
  Time domain | unilateral-‘s’ domain | bilateral-‘s’ domain
  Differentiation: \displaystyle f'(t) | \displaystyle sF(s)-f(0) | \displaystyle sF(s)
  Second differentiation: \displaystyle f''(t) | \displaystyle s^{2}F(s)-sf(0)-f'(0) | \displaystyle s^{2}F(s)

………

Causality

Bilateral transforms do not respect causality. They make sense when applied over generic functions but when working with functions of time (signals) unilateral transforms are preferred.
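舉個可以動手驗算的例子(SymPy 草稿,f(t) = e^{−|t|} 為筆者所取):左右兩半積分各自收斂域的交集,正是雙邊變換的收斂帶 −1 < Re(s) < 1:

```python
# 以 SymPy 分段計算雙邊拉普拉斯變換
import sympy as sp

t = sp.symbols('t', real=True)
s = sp.symbols('s')

right = sp.integrate(sp.exp(-t) * sp.exp(-s * t), (t, 0, sp.oo),
                     conds='none')   # 1/(s+1),於 Re(s) > -1 收斂
left  = sp.integrate(sp.exp(t) * sp.exp(-s * t), (t, -sp.oo, 0),
                     conds='none')   # 1/(1-s),於 Re(s) < 1 收斂
print(sp.simplify(right + left))     # 等價於 2/(1 - s**2)
```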

 

此處略提,不過是歸結耳。

Bilateral Laplace transform

 

When one says “the Laplace transform” without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform or two-sided Laplace transform by extending the limits of integration to be the entire real axis. If that is done the common unilateral transform simply becomes a special case of the bilateral transform where the definition of the function being transformed is multiplied by the Heaviside step function.

The bilateral Laplace transform is defined as follows,

\displaystyle {\mathcal {B}}\{f\}(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.

Inverse Laplace transform

 

Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range. Typical function spaces in which this is true include the spaces of bounded continuous functions, the space L^∞(0, ∞), or more generally tempered functions (that is, functions of at worst polynomial growth) on (0, ∞). The Laplace transform is also defined and injective for suitable spaces of tempered distributions.

In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin’s inverse formula):

\displaystyle f(t)={\mathcal {L}}^{-1}\{F\}(t)={\frac {1}{2\pi i}}\lim _{T\to \infty }\int _{\gamma -iT}^{\gamma +iT}e^{st}F(s)\,ds,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post’s inversion formula. The limit here is interpreted in the weak-* topology.

In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table, and construct the inverse by inspection.
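「查表 + 部分分式」的流程,可用 SymPy 重演(示意草稿,F(s) 為筆者任取):

```python
# 先部分分式分解,再逐項比對已知的變換對
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
F = 1 / (s * (s + 1))

print(sp.apart(F, s))                          # 1/s - 1/(s + 1)
print(sp.inverse_laplace_transform(F, s, t))   # 1 - exp(-t)(可能附 Heaviside 因子)
```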

 

 

 

 

 

 

 

 

STEM 隨筆︰古典力學︰轉子【五】《電路學》四【電容】IV‧Laplace‧D‧後

時至棒擊球飛之際︰

 

且已明白『時移算子』意義知其確指︰

Time-invariant system

A time-invariant (TIV) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a “time-varying system”.

Mathematically speaking, “time-invariance” of a system is the following property:

Given a system with a time-dependent output function \displaystyle y(t) , and a time-dependent input function \displaystyle x(t) ; the system will be considered time-invariant if a time-delay on the input \displaystyle x(t+\delta ) directly equates to a time-delay of the output \displaystyle y(t+\delta ) function. For example, if time \displaystyle t is “elapsed time”, then “time-invariance” implies that the relationship between the input function \displaystyle x(t) and the output function \displaystyle y(t) is constant with respect to time \displaystyle t :
\displaystyle y(t)=f(x(t),t)=f(x(t))

In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output.

In the context of a system schematic, this property can also be stated as follows:

If a system is time-invariant then the system block commutes with an arbitrary delay.

If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.

Abstract example

We can denote the shift operator by \displaystyle \mathbb {T} _{r} where \displaystyle r is the amount by which a vector’s index set should be shifted. For example, the “advance-by-1” system

\displaystyle x(t+1)=\,\!\delta (t+1)*x(t)

can be represented in this abstract notation by

\displaystyle {\tilde {x}}_{1}=\mathbb {T} _{1}\,{\tilde {x}}

where \displaystyle {\tilde {x}} is a function given by

\displaystyle {\tilde {x}}=x(t)\,\forall \,t\in \mathbb {R}

with the system yielding the shifted output

\displaystyle {\tilde {x}}_{1}=x(t+1)\,\forall \,t\in \mathbb {R}

So \displaystyle \mathbb {T} _{1} is an operator that advances the input vector by 1.

Suppose we represent a system by an operator \displaystyle \mathbb {H} . This system is time-invariant if it commutes with the shift operator, i.e.,

\displaystyle \mathbb {T} _{r}\,\mathbb {H} =\mathbb {H} \,\mathbb {T} _{r}\,\,\forall \,r

If our system equation is given by

\displaystyle {\tilde {y}}=\mathbb {H} \,{\tilde {x}}

then it is time-invariant if we can apply the system operator \displaystyle \mathbb {H} on \displaystyle {\tilde {x}} followed by the shift operator \displaystyle \mathbb {T} _{r} , or we can apply the shift operator \displaystyle \mathbb {T} _{r} followed by the system operator \displaystyle \mathbb {H} , with the two computations yielding equivalent results.

Applying the system operator first gives

\displaystyle \mathbb {T} _{r}\,\mathbb {H} \,{\tilde {x}}=\mathbb {T} _{r}\,{\tilde {y}}={\tilde {y}}_{r}

Applying the shift operator first gives

\displaystyle \mathbb {H} \,\mathbb {T} _{r}\,{\tilde {x}}=\mathbb {H} \,{\tilde {x}}_{r}

If the system is time-invariant, then

\displaystyle \mathbb {H} \,{\tilde {x}}_{r}={\tilde {y}}_{r}
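『系統算子與時移算子對易』的判準,不妨用離散序列做個數值小實驗(Python 草稿;系統與訊號皆為筆者假設):

```python
# 檢驗 T_r H = H T_r 是否成立
import numpy as np

x = np.array([0., 1., 2., 3., 2., 1., 0., 0.])

def shift(v, r):
    """時移算子 T_r:右移 r 格,前端補零。"""
    out = np.zeros_like(v)
    out[r:] = v[:-r]
    return out

H_ti = lambda v: v ** 2                    # 非時變:y(t) = x(t)^2
H_tv = lambda v: np.arange(len(v)) * v     # 時變:y(t) = t·x(t)

for name, H in [('time-invariant', H_ti), ('time-varying', H_tv)]:
    commutes = np.allclose(shift(H(x), 2), H(shift(x, 2)))
    print(name, 'commutes with shift:', commutes)
# 前者 True、後者 False:對易性正是時不變性的判準
```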

 

故曉拉普拉斯變換實論述『當下眼前』也◎

Laplace Transform

Properties

Time Delay

The time delay property is not much harder to prove, but there are some subtleties involved in understanding how to apply it.  We’ll start with the statement of the property, followed by the proof, and then followed by some examples.  The time shift property states

We again prove by going back to the original definition of the Laplace Transform

Because

we can change the lower limit of the integral from 0 to a and drop the step function (because it is always equal to one)

We can make a change of variable

The last integral is just the definition of the Laplace Transform, so we have the time delay property
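原頁面此處各行推導以圖形呈現;依其文字脈絡,標準敘述當是(筆者補記):

\displaystyle {\mathcal {L}}\left\{f(t-a)\,\gamma (t-a)\right\}=e^{-as}F(s)

\displaystyle \int _{0^{-}}^{\infty }f(t-a)\gamma (t-a)e^{-st}\,dt=\int _{a}^{\infty }f(t-a)e^{-st}\,dt={\int _{0^{-}}^{\infty }f(u)e^{-s(u+a)}\,du}=e^{-as}F(s) \quad (u=t-a)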

To properly apply the time delay property it is important that both the function and the step that multiplies it are both shifted by the same amount.  As an example, consider the function f(t)=t·γ(t).  If we delay it by 2 seconds we get (t-2)·γ(t-2), not (t-2)·γ(t) or t·γ(t-2).  All four of these functions are shown below.

 

The correct one is exactly like the original function but shifted.

Important: To apply the time delay property you must multiply a delayed version of your function by a delayed step.  If the original function is  g(t)·γ(t), then the shifted function is g(t-td)·γ(t-td) where td is the time delay.

© Copyright 2005 to 2015 Erik Cheever    This page may be freely used for educational purposes.

 

 

 

 

 

 

 

 

STEM 隨筆︰古典力學︰轉子【五】《電路學》四【電容】IV‧Laplace‧D

借由觀察

Laplace Transform

Properties

First Derivative

The first derivative property of the Laplace Transform states

To prove this we start with the definition of the Laplace Transform and integrate by parts

The first term in the brackets goes to zero (as long as f(t) doesn’t grow faster than an exponential which was a condition for existence of the transform).  In the next term, the exponential goes to one.  The last term is simply the definition of the Laplace Transform multiplied by s.  So the theorem is proved.
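此處缺漏的顯示方程式,按標準內容補記如下(筆者):

\displaystyle {\mathcal {L}}\left\{f'(t)\right\}=sF(s)-f(0^{-})

\displaystyle \int _{0^{-}}^{\infty }f'(t)e^{-st}\,dt=\left[f(t)e^{-st}\right]_{0^{-}}^{\infty }+s\int _{0^{-}}^{\infty }f(t)e^{-st}\,dt=-f(0^{-})+sF(s)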

There are two significant things to note about this property:

  • We have taken a derivative in the time domain, and turned it into an algebraic equation in the Laplace domain.  This means that we can take differential equations in time, and turn them into algebraic equations in the Laplace domain.  We can solve the algebraic equations, and then convert back into the time domain (this is called the Inverse Laplace Transform, and is described later).
  • The initial conditions are taken at t=0^-.  This means that we only need to know the initial conditions before our input starts.  This is often much easier than finding them at t=0+.

Second Derivative

Similarly for the second derivative we can show:

where

Nth order Derivative

For the nth derivative:

or

where
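二階與 n 階的標準形式補記如下(筆者;初始項皆取於 0^-):

\displaystyle {\mathcal {L}}\left\{f''(t)\right\}=s^{2}F(s)-sf(0^{-})-f'(0^{-})

\displaystyle {\mathcal {L}}\left\{{\frac {d^{n}f}{dt^{n}}}\right\}=s^{n}F(s)-\sum _{k=1}^{n}s^{n-k}\,{\frac {d^{k-1}f}{dt^{k-1}}}(0^{-})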

Key Concept: The differentiation property of the Laplace Transform

We will use the differentiation property widely.  It is repeated below (for first, second and nth order derivatives)



 

之『微分性質』裡面有

\frac{d^{n} f}{dt^{n}} (0^{-})

『初始項』,如果它們全部為『零』,就是所謂的『零態』 Zero state 了。無怪乎!也有人將之稱作『初始靜止』 Initial rest 條件。它所對應的就是 Zero State Response︰

Zero state response and zero input response in integrator and differentiator circuits

One example of zero state response being used is in integrator and differentiator circuits. By examining a simple integrator circuit it can be demonstrated that when a function is put into a linear time-invariant (LTI) system, an output can be characterized by a superposition or sum of the Zero Input Response and the zero state response.

A system can be represented as

\displaystyle f(t)\ \longrightarrow \ \boxed{\text{system}}\ \longrightarrow \ y(t)=y(t_{0})+\int _{t_{0}}^{t}f(\tau )d\tau

with the input \displaystyle f(t) on the left and the output \displaystyle y(t) on the right.

The output \displaystyle y(t) can be separated into a zero input and a zero state solution with

\displaystyle y(t)=\underbrace {y(t_{0})} _{Zero-input\ response}+\underbrace {\int _{t_{0}}^{t}f(\tau )d\tau } _{Zero-state\ response}.

The contributions of \displaystyle y(t_{0}) and \displaystyle f(t) to output \displaystyle y(t) are additive and each contribution \displaystyle y(t_{0}) and \displaystyle \int _{t_{0}}^{t}f(\tau )d\tau vanishes with vanishing \displaystyle y(t_{0}) and \displaystyle f(t).

This behavior constitutes a linear system. A linear system has an output that is a sum of distinct zero-input and zero-state components, each varying linearly, with the initial state of the system and the input of the system respectively.

The zero input response and zero state response are independent of each other and therefore each component can be computed independently of the other.
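上述 ZIR + ZSR 的疊加,可用積分器做個數值驗證(Python 示意草稿;輸入與初始狀態為筆者假設):

```python
# 驗證「零輸入響應 + 零態響應 = 總響應」(以積分器為例)
import numpy as np

t  = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]
x  = np.sin(t)          # 外部輸入 f(t)
y0 = 2.0                # 初始狀態 y(t0)

zsr   = np.cumsum(x) * dt        # 零態響應:初始狀態為零,只看輸入
zir   = np.full_like(t, y0)      # 零輸入響應:只看初始狀態
total = y0 + np.cumsum(x) * dt   # 直接計算的總輸出

print(np.allclose(total, zir + zsr))   # True:兩部分線性疊加
```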

 

反之 Zero Input Response ,顧名思義是指沒有『外部輸入』時,

\frac{d^{n} f}{dt^{n}} (0^{-})

自生的響應也。

此時再對照拉普拉斯變換的『積分性質』

Integration

The integration theorem states that

We prove it by starting with integration by parts


The first term in the brackets goes to zero if f(t) grows more slowly than an exponential (one of our requirements for existence of the Laplace Transform), and the second term goes to zero because the limits on the integral are equal.  So the theorem is proven
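原頁面的積分性質方程式亦以圖形呈現;其標準敘述為(筆者補記):

\displaystyle {\mathcal {L}}\left\{\int _{0^{-}}^{t}f(\tau )\,d\tau \right\}={\frac {F(s)}{s}}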

Example: Find Laplace Transform of Step and Ramp using Integration Property

Given that the Laplace Transform of the impulse δ(t) is Δ(s)=1, find the Laplace Transform of the step and ramp.

Solution:
We know that

so that

Likewise:
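依上性質逐步補記(筆者;以 Γ(s) 記步階 γ(t) 之變換):步階是衝激的積分,斜坡又是步階的積分,故

\displaystyle \Gamma (s)={\frac {\Delta (s)}{s}}={\frac {1}{s}},\qquad {\mathcal {L}}\left\{t\,\gamma (t)\right\}={\frac {\Gamma (s)}{s}}={\frac {1}{s^{2}}}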

 

當更能體會『微積分基本定理』耶?

那麼又該怎麼理解『拉普拉斯變換』之『前 0^{-} 、中 0 、後 0^{+} 』的呢??

Initial Value Theorem

The initial value theorem states

To show this, we first start with the Derivative Rule:

We then invoke the definition of the Laplace Transform, and split the integral into two parts:

We take the limit as s→∞:

Several simplifications are in order.  In the left hand expression, we can take the second term out of the limit, since it doesn’t depend on ‘s.’  In the right hand expression, we can take the first term out of the limit for the same reason, and if we substitute infinity for ‘s’ in the second term, the exponential term goes to zero:

The two f(0) terms cancel each other, and we are left with the Initial Value Theorem
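初值定理的標準陳述補記如下(筆者):

\displaystyle f(0^{+})=\lim _{s\to \infty }sF(s)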

This theorem only works if F(s) is a strictly proper fraction in which the numerator polynomial is of lower order than the denominator polynomial. In other words it will work for F(s)=1/(s+1) but not F(s)=s/(s+1).

………

Final Value Theorem

The final value theorem states that if a final value of a function exists that

However, we can only use the final value if the value exists (function like sine, cosine and the ramp function don’t have final values).  To prove the final value theorem, we start as we did for the initial value theorem, with the Laplace Transform of the derivative,

We let s→0,

As s→0 the exponential term disappears from the integral.  Also, we can take f(0-) out of the limit (since it doesn’t depend on s)

We can evaluate the integral

Neither term on the left depends on s, so we can remove the limit and simplify, resulting in the final value theorem
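終值定理的標準陳述補記如下(筆者):

\displaystyle \lim _{t\to \infty }f(t)=\lim _{s\to 0}sF(s)

例:F(s)=1/(s(s+1)) 時,sF(s)=1/(s+1)→1,與 f(t)=1−e^{−t} 的極限一致。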

Examples of functions for which this theorem can’t be used are increasing exponentials (like e^{at} where a is a positive number) that go to infinity as t increases, and oscillating functions like sine and cosine that don’t have a final value.

© Copyright 2005 to 2015 Erik Cheever    This page may be freely used for educational purposes.

 

此事最好先知道『狄拉克 δ 函數』之來歷呦!!

Dirac delta function

The Dirac delta function as the limit (in the sense of distributions) of the sequence of zero-centered normal distributions \displaystyle \delta _{a}(x)={\frac {1}{\left|a\right|{\sqrt {\pi }}}}\mathrm {e} ^{-(x/a)^{2}} as \displaystyle a\rightarrow 0 .

In mathematics, the Dirac delta function (δ function) is a generalized function or distribution introduced by the physicist Paul Dirac. It is used to model the density of an idealized point mass or point charge as a function equal to zero everywhere except for zero and whose integral over the entire real line is equal to one.[1][2][3] As there is no function that has these properties, the computations made by the theoretical physicists appeared to mathematicians as nonsense until the introduction of distributions by Laurent Schwartz to formalize and validate the computations. Thus, the Dirac delta function is a linear functional that maps every function to its value at zero.[4][5] The Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1, is a discrete analog of Dirac delta function.

In engineering and signal processing, the delta function, also known as the unit impulse symbol,[6] may be regarded through its Laplace transform, as coming from the boundary values of a complex analytic function of a complex variable. The formal rules obeyed by this function are part of the operational calculus, a standard tool kit of physics and engineering. In many applications, the Dirac delta is regarded as a kind of limit (a weak limit) of a sequence of functions having a tall spike at the origin. The approximating functions of the sequence are thus “approximate” or “nascent” delta functions.

 

然後深入其『動機』哩!?

Motivation and overview

The graph of the delta function is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point.

For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a delta function. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball by only considering the total impulse of the collision without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).

To be specific, suppose that a billiard ball is at rest. At time \displaystyle t=0 it is struck by another ball, imparting it with a momentum P, in \displaystyle {\text{kg m}}/{\text{s}} . The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is \displaystyle P\delta (t) . (The units of \displaystyle \delta (t) are \displaystyle s^{-1} .)

To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval \displaystyle \Delta t . That is,

\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t<\Delta t,\\0&{\text{otherwise}}.\end{cases}}

Then the momentum at any time t is found by integration:

\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t>\Delta t\\Pt/\Delta t&0<t<\Delta t\\0&{\text{otherwise.}}\end{cases}}

Now, the model situation of an instantaneous transfer of momentum requires taking the limit as \displaystyle \Delta t\to 0 , giving

\displaystyle p(t)={\begin{cases}P&t>0\\0&t\leq 0.\end{cases}}

Here the functions \displaystyle F_{\Delta t} are thought of as useful approximations to the idea of instantaneous transfer of momentum.

The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of ordinary calculus) \displaystyle \lim _{\Delta t\to 0}F_{\Delta t} is zero everywhere but a single point, where it is infinite. To make proper sense of the delta function, we should instead insist that the property

\displaystyle \int _{-\infty }^{\infty }F_{\Delta t}(t)\,dt=P,

which holds for all \displaystyle \Delta t>0 , should continue to hold in the limit. So, in the equation \displaystyle F(t)=P\delta (t)=\lim _{\Delta t\to 0}F_{\Delta t}(t) , it is understood that the limit is always taken outside the integral.

In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.

Despite its name, the delta function is not truly a function, at least not a usual one with range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.

 

或許因為它具『偶函數』的性質

Properties

Scaling and symmetry

The delta function satisfies the following scaling property for a non-zero scalar α:[30]

\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}

and so

\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.    

In particular, the delta function is an even distribution, in the sense that

\displaystyle \delta (-x)=\delta (x)

which is homogeneous of degree −1.
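縮放與偶性兩條性質,可用 SymPy 的 DiracDelta 驗證(示意草稿):

```python
# 驗證 δ(αx) = δ(x)/|α| 與 δ(-x) = δ(x)
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')

print(sp.integrate(sp.DiracDelta(2 * x), (x, -sp.oo, sp.oo)))        # 1/2
print(sp.integrate(sp.DiracDelta(-x) * f(x), (x, -sp.oo, sp.oo)))    # f(0):偶性
```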

 

同時『連續函數』 f(x) 亦有

f(x) = \frac{f(x^{-}) + f(x^{+})}{2} 之理據,

故爾還是

\displaystyle \Delta (t)={\begin{cases}0,&t \le - \delta t \\\frac{1}{2 \delta t},& - \delta t < t < \delta t \\0,&t \ge \delta t \end{cases}}

\lim \limits_{\delta t \to 0} \Delta (t) = \delta (t)

以及

\displaystyle h (t)={\begin{cases}0,&t \le - \delta t \\ \frac{1}{2} + \frac{1}{2 \delta t} t,& - \delta t < t < \delta t \\1,&t \ge \delta t \end{cases}}

\lim \limits_{\delta t \to 0} \frac{d \ h(t)}{dt}  = \delta (t)

 

比較清晰明白的吧?!

 

 

 

 

 

 

 

 

STEM 隨筆︰古典力學︰轉子【五】《電路學》四【電容】IV‧Laplace‧D‧前

微積分基本定理描述了微積分的兩個主要運算──微分積分之間的關係。

定理的第一部分,稱為微積分第一基本定理,表明不定積分是微分的逆運算。這一部分定理的重要之處在於它保證了某連續函數原函數的存在性。

定理的第二部分,稱為微積分第二基本定理或「牛頓-萊布尼茨公式」,表明定積分可以用無窮多個原函數的任意一個來計算。這一部分有很多實際應用,這是因為它大大簡化了定積分的計算。[1]

該定理的一個特殊形式,首先由詹姆斯·格里高利(1638-1675)證明和出版。[2]定理的一般形式,則由艾薩克·巴羅完成證明。

微積分基本定理表明,一個變量在一段時間之內的無窮小變化之和,等於該變量的淨變化。

我們從一個例子開始。假設有一個物體在直線上運動,其位置為 x(t),其中 t 為時間,x(t) 意味著 x 是 t 的函數。這個函數的導數等於位置的無窮小變化 dx 除以時間的無窮小變化 dt(當然,該導數本身也與時間有關)。我們把速度定義為位置的變化除以時間的變化。用萊布尼茲記法︰

\displaystyle {\frac {dx}{dt}}=v(t).

整理,得

\displaystyle dx=v(t)\,dt.

根據以上的推理, \displaystyle x 的變化── \displaystyle \Delta x ,是無窮小變化 \displaystyle dx 之和。它也等於導數和時間的無窮小乘積之和。這個無窮的和,就是積分;所以,一個函數求導之後再積分,得到的就是原來的函數。我們可以合理地推斷,這個運算反過來也成立,積分之後再求導,得到的也是原來的函數。

歷史

詹姆斯·格里高利首先發表了該定理基本形式的幾何證明[3][4][5],艾薩克·巴羅證明了該定理的一般形式[6]。巴羅的學生牛頓使微積分的相關理論得以完善。萊布尼茨使相關理論系統化,並引入了沿用至今的微積分符號。

正式表述

微積分基本定理有兩個部分,第一部分是關於原函數的導數,第二部分描述了原函數和定積分之間的關係。

第一部分 / 第一基本定理

設 \displaystyle a,b\in \mathbb {R} ,設 \displaystyle f:[a,b]\longrightarrow \mathbb {R} 為黎曼可積的函數,定義

\displaystyle F:{\begin{cases}[a,b]&\longrightarrow \mathbb {R} \\x&\longmapsto \int _{a}^{x}\!f(t)\,dt\end{cases}}

如果 f 在 [a,b] 連續,則 F 在 [a,b] 上可微,且對任一 \displaystyle x\in [a,b] 有 \displaystyle F'(x)=f(x) 。

第二部分 / 第二基本定理

\displaystyle a,b\in \mathbb {R} \quad a<b ,設 \displaystyle f,F:[a,b]\longrightarrow \mathbb {R} ,滿足

那麼,若 f 黎曼可積(例如 f 連續),則我們有

\displaystyle \int _{a}^{b}\,f(t)\,dt\,=F(b)-F(a)

牛頓-萊布尼茨公式(動畫)

─── 摘自維基百科《微積分基本定理》詞條

 

從泛函分析的角度來說,微積分是研究兩個線性算子:微分算子 \displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}} 和不定積分算子 \displaystyle \int _{0}^{t}

 

我們知道『微分』 \frac{d}{dt} 是『線性算子』︰

\frac{d}{dt} \left[ a \cdot f(t) + b \cdot g(t) \right] = a \cdot \frac{d \ f(t)}{dt} + b \cdot \frac{d \ g(t)}{dt}

也知道『積分』是『線性算子』︰

\int \limits_{0}^{t} \left[ a \cdot f(\tau) + b \cdot g(\tau) \right] \ d{\tau} = a \cdot \int\limits_{0}^{t} f(\tau) \ d{\tau} + b \cdot \int\limits_{0}^{t} g(\tau) \ d{\tau}

 

甚至將它稱之為『反導數』︰

Antiderivative

In calculus, an antiderivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration) and its opposite operation is called differentiation, which is the process of finding a derivative.

Antiderivatives are related to definite integrals through the fundamental theorem of calculus: the definite integral of a function over an interval is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.

The discrete equivalent of the notion of antiderivative is antidifference.

 

然而『微積分基本定理』表明 \displaystyle \int _{0}^{t} \neq {\frac {\mathrm {d} }{\mathrm {d} t}}^{-1} 呦!

舉例來說︰

假設 \frac{d \ y(t)}{dt} = x(t) ,而且 y(0) = y_0 ,那麼

y(t) = y_0 + \int_{0}^{t} x(\tau) \ d{\tau}

故知『形式推演』

\hat{L} \ y(t) = x(t) \ \Longrightarrow \ y(t) = \hat{L}^{-1} \ x(t)

能不嚴謹乎?

這個 y_0 就是所謂的『初始條件』 initial condition 也!!
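這個『積分並非微分之逆』的論點,可用 SymPy 具體演示(筆者草稿,y(t) 為任取之例):

```python
# 演示:先微分再積分,差了一個初始值;先積分再微分,才完全還原
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
y = sp.exp(-t) + 3          # 任取一個 y(t),y(0) = 4

dy_then_int = sp.integrate(sp.diff(y, t).subs(t, tau), (tau, 0, t))
print(sp.simplify(dy_then_int))   # exp(-t) - 1 = y(t) - y(0):少了 y_0

int_then_dy = sp.diff(sp.integrate(y.subs(t, tau), (tau, 0, t)), t)
print(sp.simplify(int_then_dy))   # exp(-t) + 3 = y(t):這個方向才還原
```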

或許正因

y_1(t) = y_0 + \int_{0}^{t} x_1 (\tau) \ d{\tau}

y_2(t) = y_0 + \int_{0}^{t} x_2 (\tau) \ d{\tau}

 

\therefore y_1(t) - y_2(t) = \int_{0}^{t} \left[ x_1 (\tau) - x_2(\tau) \right] \ d{\tau}

易產生誤解耶??

所以方才強調 ZSR 與 ZIR 之區別哩!!??

Zero state response

In electrical circuit theory, the zero state response (ZSR), also known as the forced response is the behavior or response of a circuit with initial state of zero. The ZSR results only from the external inputs or driving functions of the circuit and not from the initial state. The ZSR is also called the forced or driven response of the circuit.

The total response of the circuit is the superposition of the ZSR and the ZIR, or Zero Input Response. The ZIR results only from the initial state of the circuit and not from any external drive. The ZIR is also called the natural response, and the resonant frequencies of the ZIR are called the natural frequencies. Given a description of a system in the s-domain, the zero-state response can be described as Y(s)=Init(s)/a(s) where a(s) and Init(s) are system-specific.

 

其實『初始值』 IVP 

Initial value problem

In mathematics, the field of differential equations, an initial value problem (also called the Cauchy problem by some authors) is an ordinary differential equation together with a specified value, called the initial condition, of the unknown function at a given point in the domain of the solution. In physics or other sciences, modeling a system frequently amounts to solving an initial value problem; in this context, the differential initial value is an equation that is an evolution equation specifying how, given initial conditions, the system will evolve with time.

Definition

An initial value problem is a differential equation

\displaystyle y'(t)=f(t,y(t)) with \displaystyle f\colon \Omega \subset \mathbb {R} \times \mathbb {R} ^{n}\to \mathbb {R} ^{n} , where \displaystyle \Omega is an open set of \displaystyle \mathbb {R} \times \mathbb {R} ^{n} ,

together with a point in the domain of \displaystyle f

\displaystyle (t_{0},y_{0})\in \Omega ,

called the initial condition.

A solution to an initial value problem is a function \displaystyle y that is a solution to the differential equation and satisfies

\displaystyle y(t_{0})=y_{0}.

In higher dimensions, the differential equation is replaced with a family of equations \displaystyle y_{i}'(t)=f_{i}(t,y_{1}(t),y_{2}(t),\dotsc ) , and \displaystyle y(t) is viewed as the vector \displaystyle (y_{1}(t),\dotsc ,y_{n}(t)) . More generally, the unknown function \displaystyle y can take values on infinite dimensional spaces, such as Banach spaces or spaces of distributions.

Initial value problems are extended to higher orders by treating the derivatives in the same way as an independent function, e.g. \displaystyle y''(t)=f(t,y(t),y'(t)) .

Existence and uniqueness of solutions

For a large class of initial value problems, the existence and uniqueness of a solution can be illustrated through the use of a calculator.

The Picard–Lindelöf theorem guarantees a unique solution on some interval containing t0 if ƒ is continuous on a region containing t0 and y0 and satisfies the Lipschitz condition on the variable y. The proof of this theorem proceeds by reformulating the problem as an equivalent integral equation. The integral can be considered an operator which maps one function into another, such that the solution is a fixed point of the operator. The Banach fixed point theorem is then invoked to show that there exists a unique fixed point, which is the solution of the initial value problem.

An older proof of the Picard–Lindelöf theorem constructs a sequence of functions which converge to the solution of the integral equation, and thus, the solution of the initial value problem. Such a construction is sometimes called “Picard’s method” or “the method of successive approximations”. This version is essentially a special case of the Banach fixed point theorem.

Hiroshi Okamura obtained a necessary and sufficient condition for the solution of an initial value problem to be unique. This condition has to do with the existence of a Lyapunov function for the system.

In some situations, the function ƒ is not of class C1, or even Lipschitz, so the usual result guaranteeing the local existence of a unique solution does not apply. The Peano existence theorem however proves that even for ƒ merely continuous, solutions are guaranteed to exist locally in time; the problem is that there is no guarantee of uniqueness. The result may be found in Coddington & Levinson (1955, Theorem 1.3) or Robinson (2001, Theorem 2.6). An even more general result is the Carathéodory existence theorem, which proves existence for some discontinuous functions ƒ.

 

問題古早勒,來自於『牛頓第二運動定律』

\vec{F}(t) = m \cdot \vec{a} (t) = m \cdot \frac{d^2}{{dt}^2} \vec{r} (t)

,需要確定 \vec{r} (0) 以及 \vec{v} (0) = \frac{d}{dt} \vec{r}  (0) 構成之『運動狀態』呢??!!
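以簡諧振子為例,把牛頓第二定律寫成初值問題求解(Python 示意草稿;m、k 與初始『運動狀態』皆為筆者假設):

```python
# 牛頓第二定律 m x'' = -k x 作為初值問題
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0

def rhs(t, state):
    x, v = state                 # 運動狀態 = (位置, 速度)
    return [v, -k / m * x]       # x' = v, v' = F/m

# 必須同時給定 r(0) 與 v(0),解才唯一確定
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
print(sol.sol(np.pi))            # 接近 [cos(2π), -2 sin(2π)] = [1, 0]
```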

如是當知

Laplace Transform

Introduction

The definition of the Laplace Transform that we will use is called a “one-sided” (or unilateral) Laplace Transform and is given by:

The Laplace Transform seems, at first, to be a fairly abstract and esoteric concept.  In practice, it allows one to (more) easily solve a huge variety of problems that involve linear systems, particularly differential equations.  It allows for compact representation of systems (via the “Transfer Function”), it simplifies evaluation of the convolution integral, and it turns problems involving differential equations into algebraic problems.  As indicated by the quotes in the animation above (from some students at Swarthmore College), it almost magically simplifies problems that otherwise are very difficult to solve.

There are a few things to note about the Laplace Transform.

  • The function f(t), which is a function of time, is transformed to a function F(s).  The function F(s) is a function of the Laplace variable, “s.”  We call this a Laplace domain function.  So the Laplace Transform takes a time domain function, f(t), and converts it into a Laplace domain function, F(s).
  • We use a lowercase letter for the function in the time domain, and an uppercase letter in the Laplace domain.
  • We say that F(s) is the Laplace Transform of f(t),

    or that f(t) is the inverse Laplace Transform of F(s),

    or that f(t) and F(s) are a Laplace Transform pair,
  • For our purposes the time variable, t, and time domain functions will always be real-valued.  The Laplace variable, s, and Laplace domain functions are complex.
  • Since the integral goes from 0 to ∞, the time variable, t, must not occur in the Laplace domain result (if it does, you made a mistake).  Note that none of the Laplace Transforms in the table have the time variable, t, in them.
  • The lower limit on the integral is written as 0^-.  This indicates that the lower limit of the integral is from just before t=0 (t=0^- indicates an infinitesimally small time before zero).  This is a fine point, but you will see that it is very important in two respects:
    • It lets us deal with the impulse function, δ(t).  If you don’t know anything about the impulse function yet, don’t worry, we’ll discuss it in some detail later.
    • It lets us consider the initial conditions of a system at t=0^-.   These are often much simpler to find than the initial conditions at t=0+ (which are needed by some other techniques used to solve differential equations).
  • Since the lower limit is zero, we will only be interested in the behavior of functions (and systems) for t≥0.
  • You will sometimes see discussed the “two-sided” (or bilateral) transform (with the lower limit written as -∞) or a one-sided transform with the lower limit written as 0+.  We will not use these forms and will not discuss them further.
  • Since the upper limit of the integral is ∞, we must ask ourselves if the Laplace Transform, F(s), even exists.  It turns out that the transform exists as long as f(t) doesn’t grow faster than an exponential function.  This includes all functions of interest to us, so we will not concern ourselves with existence.

……

© Copyright 2005 to 2015 Erik Cheever    This page may be freely used for educational purposes.

 

為何在意『當下』 0 之『前 0^{-}‧後 0^{+} 』的乎☆★