STEM Notes: Classical Mechanics — Rotors [5], "Circuit Theory" 5 [Inductance] III · Impedance · C

Does learning, too, have an impedance? As the saying goes:

Learning is like rowing a boat against the current: not to advance is to fall back.

That is its impedance! Yet it is not 'the greatest of obstacles', is it!?

Why so?

At the moment one wishes to advance but cannot, stuck between going forward and falling back,

one knows not what to do, and so feels the difficulty keenly?!

Hence we borrow the simple

RL circuit

A resistor–inductor circuit (RL circuit), or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor and is the simplest type of RL circuit.

A first-order RL circuit is one of the simplest analogue infinite impulse response electronic filters. It consists of a resistor and an inductor, either in series driven by a voltage source or in parallel driven by a current source.

Introduction

The fundamental passive linear circuit elements are the resistor (R), capacitor (C) and inductor (L). These circuit elements can be combined to form an electrical circuit in four distinct ways: the RC circuit, the RL circuit, the LC circuit and the RLC circuit with the abbreviations indicating which components are used. These circuits exhibit important types of behaviour that are fundamental to analogue electronics. In particular, they are able to act as passive filters. This article considers the RL circuit in both series and parallel as shown in the diagrams.

In practice, however, capacitors (and RC circuits) are usually preferred to inductors since they can be more easily manufactured and are generally physically smaller, particularly for higher values of components.

Both RC and RL circuits form a single-pole filter. Whether the reactive element (C or L) is in series or in parallel with the load dictates whether the filter is low-pass or high-pass.

Frequently RL circuits are used for DC power supplies to RF amplifiers, where the inductor is used to pass DC bias current and block the RF getting back into the power supply.

This article relies on knowledge of the complex impedance representation of inductors and on knowledge of the frequency domain representation of signals.

 

and yet, touching only the series circuit, it already gets 'so long an exposition':

Those who press on boldly and diligently should practise with the

Series circuit

Series RL circuit

By viewing the circuit as a voltage divider, we see that the voltage across the inductor is:

\displaystyle V_{L}(s)={\frac {Ls}{R+Ls}}V_{\mathrm {in} }(s)\,,

and the voltage across the resistor is:

\displaystyle V_{R}(s)={\frac {R}{R+Ls}}V_{\mathrm {in} }(s)\,.

Current

The current in the circuit is the same everywhere since the circuit is in series:

\displaystyle I(s)={\frac {V_{\mathrm {in} }(s)}{R+Ls}}\,.

Transfer functions

The transfer function to the inductor voltage is

\displaystyle H_{L}(s)={\frac {V_{L}(s)}{V_{\mathrm {in} }(s)}}={\frac {Ls}{R+Ls}}=G_{L}e^{j\phi _{L}}\,.

Similarly, the transfer function to the resistor voltage is

\displaystyle H_{R}(s)={\frac {V_{R}(s)}{V_{\mathrm {in} }(s)}}={\frac {R}{R+Ls}}=G_{R}e^{j\phi _{R}}\,.

The transfer function to the current is

\displaystyle H_{I}(s)={\frac {I(s)}{V_{\mathrm {in} }(s)}}={\frac {1}{R+Ls}}\,.

Poles and zeros

The transfer functions have a single pole located at

\displaystyle s=-{\frac {R}{L}}\,.

In addition, the transfer function for the inductor has a zero located at the origin.

Gain and phase angle

The gains across the two components are found by taking the magnitudes of the above expressions:

\displaystyle G_{L}={\big |}H_{L}(\omega ){\big |}=\left|{\frac {V_{L}(\omega )}{V_{\mathrm {in} }(\omega )}}\right|={\frac {\omega L}{\sqrt {R^{2}+\left(\omega L\right)^{2}}}}

and

\displaystyle G_{R}={\big |}H_{R}(\omega ){\big |}=\left|{\frac {V_{R}(\omega )}{V_{\mathrm {in} }(\omega )}}\right|={\frac {R}{\sqrt {R^{2}+\left(\omega L\right)^{2}}}}\,,

and the phase angles are:

\displaystyle \phi _{L}=\angle H_{L}(s)=\tan ^{-1}\left({\frac {R}{\omega L}}\right)

and

\displaystyle \phi _{R}=\angle H_{R}(s)=\tan ^{-1}\left(-{\frac {\omega L}{R}}\right)\,.
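These gain and phase expressions are easy to check numerically. Below is a minimal sketch in Python (NumPy assumed; the component values R and L are arbitrary examples of mine, not taken from the text):

import numpy as np

R, L = 100.0, 10e-3                     # hypothetical values: 100 ohm, 10 mH
w = np.logspace(1, 7, 601)              # angular frequency sweep (rad/s)

G_L = w*L / np.sqrt(R**2 + (w*L)**2)    # gain across the inductor
G_R = R   / np.sqrt(R**2 + (w*L)**2)    # gain across the resistor
phi_L = np.arctan(R / (w*L))            # inductor phase (rad)
phi_R = np.arctan(-w*L / R)             # resistor phase (rad)

# The two gains always satisfy G_L^2 + G_R^2 = 1,
# and both equal 1/sqrt(2) at the cutoff w = R/L.
assert np.allclose(G_L**2 + G_R**2, 1.0)
wc = R / L
print(wc, (wc*L)/np.sqrt(R**2 + (wc*L)**2))   # 10000.0  0.7071...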

Phasor notation

These expressions together may be substituted into the usual expression for the phasor representing the output:

\displaystyle {\begin{aligned}V_{L}&=G_{L}V_{\mathrm {in} }e^{j\phi _{L}}\\V_{R}&=G_{R}V_{\mathrm {in} }e^{j\phi _{R}}\end{aligned}}

Impulse response

The impulse response for each voltage is the inverse Laplace transform of the corresponding transfer function. It represents the response of the circuit to an input voltage consisting of an impulse or Dirac delta function.

The impulse response for the inductor voltage is

\displaystyle h_{L}(t)=\delta (t)-{\frac {R}{L}}e^{-t{\frac {R}{L}}}u(t)=\delta (t)-{\frac {1}{\tau }}e^{-{\frac {t}{\tau }}}u(t)\,,

where u(t) is the Heaviside step function and τ = L/R is the time constant.

Similarly, the impulse response for the resistor voltage is

\displaystyle h_{R}(t)={\frac {R}{L}}e^{-t{\frac {R}{L}}}u(t)={\frac {1}{\tau }}e^{-{\frac {t}{\tau }}}u(t)\,.

Zero-input response

The zero-input response (ZIR), also called the natural response, of an RL circuit describes the behavior of the circuit after it has reached constant voltages and currents and is disconnected from any power source. It is called the zero-input response because it requires no input.

The ZIR of an RL circuit is:

\displaystyle I(t)=I(0)e^{-{\frac {R}{L}}t}=I(0)e^{-{\frac {t}{\tau }}}\,.

Frequency domain considerations

These are frequency domain expressions. Analysis of them will show which frequencies the circuits (or filters) pass and reject. This analysis rests on a consideration of what happens to these gains as the frequency becomes very large and very small.

As ω → ∞:

\displaystyle G_{L}\to 1\quad {\mbox{and}}\quad G_{R}\to 0\,.

As ω → 0:

\displaystyle G_{L}\to 0\quad {\mbox{and}}\quad G_{R}\to 1\,.

This shows that, if the output is taken across the inductor, high frequencies are passed and low frequencies are attenuated (rejected). Thus, the circuit behaves as a high-pass filter. If, though, the output is taken across the resistor, high frequencies are rejected and low frequencies are passed. In this configuration, the circuit behaves as a low-pass filter. Compare this with the behaviour of the resistor output in an RC circuit, where the reverse is the case.

The range of frequencies that the filter passes is called its bandwidth. The point at which the filter attenuates the signal to half its unfiltered power is termed its cutoff frequency. This requires that the gain of the circuit be reduced to

\displaystyle G_{L}=G_{R}={\frac {1}{\sqrt {2}}}\,.

Solving the above equation yields

\displaystyle \omega _{\mathrm {c} }={\frac {R}{L}}{\mbox{ rad/s}}\quad {\mbox{or}}\quad f_{\mathrm {c} }={\frac {R}{2\pi L}}{\mbox{ Hz}}\,,

which is the frequency that the filter will attenuate to half its original power.

Clearly, the phases also depend on frequency, although this effect is less interesting generally than the gain variations.

As ω → 0:

\displaystyle \phi _{L}\to 90^{\circ }={\frac {\pi }{2}}{\mbox{ radians}}\quad {\mbox{and}}\quad \phi _{R}\to 0\,.

As ω → ∞:

\displaystyle \phi _{L}\to 0\quad {\mbox{and}}\quad \phi _{R}\to -90^{\circ }=-{\frac {\pi }{2}}{\mbox{ radians}}\,.

So at DC (0 Hz), the resistor voltage is in phase with the signal voltage while the inductor voltage leads it by 90°. As frequency increases, the resistor voltage comes to have a 90° lag relative to the signal and the inductor voltage comes to be in-phase with the signal.

Time domain considerations

The most straightforward way to derive the time domain behaviour is to use the Laplace transforms of the expressions for VL and VR given above. This effectively transforms jω → s. Assuming a step input (i.e., Vin = 0 before t = 0 and then Vin = V afterwards):

\displaystyle {\begin{aligned}V_{\mathrm {in} }(s)&=V\cdot {\frac {1}{s}}\\V_{L}(s)&=V\cdot {\frac {sL}{R+sL}}\cdot {\frac {1}{s}}\\V_{R}(s)&=V\cdot {\frac {R}{R+sL}}\cdot {\frac {1}{s}}\,.\end{aligned}}

Inductor voltage step-response.

Resistor voltage step-response.

Partial fraction expansion and the inverse Laplace transform yield:

\displaystyle {\begin{aligned}V_{L}(t)&=Ve^{-t{\frac {R}{L}}}\\V_{R}(t)&=V\left(1-e^{-t{\frac {R}{L}}}\right)\,.\end{aligned}}

Thus, the voltage across the inductor tends towards 0 as time passes, while the voltage across the resistor tends towards V, as shown in the figures. This is in keeping with the intuitive point that the inductor will only have a voltage across it as long as the current in the circuit is changing — as the circuit reaches its steady state, there is no further current change and ultimately no inductor voltage.

These equations show that a series RL circuit has a time constant, usually denoted τ = L/R, being the time it takes the voltage across the component to either fall (across the inductor) or rise (across the resistor) to within 1/e of its final value. That is, τ is the time it takes VL to reach V(1/e) and VR to reach V(1 − 1/e).

The rate of change is a fractional 1 − 1/e per τ. Thus, in going from t = Nτ to t = (N + 1)τ, the voltage will have moved about 63% of the way from its level at t = Nτ toward its final value. So the voltage across the inductor will have dropped to about 37% after τ, and essentially to zero (0.7%) after about 5τ. Kirchhoff's voltage law implies that the voltage across the resistor will rise at the same rate. When the voltage source is then replaced with a short circuit, the voltage across the resistor drops exponentially with t from V towards 0. The resistor will be discharged to about 37% after τ, and essentially fully discharged (0.7%) after about 5τ. Note that the current, I, in the circuit behaves as the voltage across the resistor does, via Ohm's law.
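The 63%/37% figures above can be confirmed in a few lines; a small sketch (Python with NumPy; V, R and L are hypothetical example values):

import numpy as np

V, R, L = 1.0, 100.0, 10e-3
tau = L / R                        # time constant of the series RL circuit

t = np.array([tau, 5*tau])
V_L = V * np.exp(-t/tau)           # inductor voltage after a step input
V_R = V * (1 - np.exp(-t/tau))     # resistor voltage after a step input

print(V_L / V)   # [0.3679 0.0067] -> ~37% after tau, ~0.7% after 5*tau
print(V_R / V)   # [0.6321 0.9933] -> ~63% after tau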

The delay in the rise or fall time of the circuit is in this case caused by the back-EMF from the inductor which, as the current flowing through it tries to change, prevents the current (and hence the voltage across the resistor) from rising or falling much faster than the time-constant of the circuit. Since all wires have some self-inductance and resistance, all circuits have a time constant. As a result, when the power supply is switched on, the current does not instantaneously reach its steady-state value, V/R. The rise instead takes several time-constants to complete. If this were not the case, and the current were to reach steady-state immediately, extremely strong inductive electric fields would be generated by the sharp change in the magnetic field — this would lead to breakdown of the air in the circuit and electric arcing, probably damaging components (and users).

These results may also be derived by solving the differential equation describing the circuit:

\displaystyle {\begin{aligned}V_{\mathrm {in} }&=IR+L{\frac {dI}{dt}}\\V_{R}&=V_{\mathrm {in} }-V_{L}\,.\end{aligned}}

The first equation is solved by using an integrating factor and yields the current which must be differentiated to give VL; the second equation is straightforward. The solutions are exactly the same as those obtained via Laplace transforms.

Short circuit equation

For short-circuit evaluation, the RL circuit is considered. The more general equation is:

\displaystyle v_{in}(t)=v_{L}(t)+v_{R}(t)=L{\frac {di}{dt}}+Ri

With initial condition:

\displaystyle i(0)=i_{0}

This can be solved by means of the Laplace transform:

\displaystyle V_{in}(s)=sLI-Li_{0}+RI

Thus:

\displaystyle I(s)={\frac {Li_{0}+V_{in}}{sL+R}}

Taking the inverse transform then returns:

\displaystyle i(t)=i_{0}e^{-{\frac {R}{L}}t}+{\mathcal {L}}^{-1}\left[{\frac {V_{in}}{sL+R}}\right]

If the source voltage is a Heaviside step function (DC):

\displaystyle v_{in}(t)=Eu(t)

this gives:

\displaystyle i(t)=i_{0}e^{-{\frac {R}{L}}t}+{\mathcal {L}}^{-1}\left[{\frac {E}{s(sL+R)}}\right]=i_{0}e^{-{\frac {R}{L}}t}+{\frac {E}{R}}\left(1-e^{-{\frac {R}{L}}t}\right)
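The inverse transform claimed here can be checked symbolically; a sketch using SymPy (the symbol names are mine, and SymPy attaches Heaviside(t) factors to the causal result):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
R, L, E, i0 = sp.symbols('R L E i_0', positive=True)

# transformed current for a DC step: I(s) = (L*i0 + E/s) / (s*L + R)
I = (L*i0 + E/s) / (s*L + R)
i = sp.inverse_laplace_transform(I, s, t)
print(sp.simplify(i))
# expected: i_0*exp(-R*t/L) + (E/R)*(1 - exp(-R*t/L)), up to Heaviside(t)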

If the source voltage is a sinusoidal function (AC):

\displaystyle v_{in}(t)=E\sin(\omega t)\Rightarrow V_{in}(s)={\frac {E\omega }{s^{2}+\omega ^{2}}}

this gives:

\displaystyle i(t)=i_{0}e^{-{\frac {R}{L}}t}+{\mathcal {L}}^{-1}\left[{\frac {E\omega }{(s^{2}+\omega ^{2})(sL+R)}}\right]=i_{0}e^{-{\frac {R}{L}}t}+{\mathcal {L}}^{-1}\left[{\frac {E\omega }{2j\omega }}\left({\frac {1}{s-j\omega }}-{\frac {1}{s+j\omega }}\right){\frac {1}{(sL+R)}}\right]
\displaystyle =i_{0}e^{-{\frac {R}{L}}t}+{\frac {E}{2jL}}{\mathcal {L}}^{-1}\left[{\frac {1}{s+{\frac {R}{L}}}}\left({\frac {1}{{\frac {R}{L}}-j\omega }}-{\frac {1}{{\frac {R}{L}}+j\omega }}\right)+{\frac {1}{s-j\omega }}{\frac {1}{{\frac {R}{L}}+j\omega }}-{\frac {1}{s+j\omega }}{\frac {1}{{\frac {R}{L}}-j\omega }}\right]
\displaystyle =i_{0}e^{-{\frac {R}{L}}t}+{\frac {E}{2jL}}e^{-{\frac {R}{L}}t}2j{\text{Im}}\left[{\frac {1}{{\frac {R}{L}}-j\omega }}\right]+{\frac {E}{2jL}}2j{\text{Im}}\left[e^{j\omega t}{\frac {1}{{\frac {R}{L}}+j\omega }}\right]
\displaystyle =i_{0}e^{-{\frac {R}{L}}t}+{\frac {E\omega }{L\left(\left({\frac {R}{L}}\right)^{2}+\omega ^{2}\right)}}e^{-{\frac {R}{L}}t}+{\frac {E}{L\left(\left({\frac {R}{L}}\right)^{2}+\omega ^{2}\right)}}\left({\frac {R}{L}}\sin(\omega t)-\omega \cos(\omega t)\right)
\displaystyle =i_{0}e^{-{\frac {R}{L}}t}+{\frac {E\omega }{L\left(\left({\frac {R}{L}}\right)^{2}+\omega ^{2}\right)}}e^{-{\frac {R}{L}}t}+{\frac {E}{L{\sqrt {\left({\frac {R}{L}}\right)^{2}+\omega ^{2}}}}}\sin \left(\omega t-\tan ^{-1}\left({\frac {\omega L}{R}}\right)\right)
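As a cross-check on this closed form, one can integrate the original equation L di/dt + R i = E sin(ωt) numerically and compare; a sketch with SciPy (all parameter values are arbitrary examples):

import numpy as np
from scipy.integrate import odeint

R, L, E, w, i0 = 10.0, 0.1, 5.0, 100.0, 0.2   # hypothetical parameters

def di_dt(i, t):                 # L di/dt + R i = E sin(w t)
    return (E*np.sin(w*t) - R*i) / L

t = np.linspace(0.0, 0.2, 2001)
i_num = odeint(di_dt, i0, t).ravel()

a = R / L
i_formula = (i0*np.exp(-a*t)
             + E*w/(L*(a**2 + w**2)) * np.exp(-a*t)
             + E/(L*np.sqrt(a**2 + w**2)) * np.sin(w*t - np.arctan(w*L/R)))

print(np.max(np.abs(i_num - i_formula)))   # tiny residual: formula confirmed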

 

Please make good use of your tools

to cross these perilous rapids ☆☆

 

 

 

 

 

 

 

STEM Notes: Classical Mechanics — Rotors [5], "Circuit Theory" 5 [Inductance] III · Impedance · B

The Twelve Nidānas

From the Pratītyasamutpāda Sūtra (緣起經), translated by Xuanzang:

The Buddha said: what is the first meaning of dependent arising? It means: because this exists, that exists; because this arises, that arises. Namely: ignorance conditions formations, formations condition consciousness, consciousness conditions name-and-form, name-and-form conditions the six sense bases, the six sense bases condition contact, contact conditions feeling, feeling conditions craving, craving conditions grasping, grasping conditions becoming, becoming conditions birth, and birth conditions aging-and-death, whence arise sorrow, lamentation, pain, grief and despair. This is called the arising of the whole great mass of suffering, and this is called the first meaning of dependent arising.

 

In 'logic' one says: 'if there is □ then there is ○; if there is no ○ then there is no □'. Having already 'got □' while wishing for 'no ○', how could one avoid contradiction! Back in the Wei–Jin era, Wang Bi put it thus: One is the beginning of number and the utmost of things. It is called the wondrous existent because, if you would call it existent, you see no form of it, so it is not existent, and we call it nothing; yet if you would call it nothing, things come into being by means of it, so it is not nothing, and we call it existent. It is thus an existence within nothingness, called the wondrous existent (妙有).

假使用『恆等式1 - x^n = (1 - x)(1 + x + \cdots + x^{n-1})

來計算 \frac{1 + x + \cdots + x^{m-1}}{1 + x + \cdots + x^{n-1}}

將等於 \frac{1 - x^m}{1 - x^n} = (1 - x^m) \left[1 + (x^n) + { (x^n) }^2 + { (x^n) } ^3 + \cdots \right] = 1 - x^m + x^n - x^{n+m} + x^{2n} - \cdots

那麼 1 - 1 + 1 - 1 + \cdots 難道不應該『等於\frac{m}{n} 的嗎?

一七四三年時,『伯努利』正因此而反對『歐拉』所講的『可加性』說法,『』一個級數怎麼可能有『不同』的『』的呢??作者不知如果在太空裡,乘坐著『加速度』是 g 的太空船,在上面用著『樹莓派』控制的『奈米手』來擲『骰子』,是否一定能得到『相同點數』呢?難道說『牛頓力學』不是只要『初始態』是『相同』的話,那個『骰子』的『軌跡』必然就是『一樣』的嗎 ??據聞,法國義大利裔大數學家『約瑟夫‧拉格朗日』伯爵 Joseph Lagrange 倒是有個『說法』︰事實上,對於『不同』的 m,n 來講, 從『幂級數』來看,那個 = 1 - x^m + x^n - x^{n+m} + x^{2n} - \cdots 是有『零的間隙』的 1 + 0 + 0 + \cdots - 1 + 0 + 0 + \cdots,這就與 1 - 1 + 1 - 1 + \cdots形式』上『不同』,我們怎麼能『先驗』的『期望』結果會是『相同』的呢!!

假使我們將『幾何級數1 + z + z^2 + \cdots + z^n + \cdots = \frac{1}{1 - z} ,擺放到『複數平面』之『單位圓』上來『研究』,輔之以『歐拉公式z = e^{i \theta} = \cos \theta + i\sin \theta,或許可以略探『可加性』理論的『意指』。當 0 < \theta < 2 \pi 時,\cos \theta \neq 1 ,雖然 |e^{i \theta}| = 1,我們假設那個『幾何級數』會收斂,於是得到 1 + e^{i \theta} + e^{2i \theta} + \cdots = \frac{1}{1 - e^{i \theta}} = \frac{1}{2} + \frac{1}{2} i \cot \frac{\theta}{2},所以 \frac{1}{2} + \cos{\theta} + \cos{2\theta} + \cos{3\theta} + \cdots = 0 以及 \sin{\theta} + \sin{2\theta} + \sin{3\theta} + \cdots = \frac{1}{2} \cot \frac{\theta}{2}。如果我們用 \theta = \phi + \pi 來『代換』,此時 -\pi < \phi < \pi,可以得到【一】 \frac{1}{2} - \cos{\phi} + \cos{2\phi} - \cos{3\phi} + \cdots = 0 和【二】 \sin{\phi} - \sin{2\phi} + \sin{3\phi} - \cdots = \frac{1}{2} \tan \frac{\phi}{2}。要是在【一】式中將 \phi 設為『』的話,我們依然會有 1 - 1 + 1 - 1 + \cdots = \frac{1}{2} ;要是驗之以【二】式,當 \phi = \frac{\pi}{2} 時,原式可以寫成 1 - 0  - 1 - 0 + 1 - 0 - 1 - 0 + \cdots = \frac{1}{2}。如此看來 s = 1 + z + z^2 + z^3 + \cdots  = 1 +z s 的『形式運算』,可能是有更深層的『關聯性』的吧!!

[Figures: trigonometric functions on the circle; the unit circle in the complex plane; angles on the unit circle; a periodic sine wave.]

Suppose we differentiate equation [2] term by term with respect to \phi, obtaining \cos{\phi} - 2\cos{2\phi} + 3\cos{3\phi} - \cdots = \frac{1}{4} \frac{1}{{(\cos \frac{\phi}{2})}^2}; setting \phi = 0 then gives 1 - 2 + 3 - 4 + 5 - \cdots = \frac{1}{4}. If we rewrite equation [1] as \cos{\phi} - \cos{2\phi} + \cos{3\phi} - \cdots = \frac{1}{2} and integrate term by term \int \limits_{0}^{\theta}, changing the variable \theta back to \phi afterwards, we obtain \sin{\phi} - \frac{\sin{2\phi}}{2} + \frac{\sin{3\phi}}{3} - \cdots = \frac{\phi}{2}; integrating term by term \int \limits_{0}^{\theta} once more, and again renaming \theta back to \phi, we obtain 1 - \cos{\phi} - \frac{1 - \cos{2\phi}}{2^2} + \frac{1 - \cos{3\phi}}{3^2} - \cdots = \frac{\phi^2}{4}, so that at \phi = \pi, 1 + \frac{1}{3^2} + \frac{1}{5^2} + \cdots = \frac{\pi^2}{8}. But 1 + \frac{1}{3^2} + \frac{1}{5^2} + \cdots =  [1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots] - [\frac{1}{2^2} + \frac{1}{4^2} + \frac{1}{6^2} + \cdots] =[1 - \frac{1}{4}][1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots] , and so we arrive at the answer to the 'Basel problem': \sum \limits_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}. Then, with

S= \ \ 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + \cdots
4S=\ \ \ \ \ \ 4 + \ \ \ \ \ 8 + \ \ \ \ \ 12 + \cdots , subtracting gives
-3S= 1 - 2 + 3 - 4 + 5 - 6 + \cdots = \frac{1}{4}, and so S = - \frac{1}{12}.

But is this way of working really 'justified'? By the 'definition' of the 'limit of a series', if the 'partial sums' S_n = \sum \limits_{k=0}^{n} a_k have a 'limit' S = \lim \limits_{n \to \infty} S_n, can S fail to satisfy S = a_0 + a_1 + a_2 + a_3 + \cdots = a_0 + (S - a_0)? Or could it be that \sum \limits_{n=0}^{\infty} k \cdot a_n \neq k \cdot S? Or again, given that S^{\prime} = \sum \limits_{n=0}^{\infty} b_n, might it still happen that \sum \limits_{n=0}^{\infty} (a_n + b_n) \neq S + S^{\prime}? If none of these can occur, then the 'concept' of 'summability' may be regarded as an 'enlargement' of the old 'viewpoint' of the 'limit of a series', containing it as a special case! Perhaps we ought to adopt some other 'notation' to 'express' it, lest writing 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + \cdots = - \frac{1}{12} outright invite 'misunderstanding'; after all, many different 'summation methods' exist! As for the 'interpretation' of those 'summation methods', that is up to the 'user'!!

─── 《【SONIC Π】電聲學之電路學《四》之《 V!》‧下
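What these 'summation methods' do can be made concrete in a few lines: read a formal sum \sum a_n as the limit of \sum a_n x^n as x \to 1^-. A plain-Python illustration (the helper name is mine):

# Abel summation: interpret sum(a_n) as the limit of sum(a_n * x**n), x -> 1-
def abel_sum(coeff, x, n_terms=200000):
    return sum(coeff(n) * x**n for n in range(n_terms))

for x in (0.9, 0.99, 0.999):
    g = abel_sum(lambda n: (-1)**n, x)            # 1 - 1 + 1 - 1 + ...
    h = abel_sum(lambda n: (-1)**n * (n + 1), x)  # 1 - 2 + 3 - 4 + ...
    print(x, g, h)       # g -> 1/2 and h -> 1/4 as x approaches 1
# Ordinary partial sums of these series never settle; the Abel limits do.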

 

Legend has it that a philosopher once said:

Newton's 'law of motion' \vec{F} = m \cdot \vec{a} is but an instance of the 'law of cause and effect'.

Arguing from this, could it be that

nature once abhorred not merely the vacuum but, later, motion itself, and that impedance was thus born?

Suppose we examine the special form

\frac{Cause}{Effect} = Impedance

Can it be used to compute \frac{\vec{F}}{\vec{a}} , obtaining m?

And if

\frac{d}{dt} \vec{p} = \frac{d}{dt} m \cdot \vec{v} = \frac{d \ m}{dt} \cdot \vec{v} + m \cdot \frac{d \ \vec{v}}{dt} = \vec{F}

how should one argue then??

If one does not know what 'reactance' refers to:

Impedance

Electrical impedance is the collective name for the opposition that resistance, inductance and capacitance in a circuit present to alternating current. Impedance is a complex number: the real part is called resistance, the imaginary part reactance. The opposition a capacitor presents to alternating current is called capacitive reactance, and the opposition an inductor presents is called inductive reactance; capacitive and inductive reactance together are called reactance. Impedance extends the concept of resistance to the realm of AC circuits, describing not only the relative amplitudes of voltage and current but also their relative phase. When the current through a circuit is direct current, resistance and impedance coincide, and resistance may be viewed as impedance with zero phase. The concept of impedance appears not only in circuits but also in mechanical vibrating systems.

Impedance is usually denoted by the symbol \displaystyle Z . Being a complex number, it can be expressed as the phasor \displaystyle Z_{m}\angle \theta or \displaystyle Z_{m}e^{j\theta } , where \displaystyle Z_{m} is the magnitude of the impedance and \displaystyle \theta its phase; this way of writing it is called the 'phasor representation'.

Concretely, impedance is defined as the frequency-domain ratio of voltage to current [1]. The magnitude \displaystyle Z_{m} is the ratio of the absolute amplitudes of voltage and current, and the phase \displaystyle \theta is the phase difference between voltage and current. In SI units, impedance is measured in ohms (Ω), the same unit as resistance. The reciprocal of impedance is the admittance, the frequency-domain ratio of current to voltage, measured in siemens (formerly: mho).

The English term "impedance" was introduced by the physicist Oliver Heaviside in an 1886 paper in The Electrician [2][3]. In 1893 the electrical engineer Arthur Kennelly was the first to represent impedance by complex numbers [4].

 

and, relying merely on the 'phasors' V, I, Z, generalizes

v(t) = R \cdot i(t)

i(t) = C \cdot \frac{d}{dt} v(t)

v(t) = L \cdot \frac{d}{dt} i(t)

into

\frac{V}{I} = Z , then one is, I fear, apt to gloss over the following.

Resistance vs reactance

Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:

\displaystyle {\begin{aligned}|Z|&={\sqrt {ZZ^{*}}}={\sqrt {R^{2}+X^{2}}}\\\theta &=\arctan {\left({\frac {X}{R}}\right)}\end{aligned}}

In many applications, the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.
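For instance, the conversion between (R, X) and (|Z|, θ) is just complex-number arithmetic; a two-line check in Python (the example values are mine):

import cmath

R, X = 3.0, 4.0               # a hypothetical 3 + 4j ohm impedance
Z = complex(R, X)
print(abs(Z))                 # 5.0 = sqrt(R**2 + X**2)
print(cmath.phase(Z))         # 0.9273 rad = arctan(X/R)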

Resistance

 

Resistance \displaystyle \scriptstyle R is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.

\displaystyle \ R=|Z|\cos {\theta }\quad

Reactance

 

Reactance \displaystyle \scriptstyle X is the imaginary part of the impedance; a component with a finite reactance induces a phase shift \displaystyle \scriptstyle \theta between the voltage across it and the current through it.

\displaystyle \ X=|Z|\sin {\theta }\quad

A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance will not dissipate any power.

Capacitive reactance

 

A capacitor has a purely reactive impedance which is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric.

\displaystyle X_{C}=-(\omega C)^{-1}=-(2\pi fC)^{-1}\quad

The minus sign indicates that the imaginary part of the impedance is negative.

At low frequencies, a capacitor approaches an open circuit so no current flows through it.

A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.

Driven by an AC supply, a capacitor will only accumulate a limited amount of charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current.

Inductive reactance

 

Inductive reactance \displaystyle \scriptstyle {X_{L}} is proportional to the signal frequency \displaystyle \scriptstyle {f} and the inductance \displaystyle \scriptstyle {L} .

\displaystyle X_{L}=\omega L=2\pi fL\quad

An inductor consists of a coiled conductor. Faraday’s law of electromagnetic induction gives the back emf \displaystyle \scriptstyle {\mathcal {E}} (voltage opposing current) due to a rate-of-change of magnetic flux density \displaystyle \scriptstyle {B} through a current loop.

\displaystyle {\mathcal {E}}=-{{d\Phi _{B}} \over dt}\quad

For an inductor consisting of a coil with \displaystyle N loops this gives:

\displaystyle {\mathcal {E}}=-N{d\Phi _{B} \over dt}\quad

The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency.

Total reactance

The total reactance is given by

\displaystyle {X=X_{L}+X_{C}} (note that \displaystyle X_{C} is negative)

so that the total impedance is

\displaystyle \ Z=R+jX
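Putting the last three formulas together for a concrete, made-up series R–L–C example at one frequency:

import math

f = 1000.0                        # hypothetical 1 kHz signal
w = 2 * math.pi * f
R, L, C = 50.0, 10e-3, 1e-6       # example component values

X_L = w * L                       # inductive reactance (positive)
X_C = -1 / (w * C)                # capacitive reactance (negative)
Z = complex(R, X_L + X_C)         # total impedance Z = R + j(X_L + X_C)

print(Z)                                                 # about (50-96.3j) ohm
print(abs(Z), math.degrees(math.atan2(Z.imag, Z.real)))  # magnitude and phase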

 

With the text's recurring talk of 'the opposition to', one may, I fear, find it hard to make out what the following passage

Variable impedance

In general, neither impedance nor admittance can be time varying as they are defined for complex exponentials for –∞ < t < +∞. If the complex exponential voltage–current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage–current ratios that appear to be linear time-invariant (LTI) for small signals over small observation windows; hence, they can be roughly described as having a time-varying impedance. That is, this description is an approximation; over large signal swings or observation windows, the voltage–current relationship is non-LTI and cannot be described by impedance.

 

is really saying ★

 

 

 

 

 

 

 

 

STEM Notes: Classical Mechanics — Rotors [5], "Circuit Theory" 5 [Inductance] III · Impedance · A · Part 2

[Figures: Casimir plates in a 'sea' of vacuum fluctuations; parallel Casimir plates; a water-wave analogue of the Casimir effect (video).]

In 1948 the Dutch physicist Hendrik Casimir put forward the argument that 'the vacuum is not empty'. According to 'quantum field theory', even the 'vacuum' must have a 'lowest energy level', so the 'vacuum energy', whether or not one invokes the 'creation and annihilation' of 'real and virtual' particles, must still have a 'quantum state'. Since the 'principal binding force' of 'atoms' and 'molecules' is known to be the 'electromagnetic force', how is the 'quantization' of the 'vacuum' supposed to 'fit' the 'reality' of 'matter'? He therefore 'computed' the possible 'magnitude' of such an 'effect'; yet whatever kind of 'oscillation' one takes to cause it, he still had to face the 'problem' of the 'infinitely many resonant modes' \langle E \rangle = \frac{1}{2} \sum \limits_{n}^{\infty} E_n , that is, the 'problem' of how many 'photons?' of each energy participate 'on average', h\nu + 2h\nu + 3h\nu + \cdots . It is said that Casimir, using the 'summation methods' of Euler and others, obtained {F_c \over A} = -\frac {\hbar c \pi^2} {240 a^4}

 

Here the '−' sign stands for an 'attractive force', and by now this has been experimentally 'confirmed'. One truly wonders: did the 'universe' really have a 'plan' laid down in advance, or are 'people' still 'fantasizing' on their own??

─── 《【SONIC Π】電聲學之電路學《四》之《 V!》‧下

 

I too was once enchanted by the marvels of nature! When I first heard the thesis that 'the vacuum is not empty', I pondered carefully whether one could logically derive from it the proposition 'form is not different from emptiness; emptiness is not different from form', and gave up in the end because the 'exact referents' of 'emptiness' and 'form' are so hard to pin down!!

It reminds me instead of an earlier puzzlement: how can 'light' be an 'electromagnetic wave'? It can even, by means of a

Dipole

In electromagnetism there are two kinds of dipole: an electric dipole consists of two electric charges of equal magnitude and opposite sign separated by some distance; a magnetic dipole is a closed loop of circulating electric current, for example a coil carrying a steady current. The properties of a dipole are described by its dipole moment.

The electric dipole moment (\displaystyle \mathbf {p} ) points from the negative charge toward the positive charge, with magnitude equal to the positive charge times the distance between the charges. The direction of the magnetic dipole moment ( \displaystyle \mathbf {m} ), by the right-hand rule, is the direction in which the thumb points out of the plane of the current loop while the other fingers follow the current; its magnitude equals the current times the area of the loop.

Besides current loops, the electron and many other elementary particles possess magnetic dipole moments. They all produce a magnetic field identical to that of a very small current loop. The prevailing scientific view, however, holds that this magnetic dipole moment is an intrinsic property of the electron rather than something generated by a current loop.

The magnetic dipole moment of a permanent magnet comes from the intrinsic magnetic dipole moments of its electrons. An elongated permanent magnet is called a bar magnet; its two ends are called its north-seeking and south-seeking poles, and its dipole moment points from the south-seeking pole toward the north-seeking pole. This convention happens to be opposite to that of the Earth's dipole moment, which points from the geomagnetic north pole toward the geomagnetic south pole. The geomagnetic north pole lies near the geographic North Pole and is in fact a south-seeking pole, attracting the north-seeking pole of a magnet; the geomagnetic south pole lies near the geographic South Pole and is in fact a north-seeking pole, attracting the south-seeking pole of a magnet. The north-seeking pole of a compass needle points toward the geomagnetic north pole; a bar magnet can serve as a compass, its north-seeking pole pointing toward the geomagnetic north pole.

By present observations, there are only two mechanisms that produce magnetic dipoles: current loops and quantum-mechanical spin. Scientists have never found experimental evidence for the existence of magnetic monopoles.

The Earth's magnetic field can be approximated by the field of a magnetic dipole, but in such figures the symbols N and S mark the geographic North and South Poles respectively, which easily invites confusion. In fact the Earth's dipole moment runs from the geomagnetic north pole near the geographic North Pole to the geomagnetic south pole near the geographic South Pole, whereas a dipole's own direction runs from its south-seeking pole to its north-seeking pole.

[Figure: contour plot of an electric dipole; the equipotential surfaces are clearly distinguished in the figure.]

 

'radiate' clean across empty space??

Dipole radiation

 

In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension, or a more physical next-step, to spherical wave radiation.

In particular, consider a harmonically oscillating electric dipole, with angular frequency ω and a dipole moment p0 along the ẑ direction of the form

\displaystyle \mathbf {p} (\mathbf {r} ,t)=\mathbf {p} (\mathbf {r} )e^{-i\omega t}=p_{0}{\hat {\mathbf {z} }}e^{-i\omega t}.

In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation as:

\displaystyle {\begin{aligned}\mathbf {E} &={\frac {1}{4\pi \varepsilon _{0}}}\left\{{\frac {\omega ^{2}}{c^{2}r}}\left({\hat {\mathbf {r} }}\times \mathbf {p} \right)\times {\hat {\mathbf {r} }}+\left({\frac {1}{r^{3}}}-{\frac {i\omega }{cr^{2}}}\right)\left(3{\hat {\mathbf {r} }}\left[{\hat {\mathbf {r} }}\cdot \mathbf {p} \right]-\mathbf {p} \right)\right\}e^{\frac {i\omega r}{c}}e^{-i\omega t}\\\mathbf {B} &={\frac {\omega ^{2}}{4\pi \varepsilon _{0}c^{3}}}({\hat {\mathbf {r} }}\times \mathbf {p} )\left(1-{\frac {c}{i\omega r}}\right){\frac {e^{i\omega r/c}}{r}}e^{-i\omega t}.\end{aligned}}

For rω/c ≫ 1, the far-field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross-product:[9]

\displaystyle {\begin{aligned}\mathbf {B} &={\frac {\omega ^{2}}{4\pi \varepsilon _{0}c^{3}}}({\hat {\mathbf {r} }}\times \mathbf {p} ){\frac {e^{i\omega (r/c-t)}}{r}}={\frac {\omega ^{2}\mu _{0}p_{0}}{4\pi c}}({\hat {\mathbf {r} }}\times {\hat {\mathbf {z} }}){\frac {e^{i\omega (r/c-t)}}{r}}=-{\frac {\omega ^{2}\mu _{0}p_{0}}{4\pi c}}\sin(\theta ){\frac {e^{i\omega (r/c-t)}}{r}}\mathbf {\hat {\phi }} \\\mathbf {E} &=c\mathbf {B} \times {\hat {\mathbf {r} }}=-{\frac {\omega ^{2}\mu _{0}p_{0}}{4\pi }}\sin(\theta )\left({\hat {\phi }}\times \mathbf {\hat {r}} \right){\frac {e^{i\omega (r/c-t)}}{r}}=-{\frac {\omega ^{2}\mu _{0}p_{0}}{4\pi }}\sin(\theta ){\frac {e^{i\omega (r/c-t)}}{r}}{\hat {\theta }}.\end{aligned}}

The time-averaged Poynting vector

\displaystyle \langle \mathbf {S} \rangle =\left({\frac {\mu _{0}p_{0}^{2}\omega ^{4}}{32\pi ^{2}c}}\right){\frac {\sin ^{2}(\theta )}{r^{2}}}\mathbf {\hat {r}}

is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. In fact, the spherical harmonic function (sin θ) responsible for such toroidal angular distribution is precisely the l = 1 “p” wave.

The total time-average power radiated by the field can then be derived from the Poynting vector as

\displaystyle P={\frac {\mu _{0}\omega ^{4}p_{0}^{2}}{12\pi c}}.

Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, and underlies the fact that the sky consists of mainly blue colour.
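A quick order-of-magnitude illustration of that fourth-power dependence (the wavelengths are rough band centres I chose, not values from the text): since ω = 2πc/λ, the radiated-power ratio for two wavelengths is (λ₂/λ₁)⁴.

# P is proportional to w**4 and w = 2*pi*c/lam,
# so P_blue / P_red = (lam_red / lam_blue)**4
lam_blue, lam_red = 450e-9, 650e-9    # metres, rough band centres
print((lam_red / lam_blue) ** 4)      # ~4.35: blue scatters ~4x more strongly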

A circularly polarized dipole is described as a superposition of two linear dipoles.

 

Who, pray, could have known beforehand that 'even the vacuum has an impedance' ☻?

Impedance of free space

The impedance of free space, Z0, is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is, Z0 = |E|/|H|, where |E| is the electric field strength and |H| is the magnetic field strength. It has an exactly defined value

\displaystyle Z_{0}=(119.916~983~2)\pi ~\Omega \approx 376.730~313~461~77\ldots ~\Omega .

The impedance of free space (more correctly, the wave impedance of a plane wave in free space) equals the product of the vacuum permeability μ0 and the speed of light in vacuum c0. Since the values of these constants are exact (they are given in the definitions of the ampere and the metre respectively), the value of the impedance of free space is likewise exact.

Terminology

The analogous quantity for a plane wave travelling through a dielectric medium is called the intrinsic impedance of the medium, and designated η (eta). Hence Z0 is sometimes referred to as the intrinsic impedance of free space,[1] and given the symbol η0.[2] It has numerous other synonyms, including:

  • wave impedance of free space,[3]
  • the vacuum impedance,[4]
  • intrinsic impedance of vacuum,[5]
  • characteristic impedance of vacuum,[6]
  • wave resistance of free space.[7]

Relation to other constants

From the above definition, and the plane wave solution to Maxwell’s equations,

\displaystyle Z_{0}={\frac {E}{H}}=\mu _{0}c_{0}={\sqrt {\frac {\mu _{0}}{\varepsilon _{0}}}}={\frac {1}{\varepsilon _{0}c_{0}}},

where

μ0 is the magnetic constant,
ε0 is the electric constant,
c0 is the speed of light in free space.[8][9]

The reciprocal of Z0 is sometimes referred to as the admittance of free space and represented by the symbol Y0.

Exact value

Since 1948, the definition of the SI unit ampere has relied upon choosing the numerical value of μ0 to be exactly 4π × 10−7 H/m. Similarly, since 1983 the SI metre has been defined relative to the second by choosing the value of c0 to be 299792458 m/s. Consequently,

\displaystyle Z_{0}=\mu _{0}c_{0}=119.916\,9832\,\pi ~\Omega exactly,

or

\displaystyle Z_{0}\approx 376.730\,313\,461\,77\ldots ~\Omega .
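The equivalent expressions for Z0 can be checked against the physical constants shipped with SciPy (agreement in the last digits depends on the CODATA vintage of your SciPy install, since μ0 is no longer exact in the post-2019 SI):

import math
from scipy.constants import mu_0, epsilon_0, c

print(mu_0 * c)                        # ~376.730313461... ohm
print(math.sqrt(mu_0 / epsilon_0))     # same value
print(1 / (epsilon_0 * c))             # same value
print(119.9169832 * math.pi)           # the (119.9169832)*pi form quoted above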

This chain of dependencies will change if the ampere is redefined in 2018. See New SI definitions.

 

Only then does one understand that 'experiment', standing firmly on 'fact', is the true 'scientific spirit' ☺!

Wheatstone bridge

Wheatstone bridge circuit diagram. The unknown resistance Rx is to be measured; resistances R1, R2 and R3 are known and R2 is adjustable. If the measured voltage VG is 0, then R2/R1 = Rx/R3.

A Wheatstone bridge is an electrical circuit used to measure an unknown electrical resistance by balancing two legs of a bridge circuit, one leg of which includes the unknown component. The primary benefit of the circuit is its ability to provide extremely accurate measurements (in contrast with something like a simple voltage divider).[1] Its operation is similar to the original potentiometer.

The Wheatstone bridge was invented by Samuel Hunter Christie in 1833 and improved and popularized by Sir Charles Wheatstone in 1843. One of the Wheatstone bridge’s initial uses was for the purpose of soils analysis and comparison.[2]

Operation

In the figure,  \displaystyle \scriptstyle R_{x} is the unknown resistance to be measured;  \displaystyle \scriptstyle R_{1}, \displaystyle \scriptstyle R_{2}, and \displaystyle \scriptstyle R_{3} are resistors of known resistance and the resistance of \displaystyle \scriptstyle R_{2} is adjustable. The resistance \displaystyle \scriptstyle R_{2} is adjusted until the bridge is “balanced” and no current flows through the galvanometer \displaystyle \scriptstyle V_{g} . At this point, the voltage between the two midpoints (B and D) will be zero. Therefore the ratio of the two resistances in the known leg \displaystyle \scriptstyle (R_{2}/R_{1}) is equal to the ratio of the two in the unknown leg \displaystyle \scriptstyle (R_{x}/R_{3}) . If the bridge is unbalanced, the direction of the current indicates whether \displaystyle \scriptstyle R_{2} is too high or too low.

At the point of balance,

\displaystyle {\begin{aligned}{\frac {R_{2}}{R_{1}}}&={\frac {R_{x}}{R_{3}}}\\[4pt]\Rightarrow R_{x}&={\frac {R_{2}}{R_{1}}}\cdot R_{3}\end{aligned}}

Detecting zero current with a galvanometer can be done to extremely high precision. Therefore, if \displaystyle \scriptstyle R_{1}, \displaystyle \scriptstyle R_{2}, and \displaystyle \scriptstyle R_{3} are known to high precision, then \displaystyle \scriptstyle R_{x} can be measured to high precision. Very small changes in \displaystyle \scriptstyle R_{x} disrupt the balance and are readily detected.
Alternatively, if \displaystyle \scriptstyle R_{1}, \displaystyle \scriptstyle R_{2}, and \displaystyle \scriptstyle R_{3} are known, but \displaystyle \scriptstyle R_{2} is not adjustable, the voltage difference across or current flow through the meter can be used to calculate the value of \displaystyle \scriptstyle R_{x}, using Kirchhoff’s circuit laws. This setup is frequently used in strain gauge and resistance thermometer measurements, as it is usually faster to read a voltage level off a meter than to adjust a resistance to zero the voltage.

Derivation

First, Kirchhoff’s first law is used to find the currents in junctions B and D:

\displaystyle {\begin{aligned}I_{3}-I_{x}+I_{G}&=0\\I_{1}-I_{2}-I_{G}&=0\end{aligned}}

Then, Kirchhoff’s second law is used for finding the voltage in the loops ABD and BCD:

\displaystyle {\begin{aligned}(I_{3}\cdot R_{3})-(I_{G}\cdot R_{G})-(I_{1}\cdot R_{1})&=0\\(I_{x}\cdot R_{x})-(I_{2}\cdot R_{2})+(I_{G}\cdot R_{G})&=0\end{aligned}}

When the bridge is balanced, then IG = 0, so the second set of equations can be rewritten as:

\displaystyle {\begin{aligned}I_{3}\cdot R_{3}&=I_{1}\cdot R_{1}\qquad (1)\\I_{x}\cdot R_{x}&=I_{2}\cdot R_{2}\qquad (2)\end{aligned}}

Then, equation (1) is divided by equation (2) and the result rearranged, giving:

\displaystyle R_{x}={{R_{2}\cdot I_{2}\cdot I_{3}\cdot R_{3}} \over {R_{1}\cdot I_{1}\cdot I_{x}}}

From the first law, I3 = Ix and I1 = I2. The desired value of Rx is now known to be given as:

\displaystyle R_{x}={{R_{3}\cdot R_{2}} \over {R_{1}}}

If all four resistor values and the supply voltage (VS) are known, and the resistance of the galvanometer is high enough that IG is negligible, the voltage across the bridge (VG) can be found by working out the voltage from each potential divider and subtracting one from the other. The equation for this is:

\displaystyle V_{G}=\left({R_{2} \over {R_{1}+R_{2}}}-{R_{x} \over {R_{x}+R_{3}}}\right)V_{s}

where VG is the voltage of node D relative to node B.
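Both the balance condition and the unbalanced-bridge formula are easy to exercise numerically; a minimal sketch (the resistor values and Vs are arbitrary examples, and the galvanometer current is neglected, as in the text):

R1, R2, R3, Vs = 100.0, 200.0, 150.0, 5.0   # hypothetical bridge values

# balance: R2/R1 = Rx/R3  ->  Rx = R2*R3/R1
Rx_balanced = R2 * R3 / R1
print(Rx_balanced)                           # 300.0 ohm

def v_g(Rx):
    # voltage of node D relative to node B, from the two voltage dividers
    return (R2 / (R1 + R2) - Rx / (Rx + R3)) * Vs

print(v_g(Rx_balanced))         # 0.0 at balance
print(v_g(Rx_balanced * 1.01))  # a 1% drift gives a small, signed voltage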

Significance

The Wheatstone bridge illustrates the concept of a difference measurement, which can be extremely accurate. Variations on the Wheatstone bridge can be used to measure capacitance, inductance, impedance and other quantities, such as the amount of combustible gases in a sample, with an explosimeter. The Kelvin bridge was specially adapted from the Wheatstone bridge for measuring very low resistances. In many cases, the significance of measuring the unknown resistance is related to measuring the impact of some physical phenomenon (such as force, temperature, pressure, etc.), which thereby allows the use of the Wheatstone bridge in measuring those elements indirectly.

The concept was extended to alternating current measurements by James Clerk Maxwell in 1865 and further improved by Alan Blumlein around 1926.

Modifications of the fundamental bridge

The Wheatstone bridge is the fundamental bridge, but there are other modifications that can be made to measure various kinds of resistances when the fundamental Wheatstone bridge is not suitable. Some of the modifications are:

 

It's just that back then there were no 'tools':

computation felt thoroughly tedious. Today let us 'make up for it' ☆

 

 

 

 

 

 

 

STEM Notes: Classical Mechanics — Rotors [5], "Circuit Theory" 5 [Inductance] III · Impedance · A

How, then, should one understand a 'complex number' z = x + i \ y? If 'complex numbers' arose from 'solving' 'equations', say x^2 + 1 = 0, \ x = \pm i, which defines 'i = \sqrt{-1}', their 'meaning' still remains obscure. Even granting that every 'point' (x, y) of the 'complex plane' corresponds to a 'complex number' z = x + i \ y, it may still be unclear what 'i' actually means. Suppose we look further at complex 'addition':

if z_1 = x_1 + i \ y_1 and z_2 = x_2 + i \ y_2,

then z_1 + z_2 = (x_1 + x_2) + i \ (y_1 + y_2).

This is an addition much like that of 'vectors'; is the meaning of 'i' perhaps hidden within it?

[Figures: positive and negative rotation; rotation by the imaginary unit; 90-degree rotations in the complex plane.]

In 1998 Professor Paul J. Nahin of the University of New Hampshire (USA) wrote a book, 'An Imaginary Tale: the Story of the Square Root of −1', pointing out that the 'geometric meaning' Wessel had originally given is just:

i = \sqrt{-1} = 1 \ \angle 90^{\circ}

In other words, 'i' is the 'operator' of 'counter-clockwise rotation by ninety degrees'!

Looking at complex 'multiplication' through the 'polar-coordinate' representation:

if z_1 = r \cdot e^{i \ \theta}, \ z_2 = \alpha \cdot e^{i \ \beta}, then z_1 \cdot z_2 = \alpha \cdot r \cdot e^{i \ (\theta +\beta)},

which may be read as: the 'vector' z_1 is 'rotated counter-clockwise' through the angle 'β', and its 'length' is 'scaled' by a factor 'α'!!

Complex numbers truly are no simple 'numbers'! Small wonder they are 'complete'!!

電子和工程領域中,常常會使用到『正弦』 Sin 信號,一般可以使用『相量』 Phasor 來作簡化分析。『相量』是一個『複數』,也是一種『向量』,通常使用『極座標』表示,舉例來說一個『振幅』是 A,『角頻率』是 \omega,初始『相位角』是 \theta 的『正弦信號』可以表示為 A \ e^{j \  (\omega t + \theta)},這裡的『j』就是『複數的 i』。為什麼又要改用 j = \sqrt{-1} 的呢?這是因為再『電子學』和『電路學』領域中 i 通常代表著『電流』, v 通常代表了『電壓』,因此為了避免『混淆』起見,所以才會『更名用  j』。

Euler's formula, a formula of complex analysis, relates the trigonometric functions to the complex exponential function: for every real number x,

e^{j x} = \cos x + j \sin x

and its importance goes without saying!!
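Both claims, that i (here j) rotates by 90° and that the exponential traces the unit circle, take one line each to verify; a tiny sketch in Python, whose complex literals happen to use the engineers' j:

import cmath, math

z = 2 + 1j
print(z * 1j)                          # (-1+2j): rotated 90 deg CCW
print(cmath.exp(1j * cmath.pi) + 1)    # ~0: Euler's identity e^{j*pi} + 1 = 0
print(cmath.exp(0.7j))                 # cos(0.7) + j*sin(0.7)
print(math.cos(0.7), math.sin(0.7))    # matches the real and imaginary parts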

[Figures: a phasor (vector) diagram; a rotating unit phasor.]

─── 《【SONIC Π】電聲學補充《二》

 

If one doesn't know what an article is driving at, reading it is like a duck listening to thunder: just a heap of symbols! So here, first, a guide:

‧ Two complex numbers z_1 = x_1 + j \cdot y_1 and z_2 = x_2 + j \cdot y_2 are equal precisely when their real parts \Re (z_1) = x_1 , \Re (z_2) = x_2 and imaginary parts \Im (z_1) = y_1 , \Im (z_2) = y_2 agree correspondingly: \Re (z_1) = \Re (z_2), \ \Im (z_1) = \Im (z_2).

\Re\Im 都是線性算子︰

\Re(\alpha \cdot z_1 + \beta \cdot z_2) = \alpha \cdot \Re(z_1) + \beta \cdot \Re(z_2)

\Im(\alpha \cdot z_1 + \beta \cdot z_2) = \alpha \cdot \Im(z_1) + \beta \cdot \Im(z_2)

※ \Re and \Im are compatible with the differential operator \frac{d^n}{d t^n}:

\frac{d^n}{d t^n}  \left[ \Re (x(t) + j \cdot y(t)) \right] = \frac{d^n}{d t^n} x(t) = \Re \left[ \frac{d^n}{d t^n} (x(t) + j \cdot y(t)) \right]

\frac{d^n}{d t^n}  \left[ \Im (x(t) + j \cdot y(t)) \right] = \frac{d^n}{d t^n} y(t) = \Im \left[ \frac{d^n}{d t^n} (x(t) + j \cdot y(t)) \right]

 

Hence the 'complex sinusoid' A \ e^{j (\omega t + \theta)} = A e^{j \theta} \cdot e^{j \omega t} = \mathcal A \cdot e^{j \omega t}: applied to a 'single-frequency' \omega AC system, it lets the original differential equations be rewritten as algebraic equations in the 'phasors', sparing much computational trouble!!

 

Ready to read on?

Impedance of circuit elements

 

The impedance \displaystyle Z_{R} of an ideal resistor is a real number, called 'resistance':

\displaystyle Z_{R}=R

where \displaystyle R is the resistance of the ideal resistor.

The impedances \displaystyle Z_{C} and \displaystyle Z_{L} of an ideal capacitor and an ideal inductor are both purely imaginary:

\displaystyle Z_{C}={\frac {1}{j\omega C}}
\displaystyle Z_{L}=j\omega L

where \displaystyle C is the capacitance of the ideal capacitor and \displaystyle L is the inductance of the ideal inductor.

Note the following two very useful identities:

\displaystyle j=e^{j\pi /2}
\displaystyle -j=e^{-j\pi /2}

Applying these identities, the impedances of the ideal capacitor and the ideal inductor can be rewritten in exponential form:

\displaystyle Z_{C}={\frac {e^{-j\pi /2}}{\omega C}}
\displaystyle Z_{L}=\omega Le^{j\pi /2}

Given the amplitude of the current through an impedance element, the magnitude of the complex impedance gives the amplitude of the voltage across the element, while its exponential factor gives the phase relationship.

Resistors, capacitors and inductors are the three basic circuit elements. The following paragraphs derive their impedances. The derivations assume sinusoidal signals; since, by Fourier analysis, an arbitrary signal can be viewed as a sum of sinusoids, they extend to arbitrary signals.

Resistor

By Ohm's law, the time-dependent current \displaystyle i_{R}(t) through a resistor and the time-dependent voltage \displaystyle v_{R}(t) across it are related by

\displaystyle v_{R}(t)=i_{R}(t)R

where \displaystyle t is time.

Setting the voltage signal to

\displaystyle v_{R}(t)=V_{0}\cos(\omega t)=\operatorname {Re} \{V_{0}e^{j\omega t}\},\qquad V_{0}>0

the current is

\displaystyle i_{R}(t)={\frac {V_{0}}{R}}\cos(\omega t)=\operatorname {Re} \left\{{\frac {V_{0}}{R}}e^{j\omega t}\right\}

The two amplitudes are \displaystyle V_{0} and \displaystyle V_{0}/R . Hence the impedance is

\displaystyle Z_{R}=R

The impedance of a resistor is real; an ideal resistor introduces no phase difference.

Capacitor

The current \displaystyle i_{C}(t) through a capacitor and the voltage \displaystyle v_{C}(t) across it are related by

\displaystyle i_{C}(t)=C{\frac {\operatorname {d} v_{C}(t)}{\operatorname {d} t}}

Setting the voltage signal to

\displaystyle v_{C}(t)=V_{0}\sin(\omega t)=\operatorname {Re} \{V_{0}e^{j(\omega t-\pi /2)}\}=\operatorname {Re} \{V_{C}e^{j\omega t}\},\qquad V_{0}>0

the current is

\displaystyle i_{C}(t)=\omega V_{0}C\cos(\omega t)=\operatorname {Re} \{\omega V_{0}Ce^{j\omega t}\}=\operatorname {Re} \{I_{C}e^{j\omega t}\}

The quotient of the two is

\displaystyle {\frac {v_{C}(t)}{i_{C}(t)}}={\frac {V_{0}\sin(\omega t)}{\omega V_{0}C\cos(\omega t)}}={\frac {\sin(\omega t)}{\omega C\sin \left(\omega t+{\frac {\pi }{2}}\right)}}

So the magnitude of a capacitor's impedance is \displaystyle 1/\omega C , with the AC voltage lagging the AC current by 90°; equivalently, the AC current leads the AC voltage by 90°.

In phasor form,

\displaystyle V_{C}=V_{0}e^{j(-\pi /2)},\qquad V_{0}>0
\displaystyle I_{C}=\omega V_{0}Ce^{j0}
\displaystyle Z_{C}={\frac {e^{-j\pi /2}}{\omega C}}

or, applying Euler's formula,

\displaystyle Z_{C}={\frac {1}{j\omega C}}

Inductor

The current \displaystyle i_{L}(t) through an inductor and the voltage \displaystyle v_{L}(t) across it are related by

\displaystyle v_{L}(t)=L{\frac {\operatorname {d} i_{L}(t)}{\operatorname {d} t}}

Setting the current signal to

\displaystyle i_{L}(t)=I_{0}\cos(\omega t)

the voltage is

\displaystyle v_{L}(t)=-\omega LI_{0}\sin(\omega t)=\omega LI_{0}\cos(\omega t+\pi /2)

The quotient of the two is

\displaystyle {\frac {v_{L}(t)}{i_{L}(t)}}={\frac {\omega L\cos(\omega t+\pi /2)}{\cos(\omega t)}}

So the magnitude of an inductor's impedance is \displaystyle \omega L , with the AC voltage leading the AC current by 90°; equivalently, the AC current lags the AC voltage by 90°.

In phasor form,

\displaystyle i_{L}(t)=I_{0}e^{j\omega t},\qquad I_{0}>0
\displaystyle v_{L}(t)=\omega LI_{0}e^{j(\omega t+\pi /2)}
\displaystyle Z_{L}=\omega Le^{j\pi /2}

or, applying Euler's formula,

\displaystyle Z_{L}=j\omega L

Generalized s-plane impedance

Defining impedance by \displaystyle j\omega applies only to circuits driven by steady-state AC signals. If the concept of impedance is extended by replacing \displaystyle j\omega with the complex angular frequency \displaystyle s , it can be applied to circuits driven by arbitrary AC signals: a signal expressed in the time domain becomes, after a Laplace transform, a signal expressed in the frequency domain, written in terms of the complex angular frequency. In this more general notation, the impedances of the basic circuit elements are:

Element      Impedance expression
Resistor     \displaystyle R
Capacitor    \displaystyle 1/sC
Inductor     \displaystyle sL

For DC circuits this reduces to \displaystyle s=0 ; for steady sinusoidal AC signals, \displaystyle s=j\omega .
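The whole table collapses to three one-liners in SymPy; evaluating at s = jω recovers the magnitudes and the ±π/2 phases derived above (a sketch, with my own symbol names):

import sympy as sp

w = sp.symbols('omega', positive=True)
R, L, C = sp.symbols('R L C', positive=True)
s = sp.I * w                        # steady sinusoidal state: s = j*omega

Z_R, Z_C, Z_L = R, 1/(s*C), s*L     # the s-plane impedances of the table
for Z in (Z_R, Z_C, Z_L):
    print(sp.simplify(Z), sp.Abs(Z), sp.arg(Z))
# magnitudes: R, 1/(omega*C), omega*L; phases: 0, -pi/2, +pi/2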

 

With that, one should be able to fathom the 'secret' of the 'phasor'??

Phasor

In physics and engineering, a phasor (a portmanteau of phase vector[1][2]), is a complex number representing a sinusoidal function whose amplitude (A), angular frequency (ω), and initial phase (θ) are time-invariant. It is related to a more general concept called analytic representation,[3] which decomposes a sinusoid into the product of a complex constant and a factor that encapsulates the frequency and time dependence. The complex constant, which encapsulates amplitude and phase dependence, is known as phasor, complex amplitude,[4][5] and (in older texts) sinor[6] or even complexor.[6]

A common situation in electrical networks is the existence of multiple sinusoids all with the same frequency, but different amplitudes and phases. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be factored into the product of a linear combination of phasors (known as phasor arithmetic) and the time/frequency dependent factor that they all have in common.

The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well.[6] An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) corresponds to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain.[7][8] The originator of the phasor transform was Charles Proteus Steinmetz working at General Electric in the late 19th century.[9][10]

Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform, which additionally can be used to (simultaneously) derive the transient response of an RLC circuit.[8][10] However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady state analysis is required.[10]

Definition

Fig 2. When function \displaystyle \scriptstyle A\cdot e^{i(\omega t+\theta )} is depicted in the complex plane, the vector formed by its imaginary and real parts rotates around the origin. Its magnitude is A, and it completes one cycle every 2π/ω seconds. θ is the angle it forms with the real axis at t = n•2π/ω, for integer values of n.

 

Euler’s formula indicates that sinusoids can be represented mathematically as the sum of two complex-valued functions:

\displaystyle A\cdot \cos(\omega t+\theta )=A\cdot {\frac {e^{i(\omega t+\theta )}+e^{-i(\omega t+\theta )}}{2}}, [a]

or as the real part of one of the functions:

\displaystyle {\begin{aligned}A\cdot \cos(\omega t+\theta )=\operatorname {Re} \{A\cdot e^{i(\omega t+\theta )}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\omega t}\}.\end{aligned}}

The function \displaystyle A\cdot e^{i(\omega t+\theta )} is called the analytic representation of \displaystyle A\cdot \cos(\omega t+\theta ) . Figure 2 depicts it as a rotating vector in a complex plane. It is sometimes convenient to refer to the entire function as a phasor,[11] as we do in the next section. But the term phasor usually implies just the static vector \displaystyle Ae^{i\theta } . An even more compact representation of a phasor is the angle notation:  \displaystyle A\angle \theta . See also vector notation.

Phasor arithmetic

Multiplication by a constant (scalar)

Multiplication of the phasor  \displaystyle Ae^{i\theta }e^{i\omega t} by a complex constant,   \displaystyle Be^{i\phi }  , produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

\displaystyle {\begin{aligned}\operatorname {Re} \{(Ae^{i\theta }\cdot Be^{i\phi })\cdot e^{i\omega t}\}&=\operatorname {Re} \{(ABe^{i(\theta +\phi )})\cdot e^{i\omega t}\}\\&=AB\cos(\omega t+(\theta +\phi ))\end{aligned}}

In electronics,  \displaystyle Be^{i\phi } would represent an impedance, which is independent of time. In particular it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid.

Differentiation and integration

The time derivative or integral of a phasor produces another phasor.[b] For example:

\displaystyle {\begin{aligned}\operatorname {Re} \left\{{\frac {d}{dt}}(Ae^{i\theta }\cdot e^{i\omega t})\right\}=\operatorname {Re} \{Ae^{i\theta }\cdot i\omega e^{i\omega t}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\pi /2}\omega e^{i\omega t}\}=\operatorname {Re} \{\omega Ae^{i(\theta +\pi /2)}\cdot e^{i\omega t}\}=\omega A\cdot \cos(\omega t+\theta +\pi /2)\end{aligned}}

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant \displaystyle i\omega =(e^{i\pi /2}\cdot \omega ) .

Similarly, integrating a phasor corresponds to multiplication by \displaystyle {\frac {1}{i\omega }}={\frac {e^{-i\pi /2}}{\omega }} . The time-dependent factor,  \displaystyle e^{i\omega t} , is unaffected.

When we solve a linear differential equation with phasor arithmetic, we are merely factoring \displaystyle e^{i\omega t} out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

\displaystyle {\frac {d\ v_{C}(t)}{dt}}+{\frac {1}{RC}}v_{C}(t)={\frac {1}{RC}}v_{S}(t)

When the voltage source in this circuit is sinusoidal:

\displaystyle v_{S}(t)=V_{P}\cdot \cos(\omega t+\theta ),

we may substitute \displaystyle {\begin{aligned}v_{S}(t)&=\operatorname {Re} \{V_{s}\cdot e^{i\omega t}\}\\\end{aligned}}

\displaystyle v_{C}(t)=\operatorname {Re} \{V_{c}\cdot e^{i\omega t}\},

where phasor \displaystyle V_{s}=V_{P}e^{i\theta } , and phasor \displaystyle V_{c} is the unknown quantity to be determined.

In the phasor shorthand notation, the differential equation reduces to

\displaystyle i\omega V_{c}+{\frac {1}{RC}}V_{c}={\frac {1}{RC}}V_{s} 

Solving for the phasor capacitor voltage gives

\displaystyle V_{c}={\frac {1}{1+i\omega RC}}\cdot (V_{s})={\frac {1-i\omega RC}{1+(\omega RC)^{2}}}\cdot (V_{P}e^{i\theta })

As we have seen, the factor multiplying \displaystyle V_{s} represents differences of the amplitude and phase of \displaystyle v_{C}(t) relative to \displaystyle V_{P} and \displaystyle \theta .

In polar coordinate form, it is

\displaystyle {\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot e^{-i\phi (\omega )},{\text{ where }}\phi (\omega )=\arctan(\omega RC).

Therefore

\displaystyle v_{C}(t)={\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot V_{P}\cos(\omega t+\theta -\phi (\omega ))
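This phasor answer can be checked against a direct numerical integration of the original differential equation; a sketch with SciPy (the numbers are arbitrary test values of mine):

import numpy as np
from scipy.integrate import odeint

R, C, Vp, w, theta = 1e3, 1e-6, 1.0, 2*np.pi*500, 0.3   # example values

def dv_dt(v, t):                 # dv/dt + v/(RC) = vs(t)/(RC)
    return (Vp*np.cos(w*t + theta) - v) / (R*C)

t = np.linspace(0.0, 0.05, 20001)
v_num = odeint(dv_dt, 0.0, t).ravel()

phi = np.arctan(w*R*C)
v_phasor = Vp/np.sqrt(1 + (w*R*C)**2) * np.cos(w*t + theta - phi)

mask = t > 5*R*C                 # let the start-up transient die away
print(np.max(np.abs(v_num[mask] - v_phasor[mask])))   # small: they agree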

Addition

The sum of phasors as addition of rotating vectors

The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency:

\displaystyle {\begin{aligned}A_{1}\cos(\omega t+\theta _{1})+A_{2}\cos(\omega t+\theta _{2})&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}\}+\operatorname {Re} \{A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}+A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{1}e^{i\theta _{1}}+A_{2}e^{i\theta _{2}})e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{3}e^{i\theta _{3}})e^{i\omega t}\}\\[8pt]&=A_{3}\cos(\omega t+\theta _{3}),\end{aligned}}

where

\displaystyle A_{3}^{2}=(A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2})^{2}+(A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2})^{2},
\displaystyle \theta _{3}=\arctan \left({\frac {A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2}}{A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2}}}\right)

or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences):

\displaystyle A_{3}^{2}=A_{1}^{2}+A_{2}^{2}-2A_{1}A_{2}\cos(180^{\circ }-\Delta \theta )=A_{1}^{2}+A_{2}^{2}+2A_{1}A_{2}\cos(\Delta \theta ),

where \displaystyle \Delta \theta =\theta _{1}-\theta _{2} .

A key point is that A3 and θ3 do not depend on ω or t, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written

\displaystyle A_{1}\angle \theta _{1}+A_{2}\angle \theta _{2}=A_{3}\angle \theta _{3}.

Another way to view addition is that two vectors with coordinates [ A1 cos(ωt + θ1), A1 sin(ωt + θ1) ] and [ A2 cos(ωt + θ2), A2 sin(ωt + θ2) ] are added vectorially to produce a resultant vector with coordinates [ A3 cos(ωt + θ3), A3 sin(ωt + θ3) ]. (see animation)

Phasor diagram of three waves in perfect destructive interference

In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle between each phasor to the next is 120° (2π/3 radians), or one third of a wavelength λ/3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.

In other words, what this shows is that

\displaystyle \cos(\omega t)+\cos(\omega t+2\pi /3)+\cos(\omega t-2\pi /3)=0.
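Phasor addition, and the three-phase cancellation just described, can both be verified in a few lines (the helper name is mine):

import cmath, math

def add_phasors(*terms):         # each term is an (amplitude, theta) pair
    z = sum(a * cmath.exp(1j * th) for a, th in terms)
    return abs(z), cmath.phase(z)

A3, th3 = add_phasors((1.0, 0.0), (1.0, math.radians(60)))
print(A3, math.degrees(th3))     # sqrt(3) ~ 1.732 at 30 degrees

# three identical sinusoids 120 degrees apart cancel exactly
A, _ = add_phasors((1, 0), (1, 2*math.pi/3), (1, -2*math.pi/3))
print(A)                         # ~0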

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength \displaystyle \lambda . This is why in single slit diffraction, the minima occur when light from the far edge travels a full wavelength further than the light from the near edge.

As the single vector rotates in an anti-clockwise direction, its tip at point A will rotate one complete revolution of 360° or 2π radians representing one complete cycle. If the length of its moving tip is transferred at different angular intervals in time to a graph as shown above, a sinusoidal waveform would be drawn starting at the left with zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, t = 0. When the vector is horizontal the tip of the vector represents the angles at 0°, 180°, and at 360°.

Likewise, when the tip of the vector is vertical it represents the positive peak value, ( +Amax ) at 90° or π/2 and the negative peak value, ( −Amax ) at 270° or 3π/2. Then the time axis of the waveform represents the angle either in degrees or radians through which the phasor has moved. So we can say that a phasor represents a scaled voltage or current value of a rotating vector which is "frozen" at some point in time, ( t ) and in our example above, this is at an angle of 30°.

Sometimes when we are analysing alternating waveforms we may need to know the position of the phasor, representing the alternating quantity at some particular instant in time especially when we want to compare two different waveforms on the same axis. For example, voltage and current. We have assumed in the waveform above that the waveform starts at time t = 0 with a corresponding phase angle in either degrees or radians.

But if a second waveform starts to the left or to the right of this zero point, or if we want to represent in phasor notation the relationship between the two waveforms, then we will need to take into account this phase difference, Φ of the waveform. Consider the diagram below from the previous Phase Difference tutorial.

 

 

 

 

 

 

 

 

STEM Notes: Classical Mechanics — Rotors [5], "Circuit Theory" 5 [Inductance] III · Impedance · A · Part 1

[Figures: Hero of Alexandria; an illustration of the aeolipile; Heron's wind-wheel; another of Heron's devices.]

The ancient Greek mathematician Hero of Alexandria (Ἥρων ὁ Ἀλεξανδρεύς), born around AD 10 and dying around AD 70, lived in the Roman province of Egypt of the Ptolemaic era. Hero was an engineer active in his home city and is considered the greatest experimenter of antiquity. In the Hellenistic civilization that followed Alexander the Great's conquest of the Persian Empire, his writings enjoyed great renown in the scientific tradition. Since most of Hero's works, spanning mathematics, mechanics, physics and pneumatics, take the form of lecture notes, it is believed he taught at the Musaeum, perhaps also lecturing at the Library of Alexandria.

Hero's inventions were many and varied; some say the most famous was the 'wind organ', perhaps the earliest device to harness 'wind power'. Another was the steam engine called the 'aeolipile', a 'steam engine' some two thousand years before the 'Industrial Revolution'. In his work Mechanics and Optics he described the world's first 'vending machine': a user dropped a coin into a slot on top of the machine, and once the slot accepted the coin, the machine dispensed a fixed portion of 'holy water'. Hero is generally thought to have been an 'atomist'; some of his ideas derive from the works of Ctesibius, and judging from his various inventions, his creativity was far ahead of its time!

It is even said that Hero was the first person to take note of 'imaginary numbers'!!

[Figures: the title page of Bombelli's Algebra; Leonhard Euler; Euler's identity e^{i \pi} + 1 = 0; Euler's formula on the unit circle; complex conjugation; the imaginary unit.]

In 1572 the Italian mathematician Rafael Bombelli, a famous engineer of the Renaissance and an outstanding mathematician, published L'Algebra, in which he discussed the 'square roots of negative numbers' \sqrt{- a}, \ a>0; the book was widely influential in Europe.

In 1637 Descartes, in his La Géométrie, coined the term 'imaginary numbers' for these 'numbers that do not really exist'.

The great Swiss mathematician and physicist Leonhard Euler is said to have studied theology in his youth; a devout believer all his life, he would tolerate no disparagement of God uttered in his presence. Once, when Denis Diderot, the French Enlightenment thinker, materialist philosopher, atheist and writer, a leading figure of the Encyclopédistes, visited the court of Catherine II, Euler challenged him: 'Sir, e^{i \pi} + 1 = 0, therefore God exists: reply!' This author suspects the story is merely 'apocryphal'. Euler was, in any case, a prolific author, with sixty to eighty large volumes to his name. On 18 September 1783, after dinner, Euler was drinking tea and playing with his little granddaughter when suddenly the pipe fell from his hand. He said 'my pipe' and bent to pick it up, but never stood again; clutching his head, he said only, 'I am dying'. The French philosopher the Marquis de Condorcet wrote: '...il cessa de calculer et de vivre', 'he ceased to calculate and to live'!!

In 1797 the Norwegian-Danish mathematician Caspar Wessel presented 'Om directionens analytiske betegning' to the Royal Danish Academy of Sciences and Letters, proposing the 'complex plane' and studying the geometric meaning of complex numbers; written in 'Danish', it attracted almost no attention. The French amateur mathematician Jean-Robert Argand in 1806, and the celebrated German mathematician Johann Carl Friedrich Gauß in 1831, each 'rediscovered' the same result!!

The imaginary axis and the real axis together form a plane called the complex plane; every point of the complex plane corresponds to a complex number.

─── 《【SONIC Π】電聲學補充《二》

 

If one already knows that the generalized 'complex sinusoid' (Complex Sinusoid)

\displaystyle y(t) = \mathcal A e^{st}, \ where

\displaystyle \mathcal A = A \cdot e^{j \phi}

\displaystyle s = \sigma + j \omega

subsumes many of the commonly used 'integral transforms', then the 'differential equations' of a linear time-invariant (LTI) system can be turned into 'systems of algebraic equations' and solved that way.

And, drawing inferences from one case to the next, one sees that 'inertia' m in 'Newtonian mechanics' \vec{F} = m \cdot \vec{a}, which measures the ability to 'resist changes of motion', may be likened to 'electrical resistance' R in 'Ohm's law' V = R \cdot I.

How, then, should one regard the 'momentum' form

\frac{d}{dt} \vec{p} = \vec{F}

of that law? Could it be that 'speed' is not an 'impedance' to change of motion!?

Reading on a blazing day: with a quiet mind, coolness comes of itself? ☆

Electrical impedance

Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. The term complex impedance may be used interchangeably.

Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of a sinusoidal voltage between its terminals to the complex representation of the current flowing through it.[1] In general, it depends upon the frequency of the sinusoidal voltage.

Impedance extends the concept of resistance to AC circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. When a circuit is driven with direct current (DC), there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle.

The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law. In multiple port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.[2]

Impedance is a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω). Its symbol is usually Z, and it may be represented by writing its magnitude and phase in the form |Z|∠θ. However, cartesian complex number representation is often more powerful for circuit analysis purposes.

The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called mho.

Introduction

The term impedance was coined by Oliver Heaviside in July 1886.[3][4] Arthur Kennelly was the first to represent impedance with complex numbers in 1893.[5]

In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by the magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part.

Impedance is defined as the frequency domain ratio of the voltage to the current.[6] In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω.

For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular:

  • The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude;
  • the phase of the complex impedance is the phase shift by which the current lags the voltage.
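
To illustrate the two points above, here is a minimal numerical sketch; the component values and drive frequency are illustrative assumptions, not taken from the text:

```python
# Magnitude and phase of a series RL impedance at one frequency (values assumed).
import cmath, math

R, L = 100.0, 10e-3                  # 100 ohm, 10 mH (assumptions)
w = 2 * math.pi * 1e3                # angular frequency for f = 1 kHz

Z = complex(R, w * L)                # Z = R + j*w*L
mag, theta = cmath.polar(Z)          # |Z| and arg(Z)

print(mag)                           # |V|/|I| amplitude ratio, ~118.1 ohm
print(math.degrees(theta))           # current lags voltage by ~32.1 degrees
```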

Quantitatively, the impedance of a two-terminal network is represented as a complex quantity \displaystyle \scriptstyle Z, defined in Cartesian form as

 \displaystyle \ Z=R+jX.

Here the real part of impedance is the resistance \displaystyle \scriptstyle R, and the imaginary part is the reactance \displaystyle \scriptstyle X.

The polar form conveniently captures both magnitude and phase characteristics as

 \displaystyle \ Z=|Z|e^{j\arg(Z)}

where the magnitude \displaystyle \scriptstyle |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument \displaystyle \scriptstyle \arg(Z) (commonly given the symbol \displaystyle \scriptstyle \theta) gives the phase difference between voltage and current. \displaystyle \scriptstyle j is the imaginary unit, and is used instead of \displaystyle \scriptstyle i in this context to avoid confusion with the symbol for electric current.

Where it is needed to add or subtract impedances, the cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
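
As a sketch of that workflow (the two impedances below are arbitrary assumptions): the parallel combination is done by multiplication and division in Cartesian form, and the result is then read out in polar form:

```python
# Parallel combination of two impedances, converting between forms (values assumed).
import cmath, math

Z1 = complex(100.0, 62.8)            # branch 1, e.g. resistive-inductive
Z2 = complex(50.0, -31.4)            # branch 2, e.g. resistive-capacitive

Z_par = (Z1 * Z2) / (Z1 + Z2)        # "product over sum" for two parallel impedances
mag, theta = cmath.polar(Z_par)      # back to magnitude and angle

print(Z_par)                         # Cartesian form R + jX
print(mag, math.degrees(theta))      # polar form |Z|, angle in degrees
```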

Complex impedance

A graphical representation of the complex impedance plane

The impedance of a two-terminal circuit element is represented as a complex quantity \displaystyle \scriptstyle Z and the term complex impedance may also be used.


Complex voltage and current

Generalized impedances in a circuit can be drawn with the same symbol as a resistor (US ANSI or DIN Euro) or with a labeled box.

To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time denoted as \displaystyle \scriptstyle V and \displaystyle \scriptstyle I.[7][8]

\displaystyle {\begin{aligned}V&=|V|e^{j(\omega t+\phi _{V})},\\I&=|I|e^{j(\omega t+\phi _{I})}.\end{aligned}}

The impedance of a bipolar circuit is defined as the ratio of these quantities:

\displaystyle Z={\frac {V}{I}}={\frac {|V|}{|I|}}e^{j(\phi _{V}-\phi _{I})}.

Hence, denoting \displaystyle \theta =\phi _{V}-\phi _{I} , we have

\displaystyle {\begin{aligned}|V|&=|I||Z|,\\\phi _{V}&=\phi _{I}+\theta .\end{aligned}}

The magnitude equation is the familiar Ohm’s law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.
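
A quick numeric check of these two relations, with assumed phasor values (cmath.rect builds a complex number from a magnitude and a phase):

```python
# Verifying |V| = |I||Z| and phi_V = phi_I + theta numerically (values assumed).
import cmath

V = cmath.rect(10.0, 0.5)            # |V| = 10, phi_V = 0.5 rad
Z = cmath.rect(118.1, 0.561)         # |Z| = 118.1, theta = 0.561 rad

I = V / Z                            # complex Ohm's law
print(abs(I), abs(V) / abs(Z))       # both give |I|
print(cmath.phase(I), 0.5 - 0.561)   # both give phi_I = phi_V - theta
```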

Validity of complex representation

This representation using complex exponentials may be justified by noting that (by Euler’s formula):

\displaystyle \ \cos(\omega t+\phi )={\frac {1}{2}}{\Big [}e^{j(\omega t+\phi )}+e^{-j(\omega t+\phi )}{\Big ]}

The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term; the results will be identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that

\displaystyle \ \cos(\omega t+\phi )=\Re {\Big \{}e^{j(\omega t+\phi )}{\Big \}}
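
A one-line numerical sanity check of this identity, on an arbitrary time grid (the frequency and phase are assumed values):

```python
# Real part of the complex exponential reproduces the real sinusoid.
import numpy as np

w, phi = 2 * np.pi * 50.0, 0.7               # assumed 50 Hz, 0.7 rad
t = np.linspace(0.0, 0.04, 1000)             # two periods

lhs = np.cos(w * t + phi)                    # real-valued sinusoid
rhs = np.real(np.exp(1j * (w * t + phi)))    # Re{ e^{j(wt+phi)} }
print(np.allclose(lhs, rhs))                 # True
```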

Ohm’s law

 

The meaning of electrical impedance can be understood by substituting it into Ohm’s law.[9][10] Assuming a two-terminal circuit element with impedance \displaystyle \scriptstyle Z is driven by a sinusoidal voltage or current as above, there holds

\displaystyle \ V=IZ=I|Z|e^{j\arg(Z)}

The magnitude of the impedance \displaystyle \scriptstyle |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance \displaystyle \scriptstyle Z for a given current \displaystyle \scriptstyle I. The phase factor tells us that the current lags the voltage by a phase of \displaystyle \scriptstyle \theta \;=\;\arg(Z) (i.e., in the time domain, the current signal is shifted \displaystyle \scriptstyle {\frac {\theta }{2\pi }}T later with respect to the voltage signal).

Just as impedance extends Ohm’s law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin’s theorem and Norton’s theorem, can also be extended to AC circuits by replacing resistance with impedance.
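
As a small sketch of this extension (all component values are assumptions), voltage division applied with impedances gives the inductor voltage of a series RL circuit directly:

```python
# AC voltage divider: V_L = Z_L / (R + Z_L) * V_in, evaluated at one frequency.
import cmath, math

R, L = 100.0, 10e-3                   # assumed values
w = 2 * math.pi * 1e3                 # assumed f = 1 kHz
Vin = 1.0                             # 1 V input phasor, zero phase

ZL = complex(0.0, w * L)              # j*w*L
VL = ZL / (R + ZL) * Vin              # divider rule with impedances

print(abs(VL))                        # gain, ~0.53
print(math.degrees(cmath.phase(VL))) # phase, ~57.9 degrees (V_L leads V_in)
```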

Phasors

 

A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids, where they can often reduce a differential equation problem to an algebraic one.

The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm’s law given above, recognising that the factors of \displaystyle \scriptstyle e^{j\omega t} cancel.
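
Since the factors of e^{jωt} cancel, the phasor ratio is independent of time; a tiny SymPy check (the symbol names are my own):

```python
# The e^{j*omega*t} factors cancel in the ratio V/I, leaving the phasor ratio.
import sympy as sp

t, w = sp.symbols('t omega', real=True)
V0, I0 = sp.symbols('V_0 I_0')        # complex phasor amplitudes

v = V0 * sp.exp(sp.I * w * t)         # complex voltage
i = I0 * sp.exp(sp.I * w * t)         # complex current
print(sp.simplify(v / i))             # -> V_0/I_0, independent of t
```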