STEM Notes: Classical Mechanics: The Rotor (V) — 《Circuit Theory》 Part Five (Inductance) III · Impedance · A

How, then, should one understand the complex number z = x + i \ y ? If one says that 'complex numbers' originated in 'solving equations', for instance x^2 + 1 = 0, \ x = \pm i , which defines i = \sqrt{-1} , their 'meaning' still remains obscure. Even saying that every point (x, y) of the 'complex plane' corresponds to a complex number z = x + i \ y may still leave it unclear what 'i' actually means. Suppose we look further at complex 'addition':

Let z_1 = x_1 + i \ y_1 and z_2 = x_2 + i \ y_2 .

Then z_1 + z_2 = (x_1 + x_2) + i \ (y_1 + y_2) .

This is a kind of addition resembling 'vector' addition; could the meaning of 'i' be hidden within it?

[Figures: positive/negative rotation; rotation by i; 90-degree rotations in the complex plane]

In 1998, Paul J. Nahin, a professor at the University of New Hampshire, published the book An Imaginary Tale: The Story of the Square Root of −1, pointing out that the geometric meaning Wessel had in mind from the start is:

i = \sqrt{-1} = 1 \ \angle 90^{\circ}

That is to say, 'i' is the 'operator' that 'rotates counterclockwise by ninety degrees'!

Suppose we look at complex 'multiplication' through the 'polar' representation of complex numbers:

Let z_1 = r \cdot e^{i \ \theta}, \ z_2 = \alpha \cdot e^{i \ \beta} ; then z_1 \cdot z_2 = \alpha \cdot r \cdot e^{i \ (\theta +\beta)} .

This can be read as: the 'vector' z_1 is 'rotated counterclockwise' through the angle 'β' and its 'length' is 'scaled' by the factor 'α'!!

Complex numbers really are no simple 'numbers'! Small wonder they are 'complete'!!
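This rotate-and-scale reading of complex multiplication is easy to check numerically; here is a minimal Python sketch using only the standard library (the particular magnitudes and angles are arbitrary examples):

```python
import cmath
import math

# Multiplying by i rotates a complex number 90 degrees counterclockwise:
assert cmath.isclose((1 + 0j) * 1j, 1j)          # 1 -> i, one quarter turn
assert cmath.isclose(1j * 1j, -1 + 0j)           # two quarter turns: i^2 = -1

# In polar form, z1 * z2 multiplies the magnitudes and adds the angles.
z1 = cmath.rect(2.0, math.radians(30))           # r = 2, theta = 30 degrees
z2 = cmath.rect(3.0, math.radians(45))           # alpha = 3, beta = 45 degrees
prod = z1 * z2
assert math.isclose(abs(prod), 6.0)                          # lengths multiply
assert math.isclose(math.degrees(cmath.phase(prod)), 75.0)   # angles add
```

`cmath.rect(r, phi)` builds a complex number directly from the polar data, so the rotate-and-scale picture falls straight out of the assertions.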

In electronics and engineering, sinusoidal (Sin) signals appear constantly, and analysis is usually simplified with 'phasors' (Phasor). A phasor is a complex number, and also a kind of vector, normally written in polar form. For example, a sinusoidal signal with amplitude A, angular frequency \omega and initial phase \theta can be represented as A \ e^{j \ (\omega t + \theta)}, where the 'j' is the complex 'i'. Why switch to j = \sqrt{-1}? Because in electronics and circuit theory, i usually denotes current and v denotes voltage, so the name is changed to j to avoid confusion.

Euler's formula, a result of complex analysis, relates the trigonometric functions to the complex exponential function: for every real number x,

e^{j x} = \cos x + j \sin x

and its importance goes without saying!!
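As a sanity check, Euler's formula can be evaluated at an arbitrary point with the standard cmath module (x = 0.7 is just an example value):

```python
import cmath
import math

x = 0.7                                   # an arbitrary real number
lhs = cmath.exp(1j * x)                   # e^{jx}
rhs = complex(math.cos(x), math.sin(x))   # cos x + j sin x
assert cmath.isclose(lhs, rhs)            # Euler's formula holds numerically
```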

[Figures: phasor vector diagram; a rotating phasor]

─── from 《【SONIC Π】電聲學補充《二》》

 

If readers do not know what an article is driving at, it reads like thunder to a duck: nothing but a heap of symbols! So here, first, is a guide:

‧ Two complex numbers z_1 = x_1 + j \cdot y_1 and z_2 = x_2 + j \cdot y_2 are equal when their real parts \Re (z_1) = x_1 , \Re (z_2) = x_2 and imaginary parts \Im (z_1) = y_1 , \Im (z_2) = y_2 are correspondingly equal: \Re (z_1) = \Re (z_2), \ \Im (z_1) = \Im (z_2) .

\Re\Im 都是線性算子︰

\Re(\alpha \cdot z_1 + \beta \cdot z_2) = \alpha \cdot \Re(z_1) + \beta \cdot \Re(z_2)

\Im(\alpha \cdot z_1 + \beta \cdot z_2) = \alpha \cdot \Im(z_1) + \beta \cdot \Im(z_2)

‧ \Re and \Im commute with the differential operator \frac{d^n}{d t^n} :

\frac{d^n}{d t^n}  \left[ \Re (x(t) + j \cdot y(t)) \right] = \frac{d^n}{d t^n} x(t) = \Re \left[ \frac{d^n}{d t^n} (x(t) + j \cdot y(t)) \right]

\frac{d^n}{d t^n}  \left[ \Im (x(t) + j \cdot y(t)) \right] = \frac{d^n}{d t^n} y(t) = \Im \left[ \frac{d^n}{d t^n} (x(t) + j \cdot y(t)) \right]
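These commutation identities can be spot-checked numerically; the sketch below compares the derivative of \Re\{e^{j \omega t}\} = \cos(\omega t) with \Re\{j \omega \, e^{j \omega t}\} at one sample point (the values of omega and t are arbitrary):

```python
import cmath
import math

omega, t = 5.0, 0.37                     # arbitrary sample values
z = cmath.exp(1j * omega * t)

lhs = -omega * math.sin(omega * t)       # d/dt of Re{e^{j w t}} = cos(w t)
rhs = (1j * omega * z).real              # Re of (d/dt) e^{j w t}
assert math.isclose(lhs, rhs)            # Re and d/dt commute
```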

 

Therefore, writing the 'complex sinusoid' A \ e^{j (\omega t + \theta)} = A e^{j \theta} \cdot e^{j \omega t} = \mathcal A \cdot e^{j \omega t} and applying it to a single-frequency \omega AC system lets the original differential equations be rewritten as algebraic equations in the 'phasors', sparing much computational trouble!!

 

Shall we read on?

Impedance of circuit elements

 

The impedance \displaystyle Z_{R} of an ideal resistor is a real number, called 'resistance':

\displaystyle Z_{R}=R 

where \displaystyle R is the resistance of the ideal resistor.

The impedances \displaystyle Z_{C} and \displaystyle Z_{L} of an ideal capacitor and an ideal inductor are both imaginary numbers:

\displaystyle Z_{C}={\frac {1}{j\omega C}} 
\displaystyle Z_{L}=j\omega L 

where \displaystyle C is the capacitance of the ideal capacitor and \displaystyle L is the inductance of the ideal inductor.

Note the following two useful identities:

\displaystyle j=e^{j\pi /2} 
\displaystyle -j=e^{-j\pi /2} 

Applying these identities, the impedances of the ideal capacitor and the ideal inductor can be rewritten in exponential form:

\displaystyle Z_{C}={\frac {e^{-j\pi /2}}{\omega C}} 
\displaystyle Z_{L}=\omega Le^{j\pi /2} 

Given the amplitude of the current through an impedance element, the magnitude of the complex impedance gives the amplitude of the voltage across the element, and the exponential factor of the complex impedance gives the phase relationship.

Resistors, capacitors and inductors are the three basic circuit elements. The following sections derive the impedance of each. The derivations assume sinusoidal signals; since by Fourier analysis an arbitrary signal can be viewed as a sum of sinusoids, the derivations extend to arbitrary signals.

Resistor

By Ohm's law, the time-dependent current \displaystyle i_{R}(t) through a resistor and the time-dependent voltage \displaystyle v_{R}(t) across it are related by

\displaystyle v_{R}(t)=i_{R}(t)R 

where \displaystyle t is time.

Let the time-dependent voltage signal be

\displaystyle v_{R}(t)=V_{0}\cos(\omega t)=\operatorname {Re} \{V_{0}e^{j\omega t}\},\qquad V_{0}>0 

Then the time-dependent current is

\displaystyle i_{R}(t)={\frac {V_{0}}{R}}\cos(\omega t)=\operatorname {Re} \left\{{\frac {V_{0}}{R}}e^{j\omega t}\right\}

The magnitudes of the two are \displaystyle V_{0} and \displaystyle V_{0}/R respectively, so the impedance is

\displaystyle Z_{R}=R 

The impedance of a resistor is a real number: an ideal resistor introduces no phase shift.

Capacitor

The time-dependent current \displaystyle i_{C}(t) through a capacitor and the time-dependent voltage \displaystyle v_{C}(t) across it are related by

\displaystyle i_{C}(t)=C{\frac {\operatorname {d} v_{C}(t)}{\operatorname {d} t}} 

Let the time-dependent voltage signal be

\displaystyle v_{C}(t)=V_{0}\sin(\omega t)=\operatorname {Re} \{V_{0}e^{j(\omega t-\pi /2)}\}=\operatorname {Re} \{V_{C}e^{j\omega t}\},\qquad V_{0}>0 

Then the current is

\displaystyle i_{C}(t)=\omega V_{0}C\cos(\omega t)=\operatorname {Re} \{\omega V_{0}Ce^{j\omega t}\}=\operatorname {Re} \{I_{C}e^{j\omega t}\} 

Their quotient is

\displaystyle {\frac {v_{C}(t)}{i_{C}(t)}}={\frac {V_{0}\sin(\omega t)}{\omega V_{0}C\cos(\omega t)}}={\frac {\sin(\omega t)}{\omega C\sin \left(\omega t+{\frac {\pi }{2}}\right)}} 

So the magnitude of the capacitor's impedance is \displaystyle 1/\omega C ; the AC voltage lags the AC current by 90°, or equivalently, the AC current leads the AC voltage by 90°.

In phasor form,

\displaystyle V_{C}=V_{0}e^{j(-\pi /2)},\qquad V_{0}>0 
\displaystyle I_{C}=\omega V_{0}Ce^{j0} 
\displaystyle Z_{C}={\frac {e^{-j\pi /2}}{\omega C}} 

or, applying Euler's formula,

\displaystyle Z_{C}={\frac {1}{j\omega C}} 
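Both forms of Z_C say the same thing, as a short numeric check confirms (the frequency and capacitance are arbitrary example values):

```python
import cmath
import math

omega, C = 2 * math.pi * 50, 1e-6        # e.g. 50 Hz, 1 microfarad

Zc = 1 / (1j * omega * C)                # Z_C = 1/(j w C)
assert math.isclose(abs(Zc), 1 / (omega * C))          # magnitude 1/(w C)
assert math.isclose(cmath.phase(Zc), -math.pi / 2)     # phase -90 degrees

# I = V / Z_C: the current phasor leads the voltage phasor by 90 degrees.
V = 10.0 + 0j
I = V / Zc
assert math.isclose(cmath.phase(I) - cmath.phase(V), math.pi / 2)
```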

Inductor

The time-dependent current \displaystyle i_{L}(t) through an inductor and the time-dependent voltage \displaystyle v_{L}(t) across it are related by

\displaystyle v_{L}(t)=L{\frac {\operatorname {d} i_{L}(t)}{\operatorname {d} t}} 

Let the time-dependent current signal be

\displaystyle i_{L}(t)=I_{0}\cos(\omega t) 

Then the voltage is

\displaystyle v_{L}(t)=-\omega LI_{0}\sin(\omega t)=\omega LI_{0}\cos(\omega t+\pi /2) 

Their quotient is

\displaystyle {\frac {v_{L}(t)}{i_{L}(t)}}={\frac {\omega L\cos(\omega t+\pi /2)}{\cos(\omega t)}} 

So the magnitude of the inductor's impedance is \displaystyle \omega L ; the AC voltage leads the AC current by 90°, or equivalently, the AC current lags the AC voltage by 90°.

In phasor form,

\displaystyle I_{L}=I_{0}e^{j0},\qquad I_{0}>0 
\displaystyle V_{L}=\omega LI_{0}e^{j(\pi /2)} 
\displaystyle Z_{L}=\omega Le^{j\pi /2} 

or, applying Euler's formula,

\displaystyle Z_{L}=j\omega L 
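The same check works for the inductor (frequency and inductance are arbitrary example values):

```python
import cmath
import math

omega, L = 2 * math.pi * 60, 0.1         # e.g. 60 Hz, 0.1 henry

Zl = 1j * omega * L                      # Z_L = j w L
assert math.isclose(abs(Zl), omega * L)                # magnitude w L
assert math.isclose(cmath.phase(Zl), math.pi / 2)      # phase +90 degrees

# V = Z_L * I: the voltage phasor leads the current phasor by 90 degrees.
I = 2.0 + 0j
V = Zl * I
assert math.isclose(cmath.phase(V) - cmath.phase(I), math.pi / 2)
```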

Generalized s-plane impedance

Defining impedance with \displaystyle j\omega applies only to circuits driven by steady-state AC signals. If the concept of impedance is extended by replacing \displaystyle j\omega with the complex angular frequency \displaystyle s , it can be applied to circuits driven by arbitrary signals. A signal expressed in the time domain becomes, after a Laplace transform, a signal expressed in the frequency domain, written in terms of the complex angular frequency. In this more general notation, the impedances of the basic circuit elements are

Element	Impedance expression
Resistor	\displaystyle R
Capacitor	\displaystyle 1/sC
Inductor	\displaystyle sL

For DC circuits this simplifies to \displaystyle s=0 ; for steady sinusoidal AC signals, \displaystyle s=j\omega .
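The table can be written as three one-line functions of the complex frequency s; substituting s = jω recovers the steady-state impedances above (the component values are arbitrary):

```python
import math

def Z_R(s, R):
    """Impedance of a resistor: independent of s."""
    return complex(R)

def Z_C(s, C):
    """Impedance of a capacitor: 1/(s C)."""
    return 1 / (s * C)

def Z_L(s, L):
    """Impedance of an inductor: s L."""
    return s * L

s = 1j * 2 * math.pi * 50                # steady sinusoidal AC: s = j omega
assert Z_R(s, 10.0) == 10.0 + 0j
assert math.isclose(abs(Z_C(s, 1e-6)), 1 / (abs(s) * 1e-6))
assert math.isclose(abs(Z_L(s, 0.1)), abs(s) * 0.1)
```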

 

Now can you see the secret of the 'phasor'??

Phasor

In physics and engineering, a phasor (a portmanteau of phase vector[1][2]) is a complex number representing a sinusoidal function whose amplitude (A), angular frequency (ω), and initial phase (θ) are time-invariant. It is related to a more general concept called analytic representation,[3] which decomposes a sinusoid into the product of a complex constant and a factor that encapsulates the frequency and time dependence. The complex constant, which encapsulates amplitude and phase dependence, is known as a phasor, complex amplitude,[4][5] and (in older texts) sinor[6] or even complexor.[6]

A common situation in electrical networks is the existence of multiple sinusoids all with the same frequency, but different amplitudes and phases. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be factored into the product of a linear combination of phasors (known as phasor arithmetic) and the time/frequency dependent factor that they all have in common.

The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well.[6] An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) corresponds to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain.[7][8] The originator of the phasor transform was Charles Proteus Steinmetz working at General Electric in the late 19th century.[9][10]

Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform, which additionally can be used to (simultaneously) derive the transient response of an RLC circuit.[8][10] However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady state analysis is required.[10]

Definition

Fig 2. When function \displaystyle \scriptstyle A\cdot e^{i(\omega t+\theta )} is depicted in the complex plane, the vector formed by its imaginary and real parts rotates around the origin. Its magnitude is A, and it completes one cycle every 2π/ω seconds. θ is the angle it forms with the real axis at t = n•2π/ω, for integer values of n.

 

Euler’s formula indicates that sinusoids can be represented mathematically as the sum of two complex-valued functions:

\displaystyle A\cdot \cos(\omega t+\theta )=A\cdot {\frac {e^{i(\omega t+\theta )}+e^{-i(\omega t+\theta )}}{2}}, [a]

or as the real part of one of the functions:

\displaystyle {\begin{aligned}A\cdot \cos(\omega t+\theta )=\operatorname {Re} \{A\cdot e^{i(\omega t+\theta )}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\omega t}\}.\end{aligned}}

The function \displaystyle A\cdot e^{i(\omega t+\theta )} is called the analytic representation of \displaystyle A\cdot \cos(\omega t+\theta ) . Figure 2 depicts it as a rotating vector in a complex plane. It is sometimes convenient to refer to the entire function as a phasor,[11] as we do in the next section. But the term phasor usually implies just the static vector \displaystyle Ae^{i\theta } . An even more compact representation of a phasor is the angle notation:  \displaystyle A\angle \theta . See also vector notation.

Phasor arithmetic

Multiplication by a constant (scalar)

Multiplication of the phasor  \displaystyle Ae^{i\theta }e^{i\omega t} by a complex constant,   \displaystyle Be^{i\phi }  , produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

\displaystyle {\begin{aligned}\operatorname {Re} \{(Ae^{i\theta }\cdot Be^{i\phi })\cdot e^{i\omega t}\}&=\operatorname {Re} \{(ABe^{i(\theta +\phi )})\cdot e^{i\omega t}\}\\&=AB\cos(\omega t+(\theta +\phi ))\end{aligned}}

In electronics,  \displaystyle Be^{i\phi } would represent an impedance, which is independent of time. In particular it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid.

Differentiation and integration

The time derivative or integral of a phasor produces another phasor.[b] For example:

\displaystyle {\begin{aligned}\operatorname {Re} \left\{{\frac {d}{dt}}(Ae^{i\theta }\cdot e^{i\omega t})\right\}=\operatorname {Re} \{Ae^{i\theta }\cdot i\omega e^{i\omega t}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\pi /2}\omega e^{i\omega t}\}=\operatorname {Re} \{\omega Ae^{i(\theta +\pi /2)}\cdot e^{i\omega t}\}=\omega A\cdot \cos(\omega t+\theta +\pi /2)\end{aligned}}

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant \displaystyle i\omega =(e^{i\pi /2}\cdot \omega ) .

Similarly, integrating a phasor corresponds to multiplication by \displaystyle {\frac {1}{i\omega }}={\frac {e^{-i\pi /2}}{\omega }} . The time-dependent factor,  \displaystyle e^{i\omega t} , is unaffected.
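The multiply-by-iω rule can be checked against a finite-difference derivative of the real sinusoid (all constants below are arbitrary sample values):

```python
import cmath
import math

A, theta, omega = 3.0, 0.4, 2.0
P = A * cmath.exp(1j * theta)                    # the static phasor A e^{i theta}

def signal(t):
    """The real sinusoid represented by the phasor P."""
    return (P * cmath.exp(1j * omega * t)).real

t, h = 1.3, 1e-6
numeric = (signal(t + h) - signal(t - h)) / (2 * h)          # central difference
by_rule = (1j * omega * P * cmath.exp(1j * omega * t)).real  # multiply by i*omega
assert math.isclose(numeric, by_rule, rel_tol=1e-6)
```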

When we solve a linear differential equation with phasor arithmetic, we are merely factoring \displaystyle e^{i\omega t} out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

\displaystyle {\frac {d\ v_{C}(t)}{dt}}+{\frac {1}{RC}}v_{C}(t)={\frac {1}{RC}}v_{S}(t)

When the voltage source in this circuit is sinusoidal:

\displaystyle v_{S}(t)=V_{P}\cdot \cos(\omega t+\theta ),

we may substitute \displaystyle {\begin{aligned}v_{S}(t)&=\operatorname {Re} \{V_{s}\cdot e^{i\omega t}\}\\\end{aligned}}

\displaystyle v_{C}(t)=\operatorname {Re} \{V_{c}\cdot e^{i\omega t}\},

where phasor \displaystyle V_{s}=V_{P}e^{i\theta } , and phasor \displaystyle V_{c} is the unknown quantity to be determined.

In the phasor shorthand notation, the differential equation reduces to

\displaystyle i\omega V_{c}+{\frac {1}{RC}}V_{c}={\frac {1}{RC}}V_{s} 

Solving for the phasor capacitor voltage gives

\displaystyle V_{c}={\frac {1}{1+i\omega RC}}\cdot (V_{s})={\frac {1-i\omega RC}{1+(\omega RC)^{2}}}\cdot (V_{P}e^{i\theta })

As we have seen, the factor multiplying \displaystyle V_{s} represents differences of the amplitude and phase of \displaystyle v_{C}(t) relative to \displaystyle V_{P} and \displaystyle \theta .

In polar coordinate form, it is

\displaystyle {\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot e^{-i\phi (\omega )},{\text{ where }}\phi (\omega )=\arctan(\omega RC).

Therefore

\displaystyle v_{C}(t)={\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot V_{P}\cos(\omega t+\theta -\phi (\omega ))
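The whole derivation can be verified end to end: the phasor V_c satisfies the phasor form of the differential equation and reproduces the closed-form v_C(t) (the component and drive values below are arbitrary examples):

```python
import cmath
import math

R, C = 1.0e3, 1.0e-6                 # e.g. 1 kilo-ohm, 1 microfarad
omega = 2 * math.pi * 200            # e.g. 200 Hz drive
Vp, theta = 5.0, 0.3

Vs = Vp * cmath.exp(1j * theta)              # source phasor
Vc = Vs / (1 + 1j * omega * R * C)           # phasor solution from the text

# V_c satisfies the phasor version of the differential equation:
assert cmath.isclose(1j * omega * Vc + Vc / (R * C), Vs / (R * C))

# ... and reproduces the closed-form v_C(t):
t = 0.0123
phi = math.atan(omega * R * C)
closed = Vp / math.sqrt(1 + (omega * R * C) ** 2) * math.cos(omega * t + theta - phi)
assert math.isclose((Vc * cmath.exp(1j * omega * t)).real, closed)
```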

Addition

The sum of phasors as addition of rotating vectors

The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency:

\displaystyle {\begin{aligned}A_{1}\cos(\omega t+\theta _{1})+A_{2}\cos(\omega t+\theta _{2})&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}\}+\operatorname {Re} \{A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}+A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{1}e^{i\theta _{1}}+A_{2}e^{i\theta _{2}})e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{3}e^{i\theta _{3}})e^{i\omega t}\}\\[8pt]&=A_{3}\cos(\omega t+\theta _{3}),\end{aligned}}

where

\displaystyle A_{3}^{2}=(A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2})^{2}+(A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2})^{2},
\displaystyle \theta _{3}=\arctan \left({\frac {A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2}}{A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2}}}\right)

or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences):

\displaystyle A_{3}^{2}=A_{1}^{2}+A_{2}^{2}-2A_{1}A_{2}\cos(180^{\circ }-\Delta \theta )=A_{1}^{2}+A_{2}^{2}+2A_{1}A_{2}\cos(\Delta \theta ),

where \displaystyle \Delta \theta =\theta _{1}-\theta _{2} .
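Both expressions for A3, and the arctangent formula for θ3, can be verified numerically (the amplitudes and phases are arbitrary; atan2 is used so the quadrant is handled automatically):

```python
import cmath
import math

A1, th1 = 2.0, 0.5
A2, th2 = 3.0, -1.1

z3 = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)   # phasor sum

# Rectangular-component formula for A3^2:
A3_sq = ((A1 * math.cos(th1) + A2 * math.cos(th2)) ** 2
         + (A1 * math.sin(th1) + A2 * math.sin(th2)) ** 2)
assert math.isclose(abs(z3) ** 2, A3_sq)

# Law-of-cosines form with delta = th1 - th2:
delta = th1 - th2
assert math.isclose(A3_sq, A1 ** 2 + A2 ** 2 + 2 * A1 * A2 * math.cos(delta))

# theta3 from the arctangent formula matches the phase of the sum:
th3 = math.atan2(A1 * math.sin(th1) + A2 * math.sin(th2),
                 A1 * math.cos(th1) + A2 * math.cos(th2))
assert math.isclose(cmath.phase(z3), th3)
```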

A key point is that A3 and θ3 do not depend on ω or t, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written

\displaystyle A_{1}\angle \theta _{1}+A_{2}\angle \theta _{2}=A_{3}\angle \theta _{3}.

Another way to view addition is that two vectors with coordinates [A1 cos(ωt + θ1), A1 sin(ωt + θ1)] and [A2 cos(ωt + θ2), A2 sin(ωt + θ2)] are added vectorially to produce a resultant vector with coordinates [A3 cos(ωt + θ3), A3 sin(ωt + θ3)]. (see animation)

Phasor diagram of three waves in perfect destructive interference

In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: “What phase difference would be required between three identical sinusoids for perfect cancellation?” In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle between each phasor and the next is 120° (2π/3 radians), or one third of a wavelength λ/3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.

In other words, what this shows is that

\displaystyle \cos(\omega t)+\cos(\omega t+2\pi /3)+\cos(\omega t-2\pi /3)=0.
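This identity can be checked at many sample instants (taking ω = 1 for simplicity):

```python
import math

# Three equal-amplitude sinusoids spaced 120 degrees apart sum to zero
# at every instant (omega = 1 for simplicity).
for k in range(1000):
    t = 0.01 * k
    total = (math.cos(t)
             + math.cos(t + 2 * math.pi / 3)
             + math.cos(t - 2 * math.pi / 3))
    assert abs(total) < 1e-12
```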

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength \displaystyle \lambda . This is why in single slit diffraction, the minima occur when light from the far edge travels a full wavelength further than the light from the near edge.

As the single vector rotates in an anti-clockwise direction, its tip at point A rotates through one complete revolution of 360° or 2π radians, representing one complete cycle. If the length of its moving tip is transferred at different angular intervals in time to a graph as shown above, a sinusoidal waveform is drawn, starting at the left at zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, t = 0. When the vector is horizontal, its tip represents the angles at 0°, 180°, and 360°.

Likewise, when the tip of the vector is vertical it represents the positive peak value (+Amax) at 90° or π/2 and the negative peak value (−Amax) at 270° or 3π/2. The time axis of the waveform then represents the angle, in either degrees or radians, through which the phasor has moved. So we can say that a phasor represents a scaled voltage or current value of a rotating vector which is “frozen” at some point in time (t); in the example above, this is at an angle of 30°.

Sometimes, when analysing alternating waveforms, we may need to know the position of the phasor representing the alternating quantity at some particular instant in time, especially when we want to compare two different waveforms on the same axis, for example voltage and current. We have assumed in the waveform above that the waveform starts at time t = 0 with a corresponding phase angle in either degrees or radians.

But if a second waveform starts to the left or to the right of this zero point, or if we want to represent in phasor notation the relationship between the two waveforms, then we will need to take account of this phase difference, Φ, of the waveform. Consider the diagram below from the previous Phase Difference tutorial.