STEM Notes: Classical Mechanics: Rotor [V] — Circuit Theory, Part 4 [Capacitance] IV · Laplace · C

Why did Julius O. Smith III devote an entire chapter,

Sinusoids and Exponentials

to discussing the 'sinusoid'?

 

And why hoist high the banner of the 'generalized phasor'?

Generalized Complex Sinusoids

 

Because with it, many commonly used transforms can be subsumed under one roof!?

Importance of Generalized Complex Sinusoids

 

Thus one comes to appreciate the importance of the Hilbert transform ◎

Hilbert transform

In mathematics and in signal processing, the Hilbert transform is a linear operator that takes a function u(t) of a real variable and produces another function of a real variable, H(u)(t).

The Hilbert transform is important in signal processing, where it derives the analytic representation of a signal u(t). This means that the real signal u(t) is extended into the complex plane such that it satisfies the Cauchy–Riemann equations. For example, the Hilbert transform leads to the harmonic conjugate of a given function in Fourier analysis, aka harmonic analysis. Equivalently, it is an example of a singular integral operator and of a Fourier multiplier.

The Hilbert transform was originally defined for periodic functions, or equivalently for functions on the circle, in which case it is given by convolution with the Hilbert kernel. More commonly, however, the Hilbert transform refers to a convolution with the Cauchy kernel, for functions defined on the real line R (the boundary of the upper half-plane). The Hilbert transform is closely related to the Paley–Wiener theorem, another result relating holomorphic functions in the upper half-plane and Fourier transforms of functions on the real line.

The Hilbert transform is named after David Hilbert, who first introduced the operator to solve a special case of the Riemann–Hilbert problem for holomorphic functions.

The Hilbert transform, in red, of a square wave, in blue

……

Hilbert transform in signal processing

Bedrosian’s theorem

Bedrosian’s theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or

\[ H\bigl(f_{LP}(t)\,f_{HP}(t)\bigr)=f_{LP}(t)\,H\bigl(f_{HP}(t)\bigr) \]

where \(f_{LP}\) and \(f_{HP}\) are the low- and high-pass signals respectively (Schreier & Scharf 2010, 14).

Amplitude modulated signals are modeled as the product of a bandlimited "message" waveform, \(u_m(t)\), and a sinusoidal "carrier":

\[ u(t)=u_m(t)\cdot \cos(\omega t+\phi) \]

When \(u_m(t)\) has no frequency content above the carrier frequency \(\frac{\omega}{2\pi}\) Hz, then by Bedrosian's theorem:

\[ H(u)(t)=u_m(t)\cdot \sin(\omega t+\phi) \qquad \text{(Bedrosian 1962)} \]
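As a quick numerical sanity check (not part of the cited article), the theorem can be verified with scipy.signal.hilbert, which returns the analytic signal \(u+iH(u)\); the 5 Hz message and 100 Hz carrier below are arbitrary choices that keep the spectra non-overlapping:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 2, 1 / fs)          # 2 seconds of samples
m = np.cos(2 * np.pi * 5 * t)        # low-pass "message" at 5 Hz
carrier_hz = 100.0                   # carrier well above the message band
u = m * np.cos(2 * np.pi * carrier_hz * t)

# scipy.signal.hilbert returns the analytic signal u + i*H(u),
# so its imaginary part is the Hilbert transform of u.
Hu = np.imag(hilbert(u))
expected = m * np.sin(2 * np.pi * carrier_hz * t)

err = np.max(np.abs(Hu - expected))
print(err)   # tiny here, since both tones fall exactly on FFT bins
```

Because the two tones are exactly periodic over the sample window, the FFT-based discrete Hilbert transform matches Bedrosian's prediction to machine precision.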

Analytic representation

In the context of signal processing, the conjugate function interpretation of the Hilbert transform, discussed above, gives the analytic representation of a signal u(t):

\[ u_a(t)=u(t)+i\cdot H(u)(t) \]

which is a holomorphic function in the upper half plane.

For the narrowband model (above), the analytic representation is:

\[
\begin{aligned}
u_a(t) &= u_m(t)\cdot \cos(\omega t+\phi)+i\cdot u_m(t)\cdot \sin(\omega t+\phi)\\
&= u_m(t)\cdot \left[\cos(\omega t+\phi)+i\cdot \sin(\omega t+\phi)\right]\\
&= u_m(t)\cdot e^{i(\omega t+\phi)} \quad \text{(by Euler's formula)} \qquad \text{(Eq.1)}
\end{aligned}
\]

This complex heterodyne operation shifts all the frequency components of \(u_m(t)\) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.
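A small numerical illustration of that frequency shift (the test-signal frequencies are arbitrary): the analytic signal computed by scipy.signal.hilbert has essentially no negative-frequency content, which is exactly the "shifted above 0 Hz" property described here:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
u = np.cos(2 * np.pi * 50 * t) + 0.5 * np.cos(2 * np.pi * 120 * t)

ua = hilbert(u)                      # analytic signal u + i*H(u)
U = np.fft.fft(ua)

pos = np.abs(U[1:len(U) // 2])       # positive-frequency bins
neg = np.abs(U[len(U) // 2 + 1:])    # negative-frequency bins
print(pos.max(), neg.max())          # all the energy sits at positive frequencies
```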

─── 《【鼎革‧革鼎】︰ RASPBIAN STRETCH 《六之 J.3‧MIR-11 》

 

Strand upon strand the spider's web is woven; it guards the center, waiting for its moment.

One strand plucked sets several strands trembling; before this wave arrives, the next one surges.

 

'Spatial' translation

Shift operator

In mathematics, and in particular functional analysis, the shift operator, also known as the translation operator, is an operator that takes a function x ↦ f(x) to its translation x ↦ f(x + a).[1] In time series analysis, the shift operator is called the lag operator.

Shift operators are examples of linear operators, important for their simplicity and natural occurrence. The shift operator action on functions of a real variable plays an important role in harmonic analysis, for example, it appears in the definitions of almost periodic functions, positive definite functions, and convolution.[2] Shifts of sequences (functions of an integer variable) appear in diverse areas such as Hardy spaces, the theory of abelian varieties, and the theory of symbolic dynamics, for which the baker’s map is an explicit representation.

Definition

Functions of a real variable

The shift operator \(T^t\) (\(t \in \mathbb{R}\)) takes a function f on \(\mathbb{R}\) to its translation \(f_t\),

\[ T^t f(x)=f_t(x)=f(x+t)~. \]

A practical representation of the linear operator \(T^t\) in terms of the plain derivative \(\frac{d}{dx}\) was introduced by Lagrange,

\[ T^t = e^{t\frac{d}{dx}}~, \]

which may be interpreted operationally through its formal Taylor expansion in t; its action on the monomial \(x^n\) is evident from the binomial theorem, and hence it extends to all power series in x, and so to all functions f(x) as above.[3] This, then, is a formal encoding of the Taylor expansion.
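Since the Taylor series of a polynomial terminates, Lagrange's formula can be checked symbolically; a minimal sketch with sympy, using an arbitrary cubic as the test function:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = x**3 - 2*x + 1                   # an arbitrary cubic; its Taylor series terminates

# Apply T^t = exp(t d/dx) as the formal series  sum_k t^k/k! * d^k f/dx^k.
shifted = sum(t**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(4))

# The series reproduces the translated function f(x + t) exactly.
print(sp.simplify(shifted - f.subs(x, x + t)))  # → 0
```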

 

All quite natural!

But once 'time' enters the discussion, troubles arise?

Anticausal system

An anticausal system is a hypothetical system with outputs and internal states that depend solely on future input values. Some textbooks[1] and published research literature might define an anticausal system to be one that does not depend on past input values, allowing also for the dependence on present input values.

An acausal system is a system that is not a causal system, that is one that depends on some future input values and possibly on some input values from the past or present. This is in contrast to a causal system which depends only on current and/or past input values.[2] This is often a topic of control theory and digital signal processing (DSP).

Anticausal systems are also acausal, but the converse is not always true. An acausal system that has any dependence on past input values is not anticausal.

An example of acausal signal processing is the production of an output signal that is processed from another input signal that is recorded by looking at input values both forward and backward in time from a predefined time arbitrarily denoted as the “present” time. (In reality, that “present” time input, as well as the “future” time input values, have been recorded at some time in the past, but conceptually it can be called the “present” or “future” input values in this acausal process.) This type of processing cannot be done in real time as future input values are not yet known, but is done after the input signal has been recorded and is post-processed.

Digital room correction in some sound reproduction systems relies on acausal filters.
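Zero-phase filtering is a concrete, everyday instance of such acausal post-processing: scipy.signal.filtfilt runs an IIR filter forward and then backward over a recorded signal, so each output sample depends on future as well as past inputs. A sketch, with a made-up two-tone signal and an arbitrary 20 Hz cutoff:

```python
import numpy as np
from scipy import signal

fs = 500.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)
clean = np.sin(2 * np.pi * 5 * t)            # the component we want to keep

b, a = signal.butter(4, 20, btype='low', fs=fs)   # 20 Hz low-pass

causal = signal.lfilter(b, a, x)     # realizable: past/present inputs only
acausal = signal.filtfilt(b, a, x)   # offline: forward pass, then backward pass

# Zero-phase (acausal) filtering tracks the 5 Hz component closely; the
# causal output lags it by the filter's group delay. Compare away from edges:
err_causal = np.abs(causal[200:-200] - clean[200:-200]).max()
err_acausal = np.abs(acausal[200:-200] - clean[200:-200]).max()
print(err_acausal, err_causal)
```

The forward-backward pass cancels the phase response, which is why this cannot be done in real time: the backward pass needs the whole recording first.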

 

Let us borrow an encyclopedia entry to tell that story

Causal filter

In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time \(t\) comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function.

An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal.

Example

Each component of the causal filter output begins when its stimulus begins. The outputs of the non-causal filter begin before the stimulus begins.

The following definition is a moving (or "sliding") average of input data \(s(x)\). A constant factor of 1/2 is omitted for simplicity:

\[ f(x)=\int_{x-1}^{x+1}s(\tau)\,d\tau = \int_{-1}^{+1}s(x+\tau)\,d\tau \]

where x could represent a spatial coordinate, as in image processing. But if \(x\) represents time \((t)\), then a moving average defined that way is non-causal (also called non-realizable), because \(f(t)\) depends on future inputs, such as \(s(t+1)\). A realizable output is

\[ f(t-1)=\int_{-2}^{0}s(t+\tau)\,d\tau = \int_{0}^{+2}s(t-\tau)\,d\tau \]

which is a delayed version of the non-realizable output.
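In discrete time the same point can be made with numpy's convolution modes; in this sketch the centered ('same') moving average is non-causal, while the 'full' convolution truncated to the input length is its causal, delayed twin:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(50)          # arbitrary test input
k = np.ones(3) / 3.0                 # 3-point averaging kernel

# 'same' mode centers the kernel: output[n] uses s[n+1], so it is non-causal.
noncausal = np.convolve(s, k, mode='same')

# Keeping the first len(s) samples of the full convolution gives the causal
# filter: output[n] uses only s[n], s[n-1], s[n-2].
causal = np.convolve(s, k, mode='full')[:len(s)]

# causal[n] equals noncausal[n-1]: the realizable output is a delayed copy.
print(np.allclose(causal[1:], noncausal[:-1]))  # True
```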

Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution

\[ f(t)=(h*s)(t)=\int_{-\infty}^{\infty}h(\tau)\,s(t-\tau)\,d\tau. \]

In those terms, causality requires

\[ f(t)=\int_{0}^{\infty}h(\tau)\,s(t-\tau)\,d\tau \]

and general equality of these two expressions requires h(t) = 0 for all t < 0.

Characterization of causal filters in the frequency domain

Let h(t) be a causal filter with corresponding Fourier transform H(ω). Define the function

\[ g(t)=\frac{h(t)+h^{*}(-t)}{2} \]

which is non-causal. On the other hand, g(t) is Hermitian and, consequently, its Fourier transform G(ω) is real-valued. We now have the following relation

\[ h(t)=2\,\Theta(t)\cdot g(t) \]

where Θ(t) is the Heaviside unit step function.

This means that the Fourier transforms of h(t) and g(t) are related as follows

\[ H(\omega)=\left(\delta(\omega)-\frac{i}{\pi\omega}\right)*G(\omega)=G(\omega)-i\,\widehat{G}(\omega) \]

where \(\widehat{G}(\omega)\) is a Hilbert transform done in the frequency domain (rather than the time domain). The sign of \(\widehat{G}(\omega)\) may depend on the definition of the Fourier transform.

Taking the Hilbert transform of the above equation yields this relation between “H” and its Hilbert transform:

\[ \widehat{H}(\omega)=iH(\omega) \]
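The time-domain decomposition used above, \(g(t)=\frac{h(t)+h^{*}(-t)}{2}\) with \(h(t)=2\,\Theta(t)\,g(t)\), is easy to verify numerically; a sketch on a sampled exponential impulse response (the choice \(h(t)=e^{-t}\,\Theta(t)\) is arbitrary):

```python
import numpy as np

# Sample a causal impulse response h(t) = e^{-t} Θ(t) on a grid symmetric about 0.
t = np.arange(-500, 501) * 0.01      # from -5 to 5, including t = 0 exactly
h = np.where(t >= 0, np.exp(-t), 0.0)

# g(t) = (h(t) + conj(h(-t))) / 2 is Hermitian but non-causal.
g = (h + np.conj(h[::-1])) / 2       # reversing the array samples h(-t)

theta = (t > 0).astype(float)        # Heaviside step
theta[t == 0] = 0.5                  # the convention Θ(0) = 1/2 handles the jump

print(np.allclose(2 * theta * g, h))  # True
```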

 

For the full details, read this article ☆

causality