【鼎革‧革鼎】: Raspbian Stretch 《Part Six, J.3 · MIR-13.5…》

How might one emulate an ancient musical instrument? If we use sampling rates far above the CD's 44.1 kHz, would capturing the "waveform" of each note, played many times with different durations and dynamics, be enough?

Wavetable synthesis

Wavetable synthesis is a sound synthesis technique that employs arbitrary periodic waveforms in the production of musical tones or notes. The technique was developed by Wolfgang Palm of Palm Products GmbH (PPG) in the late 1970s [1] and published in 1979,[2] and has since been used as the primary synthesis method in synthesizers built by PPG and Waldorf Music and as an auxiliary synthesis method by Ensoniq and Access. It is currently used in software-based synthesizers for PCs and tablets, including apps offered by PPG and Waldorf, among others.

It was also independently developed in a similar time frame by Michael McNabb, who used it in his 1978 composition Dreamsong.[3][4]

Principle

Wavetable synthesis is fundamentally based on periodic reproduction of an arbitrary, single-cycle waveform.[5] In wavetable synthesis, some method is employed to vary or modulate the selected waveform in the wavetable. The position in the wavetable selects the single cycle waveform. Digital interpolation between adjacent waveforms allows for dynamic and smooth changes of the timbre of the tone produced. Sweeping the wavetable in either direction can be controlled in a number of ways, for example, by use of an LFO, envelope, pressure or velocity.
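The principle above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not production code: the table holds only two single-cycle waveforms (a naive, unbandlimited sine and sawtooth), the `sweep` function stands in for an LFO or envelope, interpolation between samples within a cycle is omitted for brevity, and all names (`render`, `sweep`, `TABLE_SIZE`) are invented for this sketch.

```python
import math

TABLE_SIZE = 256          # samples per single-cycle waveform
SAMPLE_RATE = 44100

# Two single-cycle waveforms stored in the wavetable: a sine and a
# (naive, unbandlimited) sawtooth.
sine = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]
saw = [2 * i / TABLE_SIZE - 1 for i in range(TABLE_SIZE)]
wavetable = [sine, saw]

def render(freq, seconds, sweep):
    """Play `freq` Hz for `seconds`, with `sweep(t)` in [0, 1] selecting
    the table position; linearly interpolate between adjacent waveforms."""
    out, phase = [], 0.0
    n = int(seconds * SAMPLE_RATE)
    for k in range(n):
        pos = sweep(k / n) * (len(wavetable) - 1)   # position in the table
        lo = min(int(pos), len(wavetable) - 2)
        frac = pos - lo
        idx = int(phase) % TABLE_SIZE
        a, b = wavetable[lo][idx], wavetable[lo + 1][idx]
        out.append((1 - frac) * a + frac * b)       # crossfade between waves
        phase += freq * TABLE_SIZE / SAMPLE_RATE
    return out

# Sweep from pure sine to pure saw over one second; an LFO, envelope,
# pressure or velocity signal would normally drive `sweep`.
samples = render(440.0, 1.0, sweep=lambda t: t)
```

Because the interpolation is a crossfade between adjacent stored cycles, the timbre changes smoothly rather than stepping from one waveform to the next.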

Many wavetables used in PPG and Ensoniq synthesizers can simulate the methods used by analog synthesizers, such as pulse-width modulation, by utilising a number of square waves of different duty cycles. In this way, when the wavetable is swept, the duty cycle of the pulse wave will appear to change over time. As the early Ensoniq wavetable synthesizers had non-resonant filters (the PPG Wave synthesizers used analogue Curtis resonant filters), some wavetables contained highly resonant waveforms to overcome this limitation of the filters.
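The PWM trick just described can be sketched concretely: fill a table with single-cycle square waves whose duty cycles rise steadily, so that sweeping the table position mimics analog pulse-width modulation. This is a toy sketch; the names (`square`, `pwm_table`, `duty_at`) are invented here.

```python
TABLE_SIZE = 256   # samples per single-cycle waveform
NUM_WAVES = 9      # square waves at duty cycles 10%, 20%, ..., 90%

def square(duty):
    """One cycle of a square wave with the given duty cycle."""
    return [1.0 if i / TABLE_SIZE < duty else -1.0 for i in range(TABLE_SIZE)]

pwm_table = [square(0.1 + 0.1 * w) for w in range(NUM_WAVES)]

def duty_at(position):
    """Measured fraction of a cycle spent high at a given table position."""
    return sum(1 for s in pwm_table[position] if s > 0) / TABLE_SIZE

# Stepping through the table positions, the apparent pulse width widens,
# just as it would under analog PWM.
widths = [duty_at(p) for p in range(NUM_WAVES)]
```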

Confusion with sample-based synthesis (S&S) and Digital Wave Synthesis

In 1992, with the introduction of the Creative Labs Sound Blaster 16, the term "wavetable" started to be applied, incorrectly, as a marketing term to sound cards. However, these cards did not employ any form of wavetable synthesis, but rather PCM sample playback and FM synthesis. S&S (Sample and Synthesis) and digital wave synthesis were the main methods of sound synthesis used by digital synthesizers from the mid-1980s, with instruments such as the Sequential Circuits Prophet VS, Korg DW6000/8000 (DW standing for Digital Wave), Roland D-50 and Korg M1, through to current synthesizers.

 

By the principle of the "additivity of sound waves", surely "perfect synthesis" should be possible today!
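That "additivity" is the basis of additive synthesis: in principle, any periodic tone can be built by summing sine partials. A minimal Python sketch of the idea, with invented names (`additive`, `amplitudes`):

```python
import math

SAMPLE_RATE = 44100

def additive(freq, amplitudes, seconds):
    """Sum sine partials: amplitudes[k] scales the (k+1)-th harmonic of freq."""
    n = int(seconds * SAMPLE_RATE)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        out.append(sum(a * math.sin(2 * math.pi * (k + 1) * freq * t)
                       for k, a in enumerate(amplitudes)))
    return out

# First four harmonics with 1/k amplitudes, a sawtooth-like spectrum.
tone = additive(220.0, [1.0, 0.5, 1 / 3, 0.25], 0.1)
```

A static sum like this cannot capture how a real instrument's partials swell and decay independently over time, which is one reason the question that follows is still hard.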

Why, then, does it still prove so elusive??

Synthesizer

A sound synthesizer (often simply "synth" or "synthesizer") is an electronic musical instrument that generates electrical signals, which are converted into sound through instrument amplifiers and loudspeakers or headphones. A synthesizer can either imitate existing sounds (instruments such as the piano, electronic organ and flute, the human voice, or natural sounds such as ocean waves) or generate new electronic timbres. Synthesizers are usually controlled with a musical keyboard. A synthesizer without a built-in controller is commonly called a sound module, and is controlled via MIDI or CV/Gate from a controller device, often a MIDI keyboard or another controller.

A synthesizer can use oscillators, filters, effects units and the like to freely reshape a sound's timbre and thereby simulate all kinds of sounds.

There is also the arpeggiator (ARP), used to choose the rhythm and pitches through which a selected timbre runs on its own.

Synthesizers produce sound in three ways:

  1. By directly varying a voltage (as in analog synthesizers).
  2. By mathematical computation on a computer (as in software synthesizers).
  3. By a combination of the two.

In the end, a voltage signal makes the diaphragm of a loudspeaker or pair of headphones vibrate. The sound a synthesizer produces differs from the natural sound captured by recording equipment: recording converts the mechanical energy of sound waves into a signal, and playback converts that signal back into mechanical energy (although sampling introduces some distortion).
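The second way (computation) and the remark about sampling distortion can be illustrated together: a miniature software synthesizer computes a sine wave numerically, and a crude quantizer rounds each sample to a fixed number of bits, as a converter would. This is a sketch with invented names (`synth_sine`, `quantize`), using deliberately low settings so the rounding error is visible.

```python
import math

SAMPLE_RATE = 8000   # deliberately low, for illustration
BITS = 8             # deliberately coarse, for illustration

def synth_sine(freq, seconds):
    """A software synthesizer in miniature: compute each sample numerically."""
    n = int(seconds * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def quantize(samples, bits):
    """Round samples to `bits`-bit levels; this rounding is one source of
    the distortion that sampling introduces."""
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in samples]

clean = synth_sine(440.0, 0.01)
rough = quantize(clean, BITS)
error = max(abs(a - b) for a, b in zip(clean, rough))   # worst-case error
```

With 8-bit quantization the worst-case error per sample is half a step, 1/256 of full scale; adding bits shrinks it geometrically, which is why the later excerpt can claim an inaudible noise floor for software instruments.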

Synthesizers are usually operated through a keyboard and are therefore often treated as keyboard instruments. A synthesizer's control interface need not be a keyboard, however; alternatives include fingerboard controllers, wind controllers and ribbon controllers, and a synthesizer need not be controlled by a person at all. (See sound module.)
Synthesizers were first used in popular music in the 1960s. In the 1970s, especially the late 1970s, disco made heavy use of them. In the 1980s, the relatively inexpensive Yamaha DX7 made digital synthesizers far more accessible, and the pop and dance music of that decade often relied on them heavily. In the 2010s, synthesizers are used across many genres, including pop, rock and dance music. Contemporary classical composers of the 21st century also write with synthesizers, and today's band keyboardists mostly play them.

The Trautonium synthesizer, 1928

 

Although the physical "synthesis models and methods" involved inevitably contain "nonlinearities"!!

Physical modelling synthesis

Physical modelling synthesis refers to sound synthesis methods in which the waveform of the sound to be generated is computed using a mathematical model, a set of equations and algorithms to simulate a physical source of sound, usually a musical instrument.

General methodology

Modelling attempts to replicate laws of physics that govern sound production, and will typically have several parameters, some of which are constants that describe the physical materials and dimensions of the instrument, while others are time-dependent functions describing the player’s interaction with the instrument, such as plucking a string, or covering toneholes.

For example, to model the sound of a drum, there would be a mathematical model of how striking the drumhead injects energy into a two-dimensional membrane. Incorporating this, a larger model would simulate the properties of the membrane (mass density, stiffness, etc.), its coupling with the resonance of the cylindrical body of the drum, and the conditions at its boundaries (a rigid termination to the drum’s body), describing its movement over time and thus its generation of sound.

Similar stages to be modelled can be found in instruments such as a violin, though the energy excitation in this case is provided by the slip-stick behavior of the bow against the string, the width of the bow, the resonance and damping behavior of the strings, the transfer of string vibrations through the bridge, and finally, the resonance of the soundboard in response to those vibrations.

In addition, the same concept has been applied to simulate voice and speech sounds.[1] In this case, the synthesizer includes mathematical models of the vocal fold oscillation and associated laryngeal airflow, and the consequent acoustic wave propagation along the vocal tract. Further, it may also contain an articulatory model to control the vocal tract shape in terms of the position of the lips, tongue and other organs.

Although physical modelling was not a new concept in acoustics and synthesis, having been implemented using finite difference approximations of the wave equation by Hiller and Ruiz in 1971, it was not until the development of the Karplus-Strong algorithm, the subsequent refinement and generalization of the algorithm into the extremely efficient digital waveguide synthesis by Julius O. Smith III and others, and the increase in DSP power in the late 1980s[2] that commercial implementations became feasible.
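The Karplus-Strong algorithm mentioned above is famously simple: a burst of noise circulates in a delay line whose length sets the pitch, and a two-point averaging (lowpass) filter in the loop makes the high partials die away first, like a plucked string. A minimal sketch (the function name and `decay` parameter are choices made here, not a canonical API):

```python
import random

SAMPLE_RATE = 44100

def karplus_strong(freq, seconds, decay=0.996):
    """Plucked-string tone: noise burst + delay line + averaging filter."""
    period = int(SAMPLE_RATE / freq)                      # delay sets pitch
    rng = random.Random(0)                                # fixed seed: repeatable
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)] # the "pluck"
    out = []
    for _ in range(int(seconds * SAMPLE_RATE)):
        first = buf.pop(0)
        out.append(first)
        # Average two neighbours (lowpass) and feed back, slightly damped.
        buf.append(decay * 0.5 * (first + buf[0]))
    return out

tone = karplus_strong(220.0, 0.5)
```

The averaging filter is what makes this a (crude) physical model rather than a plain oscillator: it imitates how a string's frequency-dependent losses shape the decay. Digital waveguide synthesis generalizes exactly this delay-line-plus-filter structure.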

Yamaha contracted with Stanford University in 1989[3] to jointly develop digital waveguide synthesis; subsequently, most patents related to the technology are owned by Stanford or Yamaha.

The first commercially available physical modelling synthesizer made using waveguide synthesis was the Yamaha VL1 in 1994.[4]

While the efficiency of digital waveguide synthesis made physical modelling feasible on common DSP hardware and native processors, the convincing emulation of physical instruments often requires the introduction of non-linear elements, scattering junctions, etc. In these cases, digital waveguides are often combined with FDTD,[5] finite element or wave digital filter methods, increasing the computational demands of the model.[6]

 

Why not begin reading Mr. Julius O. Smith III's

PHYSICAL AUDIO SIGNAL PROCESSING FOR VIRTUAL MUSICAL INSTRUMENTS AND AUDIO EFFECTS

JULIUS O. SMITH III
Center for Computer Research in Music and Acoustics (CCRMA)

 

and taste it for yourself☆☆

Introduction to Physical Signal Models

This book is about techniques for building real-time computational physical models of musical instruments and audio effects. So, why would anyone want to do this, and what exactly is a “computational physical model”?

There are several reasons one might prefer a computational model in place of its real-world counterpart:

  • A virtual musical instrument (or audio effect) is typically much less expensive than the corresponding real instrument (effect). Consider, for example, the relative expense of a piano versus its simulation in software. (We discuss real-time piano modeling in §9.4.)
  • Different instruments can share common controllers (such as keyboards, wheels, pedals, etc.). Any number of virtual instruments and/or effects can be quickly loaded as “presets”.
  • Sound quality (“signal to noise ratio”) can far exceed what is possible with a recording. This is because we can use any number of bits per sample, rendering the “noise floor” completely inaudible at all times.
  • Software implementations are exactly repeatable. They never need to be “tuned” or “calibrated” like real-world devices.
  • It is useful to be able to “archive” and periodically revive rare or obsolete devices in virtual form.
  • The future evolution of virtual devices is less constrained than that of real devices.