【鼎革‧革鼎】︰ Raspbian Stretch 《六之 J.3‧MIR-13.6 》

All of nature is a garden for scientific observation and a treasure house of technical inspiration. Echoes can be used to locate:

Animal echolocation

Echolocation, also called bio sonar, is the biological sonar used by several kinds of animals. Echolocating animals emit calls out to the environment and listen to the echoes of those calls that return from various objects near them. They use these echoes to locate and identify the objects. Echolocation is used for navigation and for foraging (or hunting) in various environments. Some blind humans have learned to find their way using clicks produced by a device or by mouth.

Echolocating animals include some mammals and a few birds, most notably microchiropteran bats and odontocetes (toothed whales and dolphins); echolocation also occurs in simpler form in other groups such as shrews, one genus of megachiropteran bats (Rousettus) and two cave-dwelling bird groups, the so-called cave swiftlets in the genus Aerodramus (formerly Collocalia) and the unrelated oilbird Steatornis caripensis.[1]

A depiction of the ultrasound signals emitted by a bat, and the echo from a nearby object.

 

Dance can convey meaning:

Bee dance (waggle dance)

The bee dance (English: waggle dance, the bee's figure-eight waggle dance) is a term for the particular figure-eight dance performed by honey bees. By performing this dance, a successful forager can share with the other members of the colony information about the direction and distance of flowers yielding nectar and pollen, of water sources, or of new nest-site locations.[1][2] Its meaning was deciphered through the 1940s research of the Austrian biologist and Nobel laureate Karl von Frisch: after returning to the hive from collecting nectar, a worker bee performs two special kinds of movement.[3] His study subject was a western honey bee, the Carniolan bee. When a worker returns to the hive, other workers face her and gather around her, as if watching her dance. After years of controversy following the discovery's publication, it was eventually accepted by most biologists and has become classic material on animal behaviour in modern biology textbooks.


The angle between the direction of the flowers and the direction of the sun equals the angle (α) between the waggle run and the direction of gravity.

Types of dance

Waggle dance

A dancing bee's path traces a figure eight. The outer looping portions are called the return phase; the central straight portion is called the waggle phase, from which the waggle dance takes its name. The bee waggles its abdomen as it walks along this straight run. The duration of the waggle indicates the distance to the food: the longer the waggle, the farther the food, at roughly 75 milliseconds per 100 metres. The angle between this straight run and the direction of gravity represents the angle between the direction of the food and the direction of the sun. It was later found that bees also correct the angle of the run as the sun's relative position moves.
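The paragraph above amounts to a simple decoding rule. As a hedged illustration in Python (the function, its argument conventions, and the units are ours for illustration; only the 75 ms-per-100 m rate and the angle correspondence come from the text):

def decode_waggle(waggle_ms, angle_from_vertical_deg, sun_azimuth_deg):
    """Decode one waggle run into (distance_m, food_azimuth_deg).

    waggle_ms               -- duration of the straight waggle run, in ms
    angle_from_vertical_deg -- angle of the run from "up" on the vertical comb,
                               positive meaning to the right of vertical
    sun_azimuth_deg         -- current compass azimuth of the sun, in degrees
    """
    # Distance: roughly 75 ms of waggling per 100 m to the food source.
    distance_m = waggle_ms / 75.0 * 100.0
    # Direction: the run's angle from vertical equals the angle between
    # the food's direction and the sun's direction.
    food_azimuth_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360.0
    return distance_m, food_azimuth_deg

# A 750 ms run, 45 degrees right of vertical, with the sun at azimuth 180:
# food is about 1000 m away, at azimuth 225.
print(decode_waggle(750.0, 45.0, 180.0))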

Round dance

The round dance, at first classified as a separate dance, is used by workers to signal that food exists near the hive, without conveying its distance or direction; it is typically used for food found at close range (less than 50-60 metres). Later research, however, concluded that the round dance is not a distinct dance but a version of the waggle dance whose straight portion is extremely brief.

Evolution of waggle dance communication

Through observation, scientists have found that different species of honey bee have different dance "languages": the arc and timing of the dance vary with each species or subspecies.[4][5] A recent study showed that in areas where the eastern honey bee and the western honey bee live together, the two can gradually come to understand the "language" of each other's dances.[6]


The figure-eight waggle dance of the western honey bee. A waggle run oriented 45° to the right of "up" on the vertical comb (diagram A) indicates a food source 45° (angle α) to the right of the sun outside the hive (diagram B). The dancing bee's abdomen appears slightly blurred because of its rapid side-to-side motion.

─── 《神經網絡【超參數評估】五》

 

Can one who takes nature as teacher understand the speech of birds and butterflies? One comes to see that all things depend on one another! That bees recognize flowers is no surprise, but how do they manage it? A great question indeed! Living as they do in a world of sound and colour, the senses of sight and hearing naturally serve life, and they are well worth emulating and learning from.

Shall we then grasp the purport of "sonified" information?!

Sonification

Sonification is the use of non-speech audio to convey information or perceptualize data.[1] Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.

For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
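As a concrete, entirely illustrative sketch of the same idea in Python: map each value of a data series onto a pitch and render one short sine tone per value. The function name, the frequency range, and the output file are our assumptions, not anything from the article:

import numpy as np
from scipy.io import wavfile

def sonify(data, sr=44100, tone_s=0.2, f_lo=220.0, f_hi=880.0):
    """Parameter-mapping sonification: one short sine tone per data value,
    with pitch scaled linearly between f_lo and f_hi Hz."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    t = np.linspace(0.0, tone_s, int(sr * tone_s), endpoint=False)
    tones = [np.sin(2 * np.pi * (f_lo + n * (f_hi - f_lo)) * t) for n in norm]
    return np.concatenate(tones)

# Rising data is heard as rising pitch, much as a Geiger counter's
# click rate is heard as the level of radiation.
signal = sonify([1, 2, 4, 8, 16, 8, 4, 2, 1])
wavfile.write("sonified.wav", 44100, (signal * 32767).astype(np.int16))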

Though many experiments with data sonification have been explored in forums such as the International Community for Auditory Display (ICAD), sonification faces many challenges to widespread use for presenting and analyzing data. For example, studies show it is difficult, but essential, to provide adequate context for interpreting sonifications of data.[1][2] Many sonification attempts are coded from scratch due to the lack of a flexible tool for sonification research and data exploration.[3]

History

The Geiger counter, invented in 1908, is one of the earliest and most successful applications of sonification. A Geiger counter has a tube of low-pressure gas; each particle detected produces a pulse of current when it ionizes the gas, producing an audio click. The original version was only capable of detecting alpha particles. In 1928, Geiger and Walther Müller (a PhD student of Geiger) improved the counter so that it could detect more types of ionizing radiation.

In 1913, Dr. Edmund Fournier d’Albe of the University of Birmingham invented the optophone, which used selenium photosensors to detect black print and convert it into an audible output.[4] A blind reader could hold a book up to the device and hold an apparatus to the area she wanted to read. The optophone played a set group of notes: g c’ d’ e’ g’ b’ c” e”. Each note corresponded with a position on the optophone’s reading area, and that note was silenced if black ink was sensed. Thus, the missing notes indicated the positions where black ink was on the page and could be used to read.
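To make the mechanism concrete, here is a hedged Python reconstruction (not the historical device's actual behaviour): each of the eight notes is tied to one vertical position, ink silences its note, and the listener reads the gaps in the chord. The frequencies are standard equal-temperament values for g c' d' e' g' b' c'' e''; everything else is illustrative:

# Equal-temperament frequencies (Hz) for the optophone's notes g c' d' e' g' b' c'' e''.
NOTE_HZ = [196.00, 261.63, 293.66, 329.63, 392.00, 493.88, 523.25, 659.26]

def sounded_notes(column_is_ink):
    """column_is_ink: eight booleans, True where black ink is sensed at that
    vertical position. Ink silences its note; the rest of the chord sounds."""
    return [hz for hz, ink in zip(NOTE_HZ, column_is_ink) if not ink]

# A blank column sounds the full chord; a column crossing a letter stroke
# drops the notes at the inked positions.
print(sounded_notes([False] * 8))
print(sounded_notes([False, True, True, False, False, True, False, False]))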

Pollack and Ficks published the first perceptual experiments on the transmission of information via auditory display in 1954.[5] They experimented with combining sound dimensions such as timing, frequency, loudness, duration, and spatialization and found that they could get subjects to register changes in multiple dimensions at once. These experiments did not get into much more detail than that, since each dimension had only two possible values.

John M. Chambers, Max Mathews, and F.R. Moore at Bell Laboratories did the earliest work on auditory graphing in their “Auditory Data Inspection” technical memorandum in 1974.[6] They augmented a scatterplot using sounds that varied along frequency, spectral content, and amplitude modulation dimensions to use in classification. They did not do any formal assessment of the effectiveness of these experiments.[7]

In the 1980s, pulse oximeters came into widespread use. Pulse oximeters can sonify oxygen concentration of blood by emitting higher pitches for higher concentrations. However, in practice this particular feature of pulse oximeters may not be widely utilized by medical professionals because of the risk of too many audio stimuli in medical environments.[8]

In 1992, the International Community for Auditory Display (ICAD) was founded by Gregory Kramer as a forum for research on auditory display which includes data sonification. ICAD has since become a home for researchers from many different disciplines interested in the use of sound to convey information through its conference and peer-reviewed proceedings.[9]

 

Or would you like to get to the bottom of it!?

SONIPY

an extensive open-source Python framework
for data sonification research and auditory display

Welcome to the SoniPy website!

Data sonification component processes

What is SoniPy?

SoniPy is an open-source project for the collection, creation and integration of open-source Python modules for data sonification.

What is data sonification?

The general meaning of the term data sonification is the acoustic representation of data for relational interpretation by listeners, for the purpose of increasing their knowledge of the source from which the data was acquired.

What is information sonification?

The term data sonification does not fully describe what occurs in practice, namely that it is often data relations that are sonified rather than the data itself. Relations are abstractions of, or from, the data, and sonifying them more explicitly implies the sonifier's intent.

When the distinction is not important the term sonification is used without qualification.

Sonification is an interdisciplinary activity, with its roots in music composition, perceptual psychology, acoustics and psychoacoustics, so it is not surprising that a diverse range of research and production tools is required.

The aim

Finding, testing and integrating components that can work together is a time-consuming and somewhat frustrating task. The SoniPy project collates public-domain Python modules suitable for the purpose and integrates them in a modular framework. When a suitable tool is not available, the task is to make one.

In doing so, SoniPy aims to provide a community resource both for software developers who want to contribute to its development and for sonifiers who don’t have the time, skill or inclination to independently assemble such a coherently functioning toolset.

 

At least this explains

In this exercise notebook, we will segment, feature extract, and analyze audio files; a minimal sketch of these steps follows the list. Goals:

  1. Detect onsets in an audio signal.
  2. Segment the audio signal at each onset.
  3. Compute features for each segment.
  4. Gain intuition into the features by listening to each segment separately.
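A hedged sketch of those four steps in Python, assuming librosa is available; example.wav is a placeholder filename, and the particular features shown (zero-crossing rate, spectral centroid) are our illustrative picks:

import numpy as np
import librosa

# Load the audio (replace example.wav with your own file).
y, sr = librosa.load("example.wav")

# 1. Detect onsets, as sample indices into y.
onset_samples = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

# 2. Segment the signal at each onset.
segments = np.split(y, onset_samples)

# 3. Compute features for each segment.
for i, seg in enumerate(segments):
    if len(seg) == 0:
        continue
    zcr = librosa.feature.zero_crossing_rate(seg).mean()
    centroid = librosa.feature.spectral_centroid(y=seg, sr=sr).mean()
    print(f"segment {i}: {len(seg)/sr:.2f} s  zcr={zcr:.3f}  centroid={centroid:.0f} Hz")

# 4. Listen to individual segments to build intuition (in a Jupyter notebook):
# import IPython.display as ipd
# ipd.Audio(segments[1], rate=sr)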

 

the "deep meaning" of the words in the title!!