Education and Learning: Up《grade》【One】

What should we pick to build a PIXEL PC? After much deliberation we settled on the Up-board, a board similar to the Raspberry Pi 3B:

Welcome to the UP Board wiki!

The purpose of these wiki pages is to collect and share all useful technical information related to UP Board hardware, software, and its usage.

Feel free to contribute content and corrections which may be useful to others!

 

Following the Up Ubuntu installation steps, we first downloaded the Raspbian image. After booting it successfully, we got stuck at

sudo add-apt-repository ppa:ubilinux/up

and could go no further. Even after manually adding the repo

 

, the Up-board kernel still could not be installed, presumably due to Ubuntu package compatibility issues:

 

So, resigning ourselves to it, we tried compiling the kernel on our own:

Compile ubilinux kernel from source

How to compile a custom kernel for UP / UP Squared / Up Core

Build Environment Setup

The following instructions are based on an Ubuntu 16.04 host. gcc-4.9 is needed for the compilation of the upboard kernel.

Getting the source

 

Unfortunately, the default Debian Stretch repositories do not carry the kernel-package package, so we also had to add the

deb http://ftp.debian.org/debian stretch-backports main

repo to /etc/apt/sources.list!

Not to mention, where would we find gcc-4.9 and g++-4.9 on short notice? So we went ahead with the newer

gcc version 6.3.0 20170516 (Debian 6.3.0-18)

instead, and hoped for the best!?

After close to three hours of waiting, the □□.deb packages were finally produced:

 

Fortunately,

sudo dpkg -i *.deb

succeeded?!

※ Screenshots as proof

 

 

 

 

 

 

 

 

Education and Learning: Up《grade》# Prologue

Although it is said:

Plough in spring and weed in summer to catch the season,

harvest in autumn and store in winter to keep to its law.

The great Way moves as it will; through change it finds its way ◎

The same holds when this is applied to the 'tools' of 'education' ☆

Hence the Raspberry Pi's

PIXEL for PC and Mac

Our vision in establishing the Raspberry Pi Foundation was that everyone should be able to afford their own programmable general-purpose computer. The intention has always been that the Raspberry Pi should be a full-featured desktop computer at a $35 price point. In support of this, and in parallel with our hardware development efforts, we’ve made substantial investments in our software stack. These culminated in the launch of PIXEL in September 2016.

 

PIXEL represents our best guess as to what the majority of users are looking for in a desktop environment: a clean, modern user interface; a curated suite of productivity software and programming tools, both free and proprietary; and the Chromium web browser with useful plugins, including Adobe Flash, preinstalled. And all of this is built on top of Debian, providing instant access to thousands of free applications.

Put simply, it’s the GNU/Linux we would want to use.

The PIXEL desktop on Raspberry Pi

 

Back in the summer, we asked ourselves one simple question: if we like PIXEL so much, why ask people to buy Raspberry Pi hardware in order to run it? There is a massive installed base of PC and Mac hardware out there, which can run x86 Debian just fine. Could we do something for the owners of those machines?

So, after three months of hard work from Simon and Serge, we have a Christmas treat for you: an experimental version of Debian+PIXEL for x86 platforms. Simply download the image, burn it onto a DVD or flash it onto a USB stick, and boot straight into the familiar PIXEL desktop environment on your PC or Mac. Or go out and buy this month’s issue of The MagPi magazine, in stores tomorrow, which has this rather stylish bootable DVD on the cover.

Our first ever covermount

 

You’ll find all the applications you’re used to, with the exception of Minecraft and Wolfram Mathematica (we don’t have a licence to put those on any machine that’s not a Raspberry Pi). Because we’re using the venerable i386 architecture variant it should run even on vintage machines like my ThinkPad X40, provided they have at least 512MB of RAM.

 

Clearly there is no intention here of drawing a 'Chu-Han boundary', a hard dividing line, between the two camps!

 

 

 

 

 

 

 

 

【鼎革‧革鼎】: Raspbian Stretch《Part Six, K.3 - Speech Interface - 7.2G》

Living in a world of 'noise' and 'interference', even a simple 'greeting':

 

will keep company with 'statistics' and 'uncertainty'!?

That is why, before the '【鼎革‧革鼎】…' chapters come to a close, we make a special point of the 'importance' of 'knowing only after learning', something perhaps even a 'machine' cannot escape?!

……

We can appreciate why we need additional intelligence in our systems — heuristics don’t go very far in the world of complex audio signals. We’ll be using scikit-learn’s implementation of the k-NN algorithm for our work here. It proves to be a straightforward and easy-to-use implementation. The steps and skills of working with one classifier will scale nicely to working with other, more complex classifiers.
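To make the workflow above concrete, here is a minimal sketch of training and testing scikit-learn's k-NN classifier. The feature matrix is synthetic stand-in data; in a real pipeline each row would hold features extracted from one audio clip (for example mean MFCCs), and the two class labels are purely illustrative.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in features: 50 "class 0" clips and 50 "class 1" clips,
# each described by a 13-dimensional feature vector (e.g. mean MFCCs).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 13)),
               rng.normal(3.0, 1.0, (50, 13))])
y = np.array([0] * 50 + [1] * 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

The same fit/score interface carries over directly to the more complex classifiers the passage mentions.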

 

We wonder whether inspiration might be drawn from the 'experience' of 'rewriting' this Python 2

 

A Python library for audio feature extraction, classification, segmentation and applications

This doc contains general info. Click [here](https://github.com/tyiannak/pyAudioAnalysis/wiki) for the complete wiki.

General

pyAudioAnalysis is a Python library covering a wide range of audio analysis tasks. Through pyAudioAnalysis you can:

  • Extract audio features and representations (e.g. mfccs, spectrogram, chromagram)
  • Classify unknown sounds
  • Train, parameter tune and evaluate classifiers of audio segments
  • Detect audio events and exclude silence periods from long recordings
  • Perform supervised segmentation (joint segmentation – classification)
  • Perform unsupervised segmentation (e.g. speaker diarization)
  • Extract audio thumbnails
  • Train and use audio regression models (example application: emotion recognition)
  • Apply dimensionality reduction to visualize audio data and content similarities

 

library for Python 3 ◎ (a minimal usage sketch follows)
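As a rough sense of what calling the library looks like, here is a minimal sketch assuming the Python 2-era pyAudioAnalysis API (audioBasicIO.readAudioFile and audioFeatureExtraction.stFeatureExtraction); later releases renamed these modules and functions, and sample.wav is a hypothetical input file.

from pyAudioAnalysis import audioBasicIO, audioFeatureExtraction

# Read a mono WAVE file: Fs is the sampling rate, x the raw samples.
Fs, x = audioBasicIO.readAudioFile("sample.wav")

# Short-term features over 50 ms windows with 25 ms steps; F is a
# (num_features x num_frames) matrix (zero-crossing rate, energy, MFCCs, chroma, ...).
F = audioFeatureExtraction.stFeatureExtraction(x, Fs, 0.050 * Fs, 0.025 * Fs)
print(F.shape)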

 

 

 

 

 

 

 

 

【鼎革‧革鼎】: Raspbian Stretch《Part Six, K.3 - Speech Interface - 7.2F》

Climbing high to gaze into the distance and looking back on the road travelled, perhaps the direction ahead becomes clearer!?

 

Consider this case of asking the Raspberry Pi to 'tuck Mount Tai under its arm and leap over the North Sea', that is, to attempt the near impossible:

Project DeepSpeech

Project DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques, based on Baidu’s Deep Speech research paper. Project DeepSpeech uses Google’s TensorFlow project to make the implementation easier.

 

Pre-built binaries that can be used for performing inference with a trained model can be installed with pip. Proper setup using virtual environment is recommended and you can find that documented below.

A pre-trained English model is available for use, and can be downloaded using the instructions below.

Once everything is installed you can then use the deepspeech binary to do speech-to-text on short, approximately 5 second, audio files (currently only WAVE files with 16-bit, 16 kHz, mono are supported in the Python client):

Alternatively, quicker inference (the realtime factor on a GeForce GTX 1070 is about 0.44) can be performed using a supported NVIDIA GPU on Linux. (See the release notes to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

See the output of deepspeech -h for more information on the use of deepspeech. (If you experience problems running deepspeech, please check required runtime dependencies).
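Pulling the quoted instructions together, the sketch below shows the shape of a transcription call through the Python package. It assumes a DeepSpeech release whose Model constructor takes only the model path (early 0.x releases required extra arguments), and both file names are placeholders.

import wave
import numpy as np
from deepspeech import Model

# Load a pre-trained English model (placeholder file name).
ds = Model("output_graph.pbmm")

# Read a short 16-bit, 16 kHz, mono WAVE file into an int16 array.
with wave.open("hello.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(ds.stt(audio))  # the recognized text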

 

In the hope of being spared a search for it 'among the crowd, a thousand times over'?!

 

 

 

 

 

 

 

 

 

【鼎革‧革鼎】: Raspbian Stretch《Part Six, K.3 - Speech Interface - 7.2E》

Why would anyone be interested in analyzing 'noise'? Because it is very nearly 'everywhere'! Let us hear what JULIUS O. SMITH III has to say:

SPECTRAL AUDIO SIGNAL PROCESSING

JULIUS O. SMITH III
Center for Computer Research in Music and Acoustics (CCRMA)

……

Why Analyze Noise?

An example application of noise spectral analysis is denoising, in which noise is to be removed from some recording. On magnetic tape, for example, “tape hiss” is well modeled mathematically as a noise process. If we know the noise level in each frequency band (its power level), we can construct time-varying band gains to suppress the noise when it is audible. That is, the gain in each band is close to 1 when the music is louder than the noise, and close to 0 when the noise is louder than the music. Since tape hiss is well modeled as stationary (constant in nature over time), we can estimate the noise level during periods of “silence” on the tape.

Another application of noise spectral analysis is spectral modeling synthesis (the subject of §10.4). In this sound modeling technique, sinusoidal peaks are measured and removed from each frame of a short-time Fourier transform (sequence of FFTs over time). The remaining signal energy, whatever it may be, is defined as “noise” and resynthesized using white noise through a filter determined by the upper spectral envelope of the “noise floor”.

───
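The 'time-varying band gains' idea above can be illustrated with a short-time Fourier transform and a crude per-bin gate against a noise floor estimated from a 'silent' stretch. This is only a toy illustration of the concept, not Smith's method; every threshold and signal here is arbitrary.

import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(2 * fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 440 * t) * (t > 1.0)       # a tone in the 2nd second only
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Estimate the per-band noise floor from the "silent" first second,
# then keep (gain 1) only the bins that rise well above it.
f, times, Z = stft(noisy, fs=fs, nperseg=512)
noise_floor = np.abs(Z[:, times < 1.0]).mean(axis=1, keepdims=True)
gain = (np.abs(Z) > 3 * noise_floor).astype(float)
_, denoised = istft(Z * gain, fs=fs, nperseg=512)

print("noise-only power before/after:", np.var(noisy[:fs]), np.var(denoised[:fs]))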

 

If we knew the □ ○ 'noise characteristics', might we be able to improve the 'signal-to-noise ratio'?!

Signal-to-noise ratio

Signal-to-noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise.

S/N ratio is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.

While SNR is commonly quoted for electrical signals, it can be applied to any form of signal (such as isotope levels in an ice core or biochemical signaling between cells or financial trading signals).

The signal-to-noise ratio, the bandwidth, and the channel capacity of a communication channel are connected by the Shannon–Hartley theorem.

Signal-to-noise ratio is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as “noise” that interferes with the “signal” of appropriate discussion.[1]
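As a concrete check of the definition just quoted, SNR in decibels is ten times the base-10 logarithm of the power ratio; the numbers below are made up.

import numpy as np

signal_power, noise_power = 4.0, 1.0                 # made-up power values
snr_db = 10 * np.log10(signal_power / noise_power)
print(round(snr_db, 2), "dB")                        # ~6.02 dB: signal has 4x the noise power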

 

If we draw the analogy with that 'white noise', is this not where voice activity detection

Voice activity detection

Voice activity detection (VAD), also known as speech activity detection or speech detection, is a technique used in speech processing in which the presence or absence of human speech is detected.[1] The main uses of VAD are in speech coding and speech recognition. It can facilitate speech processing, and can also be used to deactivate some processes during non-speech section of an audio session: it can avoid unnecessary coding/transmission of silence packets in Voice over Internet Protocol applications, saving on computation and on network bandwidth.

VAD is an important enabling technology for a variety of speech-based applications. Therefore, various VAD algorithms have been developed that provide varying features and compromises between latency, sensitivity, accuracy and computational cost. Some VAD algorithms also provide further analysis, for example whether the speech is voiced, unvoiced or sustained. Voice activity detection is usually language independent.

It was first investigated for use on time-assignment speech interpolation (TASI) systems.[2]

 

derives its 'theoretical threshold'!?
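The crudest form of such a threshold is an energy gate: flag a frame as speech when its short-term energy rises well above an estimated noise floor. The sketch below is only illustrative; real VADs use far more robust features and adaptive thresholds.

import numpy as np

def energy_vad(x, frame_len=400, threshold_ratio=3.0):
    """Return one boolean per frame: True where speech-like energy is detected."""
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    noise_floor = np.percentile(energy, 10)     # assume the quietest frames are noise
    return energy > threshold_ratio * noise_floor

rng = np.random.default_rng(1)
x = rng.normal(0, 0.01, 16000)                                        # 1 s of "noise" at 16 kHz
x[8000:12000] += np.sin(2 * np.pi * 200 * np.arange(4000) / 16000)    # a "voiced" burst
print(energy_vad(x).astype(int))               # frames 20-29 should be flagged active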

But then, how could every 'unwanted' signal simply be treated as 'noise'??

Signal-to-interference-plus-noise ratio

In information theory and telecommunication engineering, the signal-to-interference-plus-noise ratio (SINR[1]) (also known as the signal-to-noise-plus-interference ratio (SNIR)[2]) is a quantity used to give theoretical upper bounds on channel capacity (or the rate of information transfer) in wireless communication systems such as networks. Analogous to the SNR used often in wired communications systems, the SINR is defined as the power of a certain signal of interest divided by the sum of the interference power (from all the other interfering signals) and the power of some background noise. If the power of noise term is zero, then the SINR reduces to the signal-to-interference ratio (SIR). Conversely, zero interference reduces the SINR to the signal-to-noise ratio (SNR), which is used less often when developing mathematical models of wireless networks such as cellular networks.[3]

The complexity and randomness of certain types of wireless networks and signal propagation has motivated the use of stochastic geometry models in order to model the SINR, particularly for cellular or mobile phone networks.[4]

Description

SINR is commonly used in wireless communication as a way to measure the quality of wireless connections. Typically, the energy of a signal fades with distance, which is referred to as a path loss in wireless networks. Conversely, in wired networks the existence of a wired path between the sender or transmitter and the receiver determines the correct reception of data. In a wireless network one has to take other factors into account (e.g. the background noise, the interfering strength of other simultaneous transmissions). The concept of SINR attempts to create a representation of this aspect.

Mathematical definition

SINR is usually defined for a particular receiver (or user). In particular, for a receiver located at some point x in space (usually, on the plane), its corresponding SINR is given by

  \mathrm{SINR}(x) = \frac{P}{I + N}

where P is the power of the incoming signal of interest, I is the interference power of the other (interfering) signals in the network, and N is some noise term, which may be a constant or random. Like other ratios in electronic engineering and related fields, the SINR is often expressed in decibels or dB.
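A tiny numeric check of this definition and its two limiting cases (SIR when N is zero, SNR when I is zero), using made-up powers:

def sinr(P, I, N):
    """SINR = P / (I + N), per the definition above."""
    return P / (I + N)

P = 2.0                            # made-up power of the signal of interest
print(sinr(P, I=0.5, N=0.5))       # general case: 2.0
print(sinr(P, I=0.5, N=0.0))       # N = 0 -> reduces to SIR = 4.0
print(sinr(P, I=0.0, N=0.5))       # I = 0 -> reduces to SNR = 4.0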

 

At this very moment, 'speech technology' is striving to grow beyond its

Babbling

Babbling is a stage in children's language development. During this period of language acquisition, infants appear to be experimenting with producing sounds with their mouths, but are not yet able to produce any recognizable words. Babbling begins shortly after birth and progresses through several stages as the infant's repertoire of vocal behaviour expands and the vocalizations become more and more speech-like.[1] Infants typically begin to produce recognizable words around 12 months of age, though babbling may continue for some time after that.[2] Babbling can be seen as a precursor to language development or simply as vocal experimentation. Physical development is also involved in babbling, and it continues to mature through the child's first year.[3] This ongoing physical development accounts for some of the changes in ability that allow infants to produce an ever-wider variety of sounds. Abnormal development, such as certain health conditions, developmental delays, and hearing impairments, may hinder a child's ability to babble normally. Although some still dispute whether language is an ability unique to humans, babbling is not exclusive to the human species.[4]

 

stage ☆

 

 

 

 

 

 

 

輕。鬆。學。部落客