If one says a 'perceptron network' is merely a 'linear classifier'
Linear classifier
In the field of machine learning, the goal of statistical classification is to use an object’s characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object’s characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.[1]
Definition
If the input feature vector to the classifier is a real vector x, then the output score is

y = f(w · x) = f(Σ_j w_j x_j),

where w is a real vector of weights and f is a function that converts the dot product of the two vectors into the desired output. (In other words, w is a one-form or linear functional mapping x onto R.) The weight vector w is learned from a set of labeled training samples. Often f is a simple function that maps all values above a certain threshold to the first class and all other values to the second class. A more complex f might give the probability that an item belongs to a certain class.
For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when x is sparse. Also, linear classifiers often work very well when the number of dimensions in x is large, as in document classification, where each element in x is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized.
[Figure] In this case, the solid and empty dots can be correctly classified by any number of linear classifiers. H1 (blue) classifies them correctly, as does H2 (red). H2 could be considered "better" in the sense that it is also farthest from both groups. H3 (green) fails to correctly classify the dots.
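The definition above can be sketched in a few lines of plain Python. This is a toy illustration, not a library implementation: the weight vector and the word-count feature vectors are invented values, and f is a simple threshold at zero.

```python
# Toy linear classifier: score an input by the dot product w.x, then
# let f threshold the score to pick one of two classes.  Weights here
# are invented for illustration, not learned from data.

def dot(w, x):
    """Dot product of two equal-length vectors."""
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(w, x, threshold=0.0):
    """f maps scores above the threshold to 'yes', all others to 'no'."""
    return "yes" if dot(w, x) > threshold else "no"

# Word-count feature vectors, as in document classification:
w = [1.0, -1.0, 0.5]      # hypothetical weights for three vocabulary words
x_a = [3, 0, 2]           # score = 3*1.0 + 0*(-1.0) + 2*0.5 = 4.0
x_b = [0, 4, 1]           # score = 0*1.0 + 4*(-1.0) + 1*0.5 = -3.5

print(classify(w, x_a))   # -> yes
print(classify(w, x_b))   # -> no
```

A learned w would come from training on labeled samples, for example by the perceptron rule or logistic regression; only the scoring step is shown here.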
───
If that were all, would people not be greatly disappointed? 'Recognition' is such an intelligent act! How could it possibly be mere 'classification'?? Surely that 'perceptron model' is too simplistic to reflect 'reality'!!
Let us begin with 'affairs gather by kind, and things divide into groups' (方以類聚,物以群分) from the Xici I of the Book of Changes:
Book of Changes, Xici Part I (The Great Treatise), Chapter 1
Heaven is honoured and earth is humble; thus Qian and Kun are determined. The low and the high are set forth; thus the noble and the mean have their places. Movement and rest have their constants; thus the firm and the yielding are distinguished. Affairs gather by kind and things divide into groups; thus good fortune and misfortune arise. In heaven images form, on earth shapes form; thus change and transformation appear. Things are stirred by thunder and lightning and moistened by wind and rain; sun and moon run their courses, cold and heat alternate. The way of Qian forms the male; the way of Kun forms the female. Qian presides over the great beginning; Kun brings things to completion. Qian knows through the easy; Kun is capable through the simple. What is easy is easy to know; what is simple is easy to follow. What is easy to know wins affinity; what is easy to follow wins accomplishment. Affinity allows endurance; accomplishment allows greatness. Endurance is the virtue of the worthy; greatness is the enterprise of the worthy. By the easy and the simple, the principles of all under heaven are obtained, and with those principles obtained, one takes one's place in their midst.
An ideal conceptual model exists to make complex phenomena easier to understand, to make possible scenarios convenient to simulate, and to cut away useless computation!!
In reality, every circuit element is more or less nonlinear and can usually be linearized only over some limited range. A typical component may also combine several effects; a capacitor, for instance, has some resistance as well. How, then, does circuit theory describe the physical components of an actual circuit? Building a family of ideal elements as raw material for expressing the endlessly varied real ones is indeed a fitting approach: it lets a few constructs govern many cases, and it promotes understanding. Circuit theory therefore uses four state variables, current i, voltage v, charge q, and magnetic flux φ, to construct the 'definitions' of its ideal elements. In physics, the response of a material to an external stimulus is commonly described by a 'constitutive relation', generally built from two physical quantities. Seen this way, the four state variables can define six kinds of relations:
【Current source】 i = i(t), a prescribed current, independent of the voltage across it
【Voltage source】 v = v(t), a prescribed voltage, independent of the current through it
【Resistor】 f(v, i) = 0; in the linear case, v = R i
【Capacitor】 f(q, v) = 0; in the linear case, q = C v
【Inductor】 f(φ, i) = 0; in the linear case, φ = L i
【Memristor】 f(φ, q) = 0
Here each f denotes the functional relation between the pair of physical quantities involved.
For a long time, circuit theory recognized only five of these six basic relations. In 1971, Professor Leon Chua (蔡少棠) of the University of California, Berkeley, predicted on this basis that beyond the resistor, the capacitor and the inductor there should exist a fourth fundamental circuit element: the memristor.
A 'memristor', also called a 'memory resistor', is a passive electronic element. Like a resistor, a memristor sustains a definite relation between the current through a device and the voltage across it. Unlike a resistor, however, a memristor can 'remember', even after the power is switched off, the amount of charge that previously flowed through it, that is, its current history; when the power is switched on again, its resistance takes the last remembered value. Because such behaviour would seem to conflict with the laws of non-equilibrium thermodynamics, the memristor's very existence has remained a matter of 'controversy'. In a memristor, the magnetic flux φ is governed by the accumulated charge q. The rate at which the flux changes with charge is called the memristance, expressed as

M(q) = dφ/dq .
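The 'memory' in the memristance relation can be illustrated numerically. The sketch below assumes a simple linear model M(q) = R_OFF - (R_OFF - R_ON)·q/Q_MAX; the model form and every parameter value are illustrative assumptions, not measured device data.

```python
# Toy memristor: its resistance M depends on the total charge q that has
# ever flowed through it, via M(q) = dphi/dq.  We assume a simple linear
# memristance M(q) = R_OFF - (R_OFF - R_ON) * q / Q_MAX; the model form
# and all parameter values are invented for illustration.

R_ON, R_OFF, Q_MAX = 100.0, 16000.0, 1e-2   # ohms, ohms, coulombs (invented)

def memristance(q):
    """Resistance as a function of accumulated charge, clamped to [0, Q_MAX]."""
    q = min(max(q, 0.0), Q_MAX)
    return R_OFF - (R_OFF - R_ON) * q / Q_MAX

def drive(q, current, seconds, dt=1e-3):
    """Integrate dq/dt = i for a constant current; return the new charge."""
    for _ in range(int(seconds / dt)):
        q += current * dt
    return q

q = 0.0
r_before = memristance(q)                  # pristine device: 16000 ohms
q = drive(q, current=1e-3, seconds=5.0)    # pass 1 mA for 5 s
r_after = memristance(q)                   # resistance drops to about 8050 ohms
# Power off, power on: q is unchanged, so the device "remembers" r_after.
print(r_before, r_after)
```

Switching the current direction would push q back down and raise the resistance again, which is the mechanism resistive memories exploit.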
Later, in 2000, researchers discovered electric-field-induced resistance switching in thin films of various binary metal oxides and of perovskite-structured materials, and applied it to a next-generation non-volatile memory, resistive RAM (RRAM, also called ReRAM). In April 2008, HP announced a TiO₂-based resistive memory.
It is said that some experts do not accept this as a 'real' memristor!!
─── Excerpted from 《【Sonic π】電聲學之電路學《一》上》
Suppose we ask someone to 'tell apart' what is what in the image below?
That may be quite easy! But suppose we ask them to describe 'why this looks like that'?? It will likely prove very difficult!! And suppose someone wanted to 'define' what an image of '4' looks like:
Who knows whether that can even be done!!?? Say, for instance, that these images all belong to the class '4'. Relying on fuzzy 'similarity', one can always declare: split a '4' down the middle, and on the left there is a 'hook', on the right a 'vertical stroke' that meets the 'hook' at the 'bottom'……
Then how are the 'attributes' of such a 'definition' to decide which class the images below belong to?
Could it be that, since 'four and nine' are the Metal of the West, 'yin' and 'yang' become hard to distinguish??!!
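The brittleness of such attribute 'definitions' can be made concrete. In the toy sketch below, a hand-coded rule for '4' checks exact stroke positions on a 5×5 bitmap; the bitmaps and the rule are invented for illustration. Shifting the same shape one pixel defeats the rule, even though most pixels still agree.

```python
# Why a hand-written "definition" of the digit 4 is brittle: a rule that
# checks exact stroke positions fails as soon as the same shape shifts
# by one pixel.  The 5x5 bitmaps and the rule are invented examples.

FOUR = ["#.#..",
        "#.#..",
        "#####",
        "..#..",
        "..#.."]

# The same shape, circularly shifted one pixel to the right:
FOUR_SHIFTED = [row[-1] + row[:-1] for row in FOUR]

def rule_based_is_four(img):
    """A literal 'definition': a vertical stroke in column 2 crossed by a full bar in row 2."""
    return all(row[2] == "#" for row in img) and img[2] == "#####"

def similarity(a, b):
    """Fraction of pixels on which two images agree: a fuzzy comparison."""
    pairs = [(pa, pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(pa == pb for pa, pb in pairs) / len(pairs)

print(rule_based_is_four(FOUR))          # True: the rule fires on the original
print(rule_based_is_four(FOUR_SHIFTED))  # False: one pixel of shift breaks it
print(similarity(FOUR, FOUR_SHIFTED))    # 0.52: yet over half the pixels agree
```

A learned classifier sidesteps the brittle rule by scoring fuzzy similarity against many training examples instead of testing fixed attributes.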
Thus Michael Nielsen, too, in the opening section of his final chapter, could not avoid arguing the point:
……
The main part of the chapter is an introduction to one of the most widely used types of deep network: deep convolutional networks. We’ll work through a detailed example – code and all – of using convolutional nets to solve the problem of classifying handwritten digits from the MNIST data set:
───
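The core operation of the convolutional networks Nielsen introduces can be sketched without any framework: slide a small kernel over the image and take a dot product at each offset. The 5×5 image and the vertical-edge kernel below are made-up examples, not data from the book.

```python
# A convolutional layer's basic operation: slide a small kernel across
# the image and record a dot product at each position.  The image and
# the vertical-edge kernel are invented for illustration.

def convolve2d(image, kernel):
    """'Valid' 2D cross-correlation of image with a square kernel."""
    k = len(kernel)
    rows = len(image) - k + 1
    cols = len(image[0]) - k + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(k) for j in range(k))
             for c in range(cols)]
            for r in range(rows)]

VERTICAL_EDGE = [[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]]

# Bright two-pixel stripe on the left, dark on the right:
image = [[1, 1, 0, 0, 0] for _ in range(5)]

feature_map = convolve2d(image, VERTICAL_EDGE)
print(feature_map)   # responds near the bright/dark boundary, 0 in the flat region
```

In a real convolutional net the kernel entries are learned rather than hand-chosen, and many such feature maps are stacked and followed by nonlinearities and pooling; Nielsen's chapter builds exactly that for MNIST.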
Perhaps what is truly surprising is that an 'artificial neural network' can be 'trained' to 'learn' to 'classify' handwritten Arabic numerals this well!!!