Rock It 《ML》JupyterLab 【丁】Code《七》Semantics【四】III‧中

What is 'knowledge'? For a long time the mainstream of Western thought has held a theory known as 'JTB', the 'justified true belief' account, which defines 'knowledge' and 'knowing' as follows:

The JTB account of knowledge is the claim that knowledge can be conceptually analyzed as justified true belief — which is to say that the meaning of sentences such as “Smith knows that it rained today” can be given with the following set of necessary and sufficient conditions:

A subject S knows that a proposition P is true if and only if:

P is true, and
S believes that P is true, and
S is justified in believing that P is true


Under this definition, to 'know □' is to have 'knowledge of □'. Because 'knowledge' is built from 'true propositions', the first condition is required. If some person A has merely 'heard of' □ but does not believe it, or even takes it to be 'false', we can hardly say that he knows □. And if A does believe □ but without any 'grounds', say he picked it up from 'who knows where', then he cannot 'defend the claim' once it is 'disputed', so again we cannot say that he truly knows □. By this account the definition would seem complete, yet the American philosopher Edmund Gettier refuted it: even when all three conditions are satisfied, there are cases in which we still cannot claim that 'A knows □'. This is the famous 'Gettier problem'. By now there are so many Gettier problems that they form a whole 'problem set'; here is a classic one, 'The Cow in the Field':

A farmer is worried that his prize cow has wandered off. The milkman arrives at the farm and tells him not to worry: he has seen the cow in a nearby field. Though the farmer trusts the milkman, he looks for himself, sees a familiar black-and-white shape, and is satisfied that the cow is there. A little later the milkman passes the field to double-check. The cow is indeed there, but it is now hidden in a grove of trees, and a large sheet of black-and-white paper is caught in a tree in the field. Evidently the farmer mistook the paper for his cow. So the question arises: since the cow was in the field all along, was the farmer right when he said he knew the cow was in the field?

─── 《基因改寫 ── THUE 改寫系統之補充《二》》
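The JTB schema and the cow story lend themselves to a tiny executable sketch. A minimal sketch, assuming hypothetical names of my own (`Belief`, `jtb_knowledge`); it only mirrors the three conditions, and the farmer's case shows why passing all three checks can still fall short of knowledge:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str
    is_true: bool    # condition 1: P is true
    believed: bool   # condition 2: S believes that P is true
    justified: bool  # condition 3: S is justified in believing that P is true

def jtb_knowledge(b: Belief) -> bool:
    """JTB account: S knows P iff all three conditions hold."""
    return b.is_true and b.believed and b.justified

# The farmer's belief satisfies every JTB condition...
cow = Belief("the cow is in the field", is_true=True, believed=True, justified=True)
print(jtb_knowledge(cow))  # True -- yet Gettier would deny that this is knowledge
```

The point of the sketch is exactly its inadequacy: the conjunction of the three booleans is satisfied by the paper-on-the-tree scenario, which is what the Gettier problem exploits.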


If a person believes Fermat's claim that x^n + y^n = z^n has no integer solutions when n \ge 3, would he bother to write a program that tests triples (a, b, c) for a^n + b^n == c^n with n \ge 3?

When Fermat read the Arithmetica of Diophantus, the father of algebra, did he really discover a marvelous proof that

x^n + y^n = z^n, \quad n \ge 3

has no 'integer solutions'? And if someone reads the abstruse 'proof' by Andrew John Wiles and his student Richard Taylor, might he blaze another trail and find an explanation that is easier to understand? 'Simple' questions are often 'not easy' to answer! To say nothing of the depths of mathematical logic, which leave one in awe!

─── excerpted from 《萬象在說話︰簡單問不容易答》
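The test program contemplated in the quote can be written in a few lines. A sketch under my own assumptions: the function name `search_counterexamples`, the search bound, and the chosen exponents are all illustrative, not anything from Fermat or Wiles:

```python
def search_counterexamples(max_val, exponents=(3, 4, 5)):
    """Brute-force hunt for positive integers a <= b < c with a**n + b**n == c**n."""
    hits = []
    for n in exponents:
        # map n-th powers back to their base for O(1) lookup;
        # 2 * max_val safely covers any c with c**n <= 2 * max_val**n
        powers = {c ** n: c for c in range(1, 2 * max_val)}
        for a in range(1, max_val + 1):
            for b in range(a, max_val + 1):
                total = a ** n + b ** n
                if total in powers:
                    hits.append((a, b, powers[total], n))
    return hits

# Fermat (and Wiles) say this list stays empty, however far we search:
print(search_counterexamples(40))  # → []
```

Run with n = 2 instead and the same search immediately turns up Pythagorean triples such as (3, 4, 5), which is what makes the emptiness at n \ge 3 so striking.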


If it still could not be 'proved' today, shouldn't that test program be written after all?!

Hence one must ask: does so-called 'knowledge' really never rewrite a person's 'beliefs'?!

Can humankind truly not 'learn how to learn'?!

This is why machine learning cannot avoid confronting what is laid out on

Jürgen Schmidhuber's page on

LEARNING TO LEARN

METALEARNING MACHINES AND
RECURSIVE SELF-IMPROVEMENT

Most machine learning researchers focus on domain-specific learning algorithms. Can we also construct general purpose learning algorithms, in particular, meta-learning algorithms that can learn better learning algorithms? This question has been a main drive of Schmidhuber’s research since his diploma thesis on metalearning in 1987 [1], where he applied Genetic Programming (GP) to itself, to recursively evolve better GP methods.

Metalearning (or Meta-Learning) means learning the credit assignment method itself through self-modifying code. Metalearning may be the most ambitious but also the most rewarding goal of machine learning. There are few limits to what a good metalearner will learn. Where appropriate it will learn to learn by analogy, by chunking, by planning, by subgoal generation, by combinations thereof – you name it.

Schmidhuber’s recent Gödel machine is the first fully self-referential optimal universal metalearner, typically using the somewhat 'less universal' Optimal Ordered Problem Solver for finding provably optimal self-improvements.


Indeed ☺

The key lies in mastering 'rewriting' ☆