Rock It 《ML》: Causality Is Hard to Discern!!

Now that people already know, from the theory of linear time-invariant (LTI) systems, that 'causality' is anything but an empty notion!

※ Further reading:

Causality

Causality (also referred to as causation,[1] or cause and effect) is what connects one process (the cause) with another process or state (the effect),[citation needed] where the first is partly responsible for the second, and the second is partly dependent on the first. In general, a process has many causes,[2] which are said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Causality is metaphysically prior to notions of time and space.[3][4]

Causality is an abstraction that indicates how the world progresses,[citation needed] so basic a concept that it is more apt as an explanation of other concepts of progression than as something to be explained by others more basic. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it.[5] Accordingly, causality is implicit in the logic and structure of ordinary language.[6]

In the English language, as distinct from Aristotle’s own language, Aristotelian philosophy uses the word “cause” to mean “explanation” or “answer to a why question”, including Aristotle’s material, formal, efficient, and final “causes”; then the “cause” is the explanans for the explanandum. In this case, failure to recognize that different kinds of “cause” are being considered can lead to futile debate. Of Aristotle’s four explanatory modes, the one nearest to the concerns of the present article is the “efficient” one.

The topic of causality remains a staple in contemporary philosophy.

Theories

Counterfactual theories

Subjunctive conditionals are familiar from ordinary language. They are of the form, if A were the case, then B would be the case, or if A had been the case, then B would have been the case. Counterfactual conditionals are specifically subjunctive conditionals whose antecedents are in fact false, hence the name. However, the term as used technically may apply to conditionals with true antecedents as well.

Psychological research shows that people’s thoughts about the causal relationships between events influence their judgments of the plausibility of counterfactual alternatives, and conversely, their counterfactual thinking about how a situation could have turned out differently changes their judgments of the causal role of events and agents. Nonetheless, their identification of the cause of an event, and their counterfactual thought about how the event could have turned out differently, do not always coincide.[19] People distinguish between various sorts of causes, e.g., strong and weak causes.[20] Research in the psychology of reasoning shows that people make different sorts of inferences from different sorts of causes, as found in the fields of cognitive linguistics[21] and accident analysis[22][23] for example.

In the philosophical literature, the suggestion that causation is to be defined in terms of a counterfactual relation is made by the 18th-century Scottish philosopher David Hume. Hume remarks that we may define the relation of cause and effect such that “where, if the first object had not been, the second never had existed.”[24]

A more full-fledged analysis of causation in terms of counterfactual conditionals came only in the 20th century, after the development of possible-world semantics for the evaluation of counterfactual conditionals. In his 1973 paper “Causation,” David Lewis proposed the following definition of the notion of causal dependence:[25]

An event E causally depends on C if, and only if, (i) if C had occurred, then E would have occurred, and (ii) if C had not occurred, then E would not have occurred.

Causation is then defined as a chain of causal dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, … Dk, E such that each event in the sequence causally depends on the previous one.

Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper “Counterfactual Dependence and Time’s Arrow,” Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional.[26] If correct, this theory can serve to explain a fundamental part of our experience, which is that we can only causally affect the future but not the past.

Probabilistic causation

Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A (“The person is a smoker”) probabilistically causes B (“The person has now or will have cancer at some time in the future”), if the information that A occurred increases the likelihood of B’s occurrence. Formally, P{B|A} ≥ P{B} where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because of its being too general and thus not meeting our intuitive notion of cause and effect. For example, if A denotes the event “The person is a smoker,” B denotes the event “The person now has or will have cancer at some time in the future” and C denotes the event “The person now has or will have emphysema some time in the future,” then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions such as temporal relationship of A to B and a rational explanation as to the mechanism of action. It is hard to quantify this last requirement and thus different authors prefer somewhat different definitions.[citation needed]
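※ A minimal Monte-Carlo sketch in Python of the smoking example above; the individual probabilities are invented, and only the qualitative relations P{B|A} ≥ P{B}, P{C|A} ≥ P{C}, P{B|C} ≥ P{B} matter:

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

A = rng.random(n) < 0.30                     # A: the person is a smoker
B = rng.random(n) < np.where(A, 0.20, 0.05)  # B: cancer, more likely given A
C = rng.random(n) < np.where(A, 0.25, 0.04)  # C: emphysema, more likely given A

print(f"P(B)   = {B.mean():.3f}")    # ~0.095
print(f"P(B|A) = {B[A].mean():.3f}")  # ~0.200, raised by the cause A
print(f"P(B|C) = {B[C].mean():.3f}")  # ~0.159, raised although C does not cause B

C raises the probability of B purely because both share the common cause A, which is exactly the pattern the paragraph warns against.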

Causal calculus

When experimental interventions are infeasible or illegal, the derivation of cause-and-effect relationships from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads: “the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past”, while the latter reads: “the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past”. The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the Heisenberg uncertainty principle.[vague] In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention.

The theory of “causal calculus”[27] permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separate X from Y after removing all arrows emanating from X. This criterion, called “backdoor”, provides a mathematical definition of “confounding” and helps researchers identify accessible sets of variables worthy of measurement.
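※ A toy Python sketch of backdoor adjustment, assuming a single binary confounder Z that satisfies the criterion (Z is a non-descendant of X and blocks the back-door path X ← Z → Y); all the numbers are invented. It contrasts the confounded conditional contrast with the adjustment formula P(y|do(x)) = Σ_z P(y|x, z) P(z):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

Z = rng.random(n) < 0.5                    # confounder
X = rng.random(n) < np.where(Z, 0.8, 0.2)  # Z pushes units toward treatment X
Y = rng.random(n) < np.where(Z, 0.6, 0.1)  # Y depends on Z only: X truly has no effect

naive = Y[X].mean() - Y[~X].mean()         # confounded contrast, ~ +0.30

adjusted = sum(                            # backdoor formula, ~ 0.00
    (Y[X & (Z == z)].mean() - Y[~X & (Z == z)].mean()) * (Z == z).mean()
    for z in (True, False)
)

print(f"naive contrast    : {naive:+.3f}")
print(f"adjusted estimate : {adjusted:+.3f}")

Here X truly has no effect on Y; the adjusted estimate recovers that, while the naive contrast does not.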

………

 

And have also come to understand the phenomenon of Simpson's paradox:

Simpson’s paradox

Figure: Simpson’s paradox for quantitative data: a positive trend appears in each of two separate groups, whereas a negative trend appears when the groups are combined.

Simpson’s paradox (or Simpson’s reversal, Yule–Simpson effect, amalgamation paradox, or reversal paradox[1]) is a phenomenon in probability and statistics, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined.

This result is often encountered in social-science and medical-science statistics[2][3][4] and is particularly problematic when frequency data is unduly given causal interpretations.[5] The paradoxical elements disappear when causal relations are brought into consideration.[6] It has been used to try to inform the non-specialist or public audience about the kind of misleading results mis-applied statistics can generate.[7][8] Martin Gardner wrote a popular account of Simpson’s paradox in his March 1976 Mathematical Games column in Scientific American.[9]

Edward H. Simpson first described this phenomenon in a technical paper in 1951,[10] but the statisticians Karl Pearson et al., in 1899,[11] and Udny Yule, in 1903,[12] had mentioned similar effects earlier. The name Simpson’s paradox was introduced by Colin R. Blyth in 1972.[13]
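※ The reversal is easy to reproduce with invented counts: treatment A beats B inside each severity group yet loses overall, because A was given mostly to the hard cases. A minimal Python sketch:

cases = {
    # (treatment, severity): (successes, patients)
    ("A", "easy"): (9, 10),   ("B", "easy"): (80, 100),
    ("A", "hard"): (30, 100), ("B", "hard"): (2, 10),
}

for sev in ("easy", "hard"):
    for t in ("A", "B"):
        s, n = cases[(t, sev)]
        print(f"{sev:4s} {t}: {s:3d}/{n:3d} = {s / n:.0%}")  # A wins in both groups

for t in ("A", "B"):
    s = sum(cases[(t, sev)][0] for sev in ("easy", "hard"))
    n = sum(cases[(t, sev)][1] for sev in ("easy", "hard"))
    print(f"overall  {t}: {s:3d}/{n:3d} = {s / n:.0%}")      # yet B wins overall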

………

 

How could one not want to know whether the 'machine mind' can surpass the human one ☺

What are the limits of machine learning, and is there a possibility to make robots learn languages?

Sridhar Mahadevan, Fellow of AAAI

 

There are plenty of well known limitations of machine learning. These shortcomings are usually associated with specific formal ways of defining ML paradigms.

  1. As the original question mentioned language, let’s begin with a classical result from Gold: the set of context-free languages is not learnable from positive examples. Let’s unpack this theorem and explain its deep impact on linguists like Noam Chomsky. First, the concept of learning here is formalized as “identification in the limit with zero error”. So, imagine that I as a teacher choose a particular context-free language in my head, and you as the learner have to guess what the language is (say, by inferring the context-free grammar that generates it). You can ask me for as many strings as you like that are part of the specific CFL (viewed as a set). Identification in the limit would require that, after some finite number of examples, your guesses settle on the correct language; Gold showed that no learner can guarantee this for an arbitrary CFL from a potentially infinite series of positive examples, no matter how much computational power or time it spends. This result, proved by Gold in 1967 (“Language identification in the limit”, Information and Control, vol. 10, pp. 447–474), was as stunning in its impact on machine learning as Gödel’s incompleteness theorem was on computation and logic. It was well known that children by and large get only positive examples of their native language (English, Chinese, Hindi, Hebrew etc.). So, linguists realized that since children learn language by age 3 or 4, there must be severe innate constraints on the space of learnable languages. There has been a five-decade-long search for this so-called “universal grammar”. It is still under active research.

……

5.  The last and most recent breakthrough is the work on causal models, principally the work by Pearl in 2000. Here was another blow to the power of statistical learning. What Pearl showed convincingly in his book and many papers is that statistical learning is fundamentally limited: it cannot discover causal structures. The simple fact that diseases cause symptoms, not the other way around, or that lightning causes thunder, cannot be discovered by any statistical learner, no matter how many layers are present in a deep-learning neural network. The fundamental problem is representational: one cannot express causality using probability theory alone. You need machinery beyond probability to discover causality (e.g., Pearl’s do-calculus, Rubin and Neyman’s potential outcomes, or Fisher’s randomization protocols, all of which go beyond traditional statistics).

………


Rock It 《ML》: Preserving Bias??

Li Bai

A Quiet Night Thought

In front of my bed, there is bright moonlight.
It appears to be frost on the ground.
I lift my head and gaze at the August Moon,
I lower my head and think of my hometown.

 

Contemplation

Moon twilight approaches, coating the ground through the window,
Resembles a touch of frost,
Moon at the window,
Taking me back to where I am from.

李白

静夜思

床前明月光
疑是地上霜
舉頭望明月
低頭思故鄉

If Li Bai's 'A Quiet Night Thought' is translated into English, does reading the Chinese and English side by side give a better 'understanding' of the original's 'mood'? Or does it lose a little of the 『』 'flavor'?? Perhaps the 'trade-off' falls here:

A 'culture's' 'blind spots' often show themselves in the 'neglect' of 'meaning'.

The 'biases' of the 'humanities' commonly hide inside the 'sentiments' of 'words'.

Hence multiple 'language texts' of the same 'content' may let one see what 'usually goes unseen'.

─── from 《邂逅 W!o ?!》

 

Aurélien Géron seems to note, almost in passing, that training data lacking 'representativeness' easily produces generalization errors in prediction:

 

Is the real emphasis, then, that avoiding 'sampling bias' is the 'key to training'?

Just as the Wikipedia 'Machine learning' entry says:

Limitations

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[59][60][61] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[62]

In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[63] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of effort and billions of dollars of investment.[64][65]

Bias

Machine learning approaches in particular can suffer from different data biases. In healthcare data, measurement errors can often result in bias of machine learning applications.[66] A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[67] Language models learned from data have been shown to contain human-like biases.[68][69] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[70][71] In 2015, Google Photos would often tag black people as gorillas,[72] and in 2018 this still was not well resolved; Google reportedly was still using the workaround of removing all gorillas from the training data, and thus was not able to recognize real gorillas at all.[73] Similar issues with recognizing non-white people have been found in many other systems.[74] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[75] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[76]

─── Wikipedia Machine learning

 

People ought to take this problem seriously!

If an artificial-intelligence algorithm's capacity to learn were as good as a human's, or even better, then, facing the data of the starry sky from remotest antiquity to today, would it settle on

the 'geocentric model':

Ptolemy summarized the achievements of Greek astronomy in the thirteen-volume Almagest. In it he fixed the length of the year, compiled a star catalogue, explained the corrections required by precession and refraction, and gave methods for computing solar and lunar eclipses. Drawing on the large body of observations and research of the Greek astronomers, above all Hipparchus, he gave a systematic exposition of the geocentric doctrine that explains celestial motions with deferents and epicycles; later generations attached his name to this geocentric system, calling it the Ptolemaic system.

The thirteen-volume Almagest was the encyclopedia of astronomy of its day; until Kepler's time it remained required reading for astronomers. The eight-volume Geography was the companion text to his map of the world, and it also discusses astronomical principles. He further wrote the five-volume Optics: the first volume treats the relation between the eye and light; the second, the conditions of visibility and binocular effects; the third, reflection in plane and curved mirrors and the apparent size of the sun at noon versus morning and evening; the fifth attempts to find the law of refraction, describes his experiments, and discusses atmospheric refraction. In addition, there are works on chronology and astrology.

Figure: diagram of the cosmos in the Ptolemaic system

─── Wikipedia: Claudius Ptolemy

 

or on

the 'heliocentric model'?

Kepler's laws are the laws of planetary motion discovered by Kepler. He published the first two laws in 1609 in his book Astronomia Nova (New Astronomy), and discovered the third law in 1618.

Kepler was lucky to obtain the extremely precise astronomical data observed and collected by the famous Danish astronomer Tycho Brahe. Around 1605, from Brahe's planetary-position data, Kepler found that planetary motion obeys three rather simple laws. He finished writing up the results by the end of that year, but publication in Astronomia Nova was delayed until 1609: Brahe's observations belonged to his heirs and could not be used freely, and the resulting legal disputes caused the delay.

In astronomy and physics, Kepler's laws posed a great challenge to the Aristotelian and Ptolemaic schools. He maintained that the Earth moves continuously; that planetary orbits are not circles with epicycles but ellipses; and that planets do not revolve at uniform speed. These claims deeply shook the astronomy and physics of his day. It took almost a century of painstaking, tireless research before physicists could explain the mystery with physical theory: Isaac Newton, applying his second law of motion and the law of universal gravitation, proved Kepler's laws with mathematical rigor and revealed their physical meaning.

Figure: two planetary orbits obeying Kepler's laws of planetary motion. (1) The orbits are ellipses: the first planet's has foci f1 and f2, the second's has foci f1 and f3; the Sun sits at f1. (2) A1 and A2 are two shaded regions of equal area; the line joining the Sun to the first planet sweeps over them in equal times. (3) The planets' orbital periods around the Sun stand in the ratio a_{1}^{3/2} : a_{2}^{3/2}, where a1 and a2 are the semi-major axes of the first and second planet.

Kepler's laws

Kepler's three laws of planetary motion changed the whole of astronomy, utterly demolished Ptolemy's complicated cosmic system, and both completed and simplified Copernicus's heliocentrism.

Kepler's first law

Kepler's first law, also called the law of ellipses or law of orbits: every planet moves along its own elliptical orbit around the Sun, with the Sun at one focus of the ellipse.[1]

Kepler's second law

Kepler's second law, also called the law of equal areas: in equal intervals of time, the line joining the Sun to a moving planet sweeps out equal areas.[1]

This law actually expresses the conservation of the angular momentum of a planet orbiting the Sun. As a formula:

S_{AB}=S_{CD}=S_{EK}

Kepler's third law

Kepler's third law, also called the law of periods: the square of each planet's period of revolution around the Sun is proportional to the cube of the semi-major axis of its elliptical orbit.[1]

From this law it is easy to derive that the gravitational force between a planet and the Sun is inversely proportional to the square of the radius, an important foundation of Isaac Newton's law of universal gravitation.

As a formula:

{\frac {\tau ^{2}}{a^{3}}}=K

Here, a is the semi-major axis of the planet's orbit, \tau is its period of revolution, and K is a constant.

─── Wikipedia: Kepler's laws of planetary motion
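※ The third law invites a one-screen numerical check in Python. With the semi-major axis a in astronomical units and the period τ in years, K should come out very nearly 1 for every planet; the orbital elements below are standard approximate values:

planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

# (a, tau) = (semi-major axis in AU, orbital period in years)
for name, (a, tau) in planets.items():
    print(f"{name:8s} tau^2 / a^3 = {tau ** 2 / a ** 3:.4f}")  # all ~1.0000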

 

At this point, if one asks why Kepler himself did not discover the law of 'universal gravitation'??

Probably because Newton's 'three laws of motion' did not yet exist!!

From then on, people believed that

the principles that hold on Earth hold in the heavens as well ◎

But, turning it around,

do we really know ☁

 

Science was once subordinate to philosophy; today some philosophers try to adopt the methods of scientific research for philosophical inquiry, and so-called 'experimental philosophy' has arisen on that tide; some say it is abbreviated 'x-phi'. In 1980 the American analytic philosopher Saul Aaron Kripke published a book titled Naming and Necessity, in which he made up a story about 'Gödel and Schmidt':

Suppose 'Gödel' was not in fact the discoverer of Gödel's theorem; rather, a man named 'Schmidt' discovered it. For some reason Gödel got hold of his friend Schmidt's manuscript in some obscure way, and people thereafter attributed the discovery to Gödel.

Now, on Russell's theory of descriptions in analytic philosophy, when people use the name 'Gödel' they in fact ought to denote 'Schmidt', the man who discovered the incompleteness theorem of arithmetic systems. Yet philosophers in Europe and America broadly share the intuition that nearly every reader of the story will agree the name 'Gödel' does not in fact denote 'Schmidt', and that any theory of reference which says it does must in the end be judged wrong. The experimental philosophers E. Machery, R. Mallon, S. Nichols, and S. Stich therefore ran an experiment on this question: they presented the fictional Gödel-and-Schmidt story to all their subjects, who fell into two groups, American students and Hong Kong students.

The result: the great majority of the American students endorsed the philosophers' 'intuition' above, while the Hong Kong students responded differently, most of them holding that the name 'Gödel' does indeed refer to 'Schmidt'.

Then again, perhaps an 'intuition' that 'cultural differences produce different judgments' would naturally have predicted this 'result'!!

─── excerpted from 《思想實驗!!》


Rock It 《ML》: What a Challenge!

Having savored the whole chapter Aurélien Géron wrote, the eye rests on this heading:

─── Hands-on Machine Learning with Scikit-Learn and TensorFlow  Ch.1  

A real challenge indeed! After all, even telling 'good' stuff from 'bad' stuff is not easy!!

For example, a triangle with side lengths a, b, c is a right triangle if it satisfies c^2 = a^2 + b^2.

So, given some

Pythagorean triples

A Pythagorean triple (畢氏三元數, also called 商高數 or 勾股數) is a group of three positive integers: a positive-integer solution (a, b, c) of the Pythagorean relation a^{2}+b^{2}=c^{2}. Moreover, by the converse of the Pythagorean theorem, any triangle whose side lengths form a Pythagorean triple is a right triangle.

If (a, b, c) is a Pythagorean triple, then so is every positive-integer multiple of it: (na, nb, nc) is also a Pythagorean triple. If a, b, and c are coprime (their greatest common divisor is 1), the triple is called a primitive Pythagorean triple.

 

The dataset (a, b, c)

The primitive Pythagorean triples below 100:

 a   b   c
 3   4   5
 5  12  13
 7  24  25
 8  15  17
 9  40  41
11  60  61
12  35  37
13  84  85
16  63  65
20  21  29
28  45  53
33  56  65
36  77  85
39  80  89
48  55  73
65  72  97

 

Can we train a

Perceptron

Definition

A perceptron is a feedforward neural network whose input is represented by a feature vector; it is a binary classifier that maps an input x (a real-valued vector) to an output value f(x) (a single binary value).

f(x)={\begin{cases}1&{\text{if }}w\cdot x+b>0\\0&{\text{else}}\end{cases}}

Here w is a real-valued vector of weights, w · x is the dot product, and b is the bias, a constant that does not depend on any input value. The bias can be seen as an offset of the activation function, or as giving the neuron a base level of activity.

The value f(x) (0 or 1) classifies x as a positive or a negative instance, a binary classification problem. If b is negative, the weighted input must produce a positive value greater than -b in order to push the classifying neuron over the threshold 0. Spatially, the bias shifts the position (though not the orientation) of the decision boundary.

Because the inputs are transformed directly into the output through the weights, the perceptron can be viewed as the simplest form of feedforward artificial neural network.

 

to answer 'yes' or 'no': is it a Pythagorean triple?

And how should we talk about the quality of this dataset, good or bad??

Even though we can generate infinitely many Pythagorean triples

Finding Pythagorean triples

The following method can be used to find Pythagorean triples. Let m > n be positive integers, and set

a=m^{2}-n^{2}
b=2mn
c=m^{2}+n^{2}

\displaystyle m \displaystyle n 互質,而且 \displaystyle m \displaystyle n 為一奇一偶,計算出來的 \displaystyle (a,b,c) 就是素畢氏三元數。(若 \displaystyle m \displaystyle n 都是奇數\displaystyle (a,b,c)  就會全是偶數,不符合互質。)

Every primitive Pythagorean triple arises from the formulas above, from which it also follows that there exist infinitely many primitive Pythagorean triples.
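※ That (m, n) recipe turns directly into a small Python generator; sorting each leg pair reproduces the sixteen primitive triples tabulated above:

from math import gcd

def primitive_triples(limit):
    """Yield primitive (a, b, c) with c < limit via a=m^2-n^2, b=2mn, c=m^2+n^2."""
    m = 2
    while m * m + 1 < limit:          # smallest c for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:  # opposite parity, coprime
                c = m * m + n * n
                if c < limit:
                    a, b = sorted((m * m - n * n, 2 * m * n))
                    yield (a, b, c)
        m += 1

for triple in sorted(primitive_triples(100)):
    print(triple)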

 

That does help with training!

Reflect on how (m, n) is distributed over the first quadrant, and on what \alpha \cdot a + \beta \cdot b + \gamma \cdot c + \delta > 0 could possibly mean (see the sketch below)!!
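※ Here is a sketch of that experiment with scikit-learn's Perceptron on raw (a, b, c) features; the dataset scheme (multiples of a few primitive triples as positives, random integer triples as negatives) is invented for illustration. Since membership hinges on the quadratic condition c^2 = a^2 + b^2, no linear rule α·a + β·b + γ·c + δ > 0 can separate the classes, and training accuracy should stay well below 1.0:

import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(1)

def is_triple(a, b, c):
    return a * a + b * b == c * c

# Positives: multiples of a few primitive triples; negatives: random non-triples.
pos = [(k * a, k * b, k * c)
       for (a, b, c) in [(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25)]
       for k in range(1, 26)]
neg = []
while len(neg) < len(pos):
    a, b, c = rng.integers(1, 130, size=3)
    if not is_triple(a, b, c):
        neg.append((a, b, c))

X = np.array(pos + neg, dtype=float)
y = np.array([1] * len(pos) + [0] * len(neg))

clf = Perceptron(max_iter=1000, tol=1e-3).fit(X, y)
print("training accuracy:", clf.score(X, y))  # stuck well below 1.0

A hand-crafted feature such as |a^2 + b^2 - c^2|, thresholded at zero, would separate the classes exactly, which is one way of reading what that linear expression could (or could not) mean.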

If someone had studied 'non-Hookean' materials before Hooke, would they still have discovered

Hooke's law

Hooke's law (虎克定律/胡克定律) is a basic law in the theory of elasticity in mechanics. It says: when a solid material is loaded, the stress and the strain (deformation per unit length) are in a linear relationship; a material satisfying this law is called linear-elastic, or Hookean.

From the physical point of view, Hooke's law derives from the fact that, in most solids (and isolated molecules), the interior atoms sit in a state of stable equilibrium when no external load acts.

Many real materials, such as a prismatic bar of length L and cross-sectional area A, can be modeled mechanically by Hooke's law: the elongation (or contraction) per unit length \varepsilon (the strain) is proportional, through a constant coefficient E (called the elastic modulus), to the tensile (or compressive) stress \sigma, i.e.:

\sigma =E\varepsilon

\Delta L={\frac {1}{E}}\times L\times {\frac {F}{A}}={\frac {1}{E}}\times L\times \sigma

Here \Delta L is the total elongation (or contraction). Hooke's law is named after the 17th-century English physicist Robert Hooke. The way he announced it is rather entertaining: in 1676 he published a Latin anagram whose puzzle read ceiiinosssttuv; two years later he revealed the solution, ut tensio sic vis, 'as the extension, so the force' (see reference [1]), which is precisely the heart of Hooke's law.

Hooke's law applies only to certain materials under certain loading conditions. Steel may be treated as linear-elastic in most engineering applications, Hooke's law holding throughout its elastic range (that is, for stresses below the yield strength). Some other materials (such as aluminium) obey Hooke's law only over part of the elastic range; for these one defines a proportional limit of stress, below which the errors of the linear description are negligible.

Still other materials satisfy Hooke's law under no circumstances (rubber, for example); these are called 'non-Hookean' (neo-hookean) materials. The stiffness of rubber is not only stress-dependent but also very sensitive to temperature and loading rate.

Hooke's law is widely applied in scale manufacture, stress analysis, and the modeling of materials.

 

??!!
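※ A minimal numeric sketch of the prismatic-bar formula above, ΔL = (1/E) × L × (F/A), for a steel rod in tension; E = 200 GPa is a typical textbook value for steel, the load and geometry are invented, and the resulting 100 MPa stress stays below a typical yield strength, so the linear law applies:

E = 200e9          # elastic modulus of steel, Pa (typical textbook value)
L = 2.0            # rod length, m
A = 1e-4           # cross-sectional area, m^2 (1 cm^2)
F = 10_000.0       # axial force, N

sigma = F / A                  # stress, Pa
epsilon = sigma / E            # strain  (Hooke's law: sigma = E * epsilon)
delta_L = epsilon * L          # total elongation, m

print(f"stress  sigma   = {sigma:.3e} Pa")
print(f"strain  epsilon = {epsilon:.3e}")
print(f"delta_L         = {delta_L * 1000:.3f} mm")  # 1.000 mm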


Rock It 《ML》: What Is It?

What is 'machine learning'? Aurélien Géron puts it thus:

─── Hands-on Machine Learning with Scikit-Learn and TensorFlow  Ch.1  

 

Brief and to the point! But without reading the whole chapter through, back and forth, several times, one could hardly grasp what it means?

Checking the Wikipedia entry, it says:

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms build a mathematical model of sample data, known as “training data“, in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in the applications of email filtering, detection of network intruders, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics.

─── Wikipedia Machine learning

 

Perhaps that brings a little more understanding!

Consult further Adam Geitgey's explanation:

What is machine learning?

Machine learning is the idea that there are generic algorithms that can tell you something interesting about a set of data without you having to write any custom code specific to the problem. Instead of writing code, you feed data to the generic algorithm and it builds its own logic based on the data.

For example, one kind of algorithm is a classification algorithm. It can put data into different groups. The same classification algorithm used to recognize handwritten numbers could also be used to classify emails into spam and not-spam without changing a line of code. It’s the same algorithm but it’s fed different training data so it comes up with different classification logic. 

This machine learning algorithm is a black box that can be re-used for lots of different classification problems.

“Machine learning” is an umbrella term covering lots of these kinds of generic algorithms.

─── Machine Learning is Fun! Part 1, Adam Geitgey
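※ Geitgey's point, the same algorithm fed different training data, can be sketched in a few lines; the two datasets here are merely convenient scikit-learn built-ins (handwritten digits and a tabular medical set), my choice rather than his:

from sklearn.datasets import load_digits, load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

for load in (load_digits, load_breast_cancer):
    X, y = load(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)  # same line both times
    print(f"{load.__name__:18s} test accuracy: {clf.score(X_te, y_te):.3f}")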

 

One can probably now sense what 'machine learning' is!?

Could it be that these various texts have by now 'trained' our brains to tell what 《ML》 is, and what it is not?!

And that ubiquitous 'spam email' is the classic worked example of machine learning ☆

If, on a solid network of 'terms and concepts', one can frame the scope of a discipline:

 

then perhaps the day of 'talking the trade' is not far off ☺

Those who have not yet packed their bags, please hurry. The ship is about to sail.

sudo pip3 install scikit-learn

sudo apt-get install jupyter-notebook

git clone https://github.com/ageron/handson-ml

 

※ Note: a sneak preview

 

scikit-learn

Machine Learning in Python

  • Simple and efficient tools for data mining and data analysis
  • Accessible to everybody, and reusable in various contexts
  • Built on NumPy, SciPy, and matplotlib
  • Open source, commercially usable – BSD license

User Guide .pdf


Rock It 《ML》: Rambling On

Now that a bare-bones Python3 neural-network environment has been set up on the ROCK64, how should one talk about 'ML', machine learning? I thought: I have read a few hefty tomes, and they were mostly mathematics! With artificial intelligence about to flourish, why not write some popular-science ramblings? One cannot raise a tower from flat ground overnight! Suddenly that bookshelf of free books came to mind

/awesome-machine-learning

The following is a list of free, open source books on machine learning, statistics, data-mining, etc.

 

I remembered that it holds a

book that makes deep things accessible, complete with notebooks and code:

/handson-ml

A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in python using Scikit-Learn and TensorFlow.

Machine Learning Notebooks

This project aims at teaching you the fundamentals of Machine Learning in python. It contains the example code and solutions to the exercises in my O’Reilly book Hands-on Machine Learning with Scikit-Learn and TensorFlow:


 

Simply open the Jupyter notebooks you are interested in:

  • Using jupyter.org’s notebook viewer
  • or by cloning this repository and running Jupyter locally. This option lets you play around with the code. In this case, follow the installation instructions below.

 

It will serve nicely as a base text; let's write away, heh heh, for the sheer fun of it ☺