
Education and Learning: Up《grade》 [6.1]

Who, then, is the leading figure behind Keras?

Keras

Keras is an open source neural network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or MXNet.[1] Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System),[2] and its primary author and maintainer is François Chollet, a Google engineer.

In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than a standalone machine-learning framework. It offers a higher-level, more intuitive set of abstractions that make it easy to develop deep learning models regardless of the computational backend used.[3] Microsoft added a CNTK backend to Keras as well, available as of CNTK v2.0.[4][5]
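
To make that "higher-level, more intuitive set of abstractions" concrete, here is a minimal sketch of the Keras Sequential API (the toy data and hyperparameters are invented for illustration; the same code runs unchanged on any supported backend):

```python
# Minimal Keras sketch: a two-layer classifier on made-up data.
# Assumes Keras is installed with a TensorFlow, Theano, or CNTK backend.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy binary-classification data: 100 samples, 20 features (illustrative only).
x_train = np.random.random((100, 20))
y_train = np.random.randint(2, size=(100, 1))

# The model is defined purely through Keras abstractions;
# the computational backend underneath is interchangeable.
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32)
```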

 

Why write 《Deep Learning with Python》!

Hugo Bowne-Anderson
December 18th, 2017
DEEP LEARNING

An Interview with François Chollet

DataCamp’s Hugo Bowne-Anderson interviewed Keras creator and Google AI researcher François Chollet about his new book, “Deep Learning with Python”.

 

François Chollet is an AI & deep learning researcher, author of Keras, a leading deep learning framework for Python, and has a new book out, Deep Learning with Python. To coincide with the release of this book, I had the pleasure of interviewing François via e-mail. Feel free to reach out to us at @fchollet and @hugobowne.

 

Before entering the "hall of AI", one first passes through the baptism of the "intelligence explosion" debate:

Medium

The impossibility of intelligence explosion

Transcendence (2014 science-fiction movie)

In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment — as seen in the science-fiction movie Transcendence (2014), for instance. Superintelligence would thus imply near-omnipotence, and would pose an existential threat to humanity.

This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems. I attempt to base my points on concrete observations about intelligent systems and recursive systems.

………

 

A reply to Francois Chollet on intelligence explosion


This is a reply to François Chollet, the inventor of the Keras wrapper for the TensorFlow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted:

If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?

And he earlier tweeted:

Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.

Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.

My reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion.

………

 

It gives one pause: when "necessity meets chance", who can say how things will turn out ★

 

[Images: the characters 改 (change) and 變 (transform) in bronze-inscription large seal script]

The purpose of education:

to "correct" (改) the fallacies of thinking, and to "transform" (變) the biases of habit.

[Images: Monty Hall, the "Let's Make a Deal" title card, and the show's "zonk" goat prize]

In 1963, the American network NBC first aired "Let's Make a Deal", a deal-making game show presented by Jay Stewart and Monty Hall. It has many versions; a typical game runs as follows:

The host shows the contestant three doors. Behind one door is the grand prize; behind the other two are consolation prizes. The host, of course, knows in advance what lies behind each door. The game proceeds in three stages:
1. The contestant first chooses a door.
2. The host opens one of the two unchosen doors that hides a consolation prize.
3. The host then asks the contestant: stick with the "originally chosen door", or switch to the "other unopened door"? That puzzling, wearying "zonk" moment has arrived!
So in the end, which is better:
"to switch, or not to switch"??

A "rational" thinker might argue as follows:

At the outset, each door has a \frac{1}{3} chance of hiding the grand prize, so the chance of having "picked right" is \frac{1}{3} and the chance of having "picked wrong" is \frac{2}{3}. Now that the host has opened a door without the grand prize, this "information" concentrates the unchosen \frac{2}{3} onto the "one remaining unopened door". So of course "switching is better"!

Another, "intuitive" believer in luck might feel instead:

If I am "going to win", my first pick already wins; if I am "not going to win", switching "won't help either". So better to keep the original choice and "not switch"!
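
Rather than settle the quarrel by argument alone, the two strategies can be compared empirically. Here is a short Monte Carlo sketch (my own illustration, not part of the original excerpt; the function name and trial count are arbitrary):

```python
# Monte Carlo comparison of the "stay" and "switch" strategies.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # door hiding the grand prize
        choice = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the prize.
        # (Which of the two goat doors he opens does not affect the win rates.)
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Move to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stay  :", play(switch=False))   # tends toward about 1/3
print("switch:", play(switch=True))    # tends toward about 2/3
```

Over many trials the frequencies approach \frac{1}{3} for staying and \frac{2}{3} for switching, exactly as the "rational" argument above claims.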

In phenomena of "chance", people extract some "law-of-large-numbers probability" from many repetitions. When that probability is applied to "this one" choice, can the fact of what "actually happens" be "adjudicated" at all? And if it can, why would people also believe that "Murphy's Law" is true??

………

In 1990, Professor Arnold Zuboff of the University of London published a paper written in 1986, "One Self: The Logic of Experience", which posed the "Sleeping Beauty problem".

[Images: two Sleeping Beauty paintings (Brewtnall's "Sleeping Beauty" and a "Dornröschen" illustration)]

Sleeping Beauty, informed of all the details, volunteers for the following experiment:

On Sunday she is put to sleep. During the experiment she will be awakened once or twice and then given an amnesia drug, so she will not remember having been awakened. A fair coin is tossed to decide which procedure the experiment follows:

If the coin comes up "heads", she is awakened and interviewed on "Monday" only.
If the coin comes up "tails", she is awakened and interviewed on both "Monday" and "Tuesday".

In either case, she is finally awakened on "Wednesday" and the experiment ends without an interview. Each time she is awakened and interviewed, she is asked: what is your present "degree of belief" that "the coin came up heads"?

The question is disputed to this day: "Thirders" answer \frac{1}{3}, "Halfers" answer \frac{1}{2}. Can Sleeping Beauty really have a "correct answer"? A single toss of a two-outcome coin leads to interviews on one day or two; how should "prior" versus "posterior" talk of "probability" be thought through here? Ordinary probability theory measures by the "relative frequency of occurrence" over the "set of possible outcomes" — the sample space; when that cannot be measured, one may assume by "indifference", that is "indistinguishability", that the relative frequencies are all "the same". The "sample space" and the "measure assumption" are thus the root of the dispute. Suppose we take the coin-outcome set {heads, tails} and the interview-day set {Monday, Tuesday}, and look at the event probabilities in this problem from the fair coin's point of view:

P(heads, Monday) = \frac{1}{2}
P(heads, Tuesday) = 0
P(tails, Monday) = \frac{1}{4}
P(tails, Tuesday) = \frac{1}{4}

This "P(heads, Tuesday) = 0" is the main flashpoint of the dispute, because it is an "impossible" event. From the standpoint of sampling empirical events, perhaps it should simply be dropped from the "sample space" altogether; but then why shouldn't such an "observer" assume that all the "events that can occur" have the same probability, \frac{1}{3}??
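
To see where the two camps' numbers come from, here is a small simulation sketch (my own illustration, not part of the original excerpt) that tallies both candidate measures: frequency of heads per coin toss, and frequency of heads per awakening:

```python
# Sleeping Beauty protocol: tally heads per toss and heads per awakening.
import random

trials = 100_000
heads_tosses = 0       # tosses that came up heads
heads_awakenings = 0   # awakenings that occur under a heads toss
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2   # heads: Monday only; tails: Mon + Tue
    heads_tosses += heads
    heads_awakenings += 1 if heads else 0
    total_awakenings += awakenings

print("P(heads) per toss     :", heads_tosses / trials)                # about 1/2
print("P(heads) per awakening:", heads_awakenings / total_awakenings)  # about 1/3
```

The "Halfer" measures over tosses, the "Thirder" over awakenings; the simulation merely makes the two competing sample spaces explicit.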

─── Excerpted from 《改不改??變不變!!》 (To Change or Not?? To Vary or Not!!)

