March 2018 | FreeSandal | Page 6

Education and Learning: Up《grade》【6.4】

Is it really true that "you are what you eat"?

Is "technology indistinguishable from magic" after all?

Could it be that "truth is not self-evident"?

And if it does not come to us naturally, yet we keep running into it unexpectedly, how are we to cultivate it?

The so-called

Mel-frequency cepstral coefficients

In sound processing, the mel-frequency cepstrum (Mel-Frequency Cepstrum) is a representation obtained by a linear transform of the log energy spectrum taken on the nonlinear mel scale of sound frequency.

Mel-frequency cepstral coefficients (MFCCs) are the coefficients that together make up a mel-frequency cepstrum. They are derived from the cepstrum of an audio clip. The difference between the cepstrum and the mel-frequency cepstrum is that in the latter the frequency bands are equally spaced on the mel scale, which approximates the response of the human auditory system more closely than the linearly spaced bands of the ordinary log cepstrum. This nonlinear representation can give sound signals a better representation in several domains, for example in audio compression.

Mel-frequency cepstral coefficients (MFCCs) are widely used as features for speech recognition. They were introduced by Davis and Mermelstein in the 1980s and have remained among the state-of-the-art features ever since. Before MFCCs, linear prediction coefficients (LPCs) and linear prediction cepstral coefficients (LPCCs) were the dominant features for automatic speech recognition.
MFCCs are commonly computed by the following procedure (a code sketch follows the list):[1][2]

  1. Split the speech signal into frames.
  2. Pre-emphasize the signal by passing it through a high-pass filter.
  3. Take the Fourier transform to move each frame into the frequency domain.
  4. Pass the spectrum of each frame through a mel filter bank (triangular overlapping windows) to obtain mel-scale band energies.
  5. Take the logarithm of the energy in each mel band.
  6. Take the inverse discrete Fourier transform of the result (in practice a discrete cosine transform), moving into the cepstral domain.
  7. The MFCCs are the amplitudes of this resulting cepstrum. Typically 12 coefficients are used; together with the frame energy they give a 13-dimensional feature vector.
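The list above maps almost line-for-line onto code. Below is a minimal sketch of the procedure, assuming librosa, NumPy, and SciPy are available and that a 16 kHz mono file named speech.wav exists; the frame length, hop, FFT size, and filter count are illustrative choices, not values prescribed by the excerpt.

```python
# Minimal MFCC sketch following steps 1-7 above (parameters are illustrative).
# The mel scale used by the filter bank is mel(f) = 2595 * log10(1 + f / 700).
import numpy as np
import librosa
from scipy.fftpack import dct

y, sr = librosa.load("speech.wav", sr=16000)                # hypothetical input clip

# Reference one-liner: librosa implements the same pipeline internally.
mfcc_ref = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Manual version of the steps:
pre = np.append(y[0], y[1:] - 0.97 * y[:-1])                # step 2: pre-emphasis (high-pass)
frame_len, hop, n_fft = 400, 160, 512                       # step 1: 25 ms frames, 10 ms hop
frames = librosa.util.frame(pre, frame_length=frame_len, hop_length=hop).T
frames = frames * np.hamming(frame_len)                     # window each frame
power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2           # step 3: FFT -> power spectrum
mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=26) # step 4: triangular mel filters
log_mel = np.log(power @ mel_fb.T + 1e-10)                  # step 5: log energy per mel band
cepstrum = dct(log_mel, type=2, axis=1, norm="ortho")       # step 6: DCT to cepstral domain
energy = np.log(power.sum(axis=1) + 1e-10)                  # per-frame log energy
mfcc = np.hstack([energy[:, None], cepstrum[:, 1:13]])      # step 7: 12 coefficients + energy
print(mfcc.shape)                                           # (n_frames, 13)
```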

……

 

If this were merely a neat way of quantizing "sound" into fixed-length stand-ins for "the signified", it would hardly go far.

Genre Recognition

Yet it carries over readily to neighboring problems〒

So let us borrow "other people's words" and work through the experience with "our own thinking"◎

DeadSimpleSpeechRecognizer

CNN-based minimal model for recognizing words

…… from

 

So you’ve classified the MNIST dataset using deep learning libraries and want to do the same with speech recognition! Well, continuous speech recognition is a bit tricky, so to keep everything simple I am going to start with a simpler problem instead: word recognition. I’ve seen a competition going on at Kaggle and couldn’t help but download the dataset.

If you think this blog post will make you an expert in the speech recognition field, please feel free to skip it. I am going to show you some quick techniques for getting up and running in the speech recognition area rather than going deeper.
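The quoted post goes on to train a small convolutional network on features extracted from one-second word clips. Its code is not reproduced here, but a model of that general shape, written against Keras 2, might look like the following sketch; the input shape (13 MFCC coefficients by 32 frames) and the ten-word label set are assumptions for illustration, not the post's exact values.

```python
# Minimal word-recognition CNN sketch (Keras 2), assuming MFCC inputs of shape
# (13 coefficients x 32 frames x 1 channel) and 10 candidate words.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

num_words = 10              # hypothetical label count
input_shape = (13, 32, 1)   # hypothetical feature-map shape

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=input_shape),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(num_words, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)  # x_train: (N, 13, 32, 1)
```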

………


Education and Learning: Up《grade》【6.3】

Why did Confucius, in studying the qin, devote himself to a single piece??

Confucius studied the qin under Shi Xiangzi. Xiangzi said: "Though my office is to play the chime-stones, I am also able on the qin. You have now practiced this piece; you may move on to another." Confucius said: "I have not yet grasped its measure." After some time Xiangzi said: "You have practiced its measure; you may move on." Confucius said: "I have not yet grasped its intent." After some time: "You have practiced its intent; you may move on." Confucius said: "I have not yet grasped the man behind it." After some time Confucius fell into deep thought, then looked up, gazing high and far into the distance, and said: "Now I begin to grasp the man. Dark, almost black; tall of stature; with a far-reaching gaze, as though he held the four quarters in his keeping. Who but King Wen could it be?" Shi Xiangzi rose from his mat, bowed with clasped hands, and answered: "A gentleman, a sage indeed! The tradition calls this piece the Wen Wang Cao (Melody of King Wen)." ── 《孔子家語‧辨樂解第三十五》 (Kongzi Jiayu, "On Discerning Music", ch. 35)

─── from 《SONIC Π 知音?!》

 

Training neural networks depends on large amounts of data, and that is precisely what makes things hard for anyone teaching or learning the subject. Hence so many canonical examples are built on the open MNIST database!

Keras is no exception in this regard:

Keras examples directory

Vision models examples

mnist_mlp.py Trains a simple deep multi-layer perceptron on the MNIST dataset.

mnist_cnn.py Trains a simple convnet on the MNIST dataset.

cifar10_cnn.py Trains a simple deep CNN on the CIFAR10 small images dataset.

cifar10_resnet.py Trains a ResNet on the CIFAR10 small images dataset.

conv_lstm.py Demonstrates the use of a convolutional LSTM network.

image_ocr.py Trains a convolutional stack followed by a recurrent stack and a CTC logloss function to perform optical character recognition (OCR).

mnist_acgan.py Implementation of AC-GAN (Auxiliary Classifier GAN) on the MNIST dataset

mnist_hierarchical_rnn.py Trains a Hierarchical RNN (HRNN) to classify MNIST digits.

mnist_siamese.py Trains a Siamese multi-layer perceptron on pairs of digits from the MNIST dataset.

mnist_swwae.py Trains a Stacked What-Where AutoEncoder built on residual blocks on the MNIST dataset.

mnist_transfer_cnn.py Transfer learning toy example.
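For a sense of scale, the gist of the first script in that list, mnist_mlp.py, fits in a dozen lines. The following is a trimmed sketch in the spirit of that example (fewer epochs than the official script), not a verbatim copy:

```python
# Sketch in the spirit of keras/examples/mnist_mlp.py: a small MLP on MNIST.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dropout(0.2),
    Dense(512, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_data=(x_test, y_test))
print('Test accuracy:', model.evaluate(x_test, y_test, verbose=0)[1])
```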

 

Reflecting on how Confucius concentrated on a single piece when learning the qin, let us borrow the series beginning with

W!O+ 的《小伶鼬工坊演義》︰神經網絡【MNIST】一

as material for illustration, in the hope of inferring ten things from hearing one.


Education and Learning: Up《grade》【6.2】

How can one learn Keras and get up to speed quickly?

Why not start with a few articles in which François Chollet speaks in his own words, to gain an overall picture first!

Introducing Keras 2


Keras was released two years ago, in March 2015. It then proceeded to grow from one user to one hundred thousand.

Keras user growth

Hundreds of people have contributed to the Keras codebase. Many thousands have contributed to the community. Keras has enabled new startups, made researchers more productive, simplified the workflows of engineers at large companies, and opened up deep learning to thousands of people with no prior machine learning experience. And we believe this is just the beginning.

Now we are releasing Keras 2, with a new API (even easier to use!) that brings consistency with TensorFlow. This is a major step in preparation for the integration of the Keras API in core TensorFlow.

Many things have changed. This is your quick summary.


TensorFlow integration

Although Keras has supported TensorFlow as a runtime backend since December 2015, the Keras API had so far been kept separate from the TensorFlow codebase. This is changing: the Keras API will now become available directly as part of TensorFlow, starting with TensorFlow 1.2. This is a big step towards making TensorFlow accessible to its next million users.

Keras is best understood as an API specification, not as a specific codebase. In fact, going forward there will be two separate implementations of the Keras spec: the internal TensorFlow one, available as tf.keras, written in pure TensorFlow and deeply compatible with all TensorFlow functionality, and the external multi-backend one supporting both Theano and TensorFlow (and likely even more backends in the future).
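One way to read this paragraph: the same model definition runs against either implementation of the spec, and essentially only the import line changes. A minimal sketch, with arbitrary placeholder layer sizes and data shapes:

```python
# The same Keras-spec model, built against the TensorFlow-internal implementation.
# Swapping these calls for `from keras.models import Sequential` / `from keras.layers
# import Dense` targets the external multi-backend package instead.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```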

Similarly, Skymind is implementing part of the Keras spec in Scala as ScalNet, and Keras.js is implementing part of the Keras API in JavaScript, to be run in the browser. As such, the Keras API is meant to become the lingua franca of deep learning practitioners, a common language shared across many different workflows, independent of the underlying platform. A unified API convention like Keras helps with code sharing and research reproducibility, and it allows for larger support communities.

………

User experience design for APIs

Writing code is rarely just a private affair between you and your computer. Code is not just meant for machines; it has human users. It is meant to be read by people, used by other developers, maintained and built upon. Developers produce better code, in greater quantity, when they are kept happy and productive, working with tools they love. Unfortunately, developers are often let down by their tools, left cursing at obscure error messages, wondering why that stupid library doesn’t do what they thought it would. Our tools have great potential to cause us pain, especially in a field as complex as software engineering.

User experience (UX) should be central in application programming interface (API) design. A well-designed API, making complicated tasks feel easy, will probably prevent a lot more pain in this world than a brilliant new design for a bedside lamp ever would. So why does API UX design so often feel like an afterthought, compared to even furniture design? Why is there a profound lack of design culture among developers?

keep the user in mind


Part of it is simply empathic distance. While you’re writing code alone in front of your computer, future users are a distant thought, an abstract notion. It’s only when you start sitting down next to your users and watch them struggle with your API that you start to realize that UX matters. And, let’s face it, most API developers never do that.

Another problem is what I would call “smart engineer syndrome”. Programmers tend to assume that end users have sufficient background and context — because they themselves do. But in fact, end users know a tiny fraction of what you know about your own API and its implementation. Besides, smart engineers tend to overcomplicate what they build, because they can easily handle complexity. If you aren’t exceptionally bright, or if you are impatient, that fact puts a hard limit on how complicated your software can be — past a certain level, you simply won’t be able to get it to work, so you’ll just quit and start over with a cleaner approach. But smart, patient people? They can just deal with the complexity, and they build increasingly ugly Frankenstein monsters that somehow still walk. This results in the worst kind of API.

One last issue is that some developers force themselves to stick with user-hostile tools, because they perceive the extra difficulty as a badge of honor, and consider thoughtfully-designed tools to be “for the n00bs”. This is an attitude I see a lot in the more toxic parts of the deep learning community, where most things tend to be fashion-driven and superficial. But ultimately, this masochistic posturing is self-defeating. In the long run, good design wins, because it makes its adepts more productive and more impactful, thus spreading faster than user-hostile undesign. Good design is infectious.

Like most things, API design is not complicated, it just involves following a few basic rules. They all derive from a founding principle: you should care about your users. All of them. Not just the smart ones, not just the experts. Keep the user in focus at all times. Yes, including those befuddled first-time users with limited context and little patience. Every design decision should be made with the user in mind.

Here are my three rules for API design.


1 – Deliberately design end-to-end user workflows.

Most API developers focus on atomic methods rather than holistic workflows. They let users figure out end-to-end workflows through evolutionary happenstance, given the basic primitives they provided. The resulting user experience is often one long chain of hacks that route around technical constraints that were invisible at the level of individual methods.

To avoid this, start by listing the most common workflows that your API will be involved in. The use cases that most people will care about. Actually go through them yourself, and take notes. Better yet: watch a new user go through them, and identify pain points. Ruthlessly iron out those pain points. In particular:

  • Your workflows should closely map to domain-specific notions that users care about. If you are designing an API for cooking burgers, it should probably feature unsurprising objects such as “patty”, “cheese”, “bun”, “grill”, etc. And if you are designing a deep learning API, then your core data structures and their methods should closely map to the concepts used by people familiar with the field: models/networks, layers, activations, optimizers, losses, epochs, etc.
  • Ideally, no API element should deal with implementation details. You do not want the average user to deal with “primary_frame_fn”, “defaultGradeLevel”, “graph_hook”, “shardedVariableFactory”, or “hash_scope”, because these are not concepts from the underlying problem domain, they are highly specific concepts that come from your internal implementation choices.
  • Deliberately design the user onboarding process. How are complete newcomers going to find out the best way to solve their use case with your tool? Have an answer ready. Make sure your onboarding material closely maps to what your users care about: don’t teach newcomers how your API is implemented, teach them how they can use it to solve their own problems.
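As a concrete illustration of the first bullet above, here is what "mapping to domain-specific notions" looks like for a deep learning API: every name a newcomer touches is a field concept (model, layer, activation, optimizer, loss, epochs), and nothing from the implementation leaks through. This is a hedged sketch in Keras style, not an excerpt from the post:

```python
# The user-facing vocabulary maps one-to-one onto domain concepts:
# model, layer, activation, optimizer, loss, epochs -- nothing about
# graph hooks, variable factories, or other implementation details.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()                                          # "model/network"
model.add(Dense(32, activation='relu', input_dim=8))          # "layer", "activation"
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy')    # "optimizer", "loss"
# model.fit(x, y, epochs=10)                                  # "epochs"
```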

………

Building Autoencoders in Keras

In this tutorial, we will answer some common questions about autoencoders, and we will cover code examples of the following models:

  • a simple autoencoder based on a fully-connected layer
  • a sparse autoencoder
  • a deep fully-connected autoencoder
  • a deep convolutional autoencoder
  • an image denoising model
  • a sequence-to-sequence autoencoder
  • a variational autoencoder

Note: all code examples have been updated to the Keras 2.0 API on March 14, 2017. You will need Keras version 2.0.0 or higher to run them.


What are autoencoders?

Autoencoder: schema

“Autoencoding” is a data compression algorithm where the compression and decompression functions are 1) data-specific, 2) lossy, and 3) learned automatically from examples rather than engineered by a human. Additionally, in almost all contexts where the term “autoencoder” is used, the compression and decompression functions are implemented with neural networks.

1) Autoencoders are data-specific, which means that they will only be able to compress data similar to what they have been trained on. This is different from, say, the MPEG-2 Audio Layer III (MP3) compression algorithm, which only holds assumptions about “sound” in general, but not about specific types of sounds. An autoencoder trained on pictures of faces would do a rather poor job of compressing pictures of trees, because the features it would learn would be face-specific.

2) Autoencoders are lossy, which means that the decompressed outputs will be degraded compared to the original inputs (similar to MP3 or JPEG compression). This differs from lossless arithmetic compression.

3) Autoencoders are learned automatically from data examples, which is a useful property: it means that it is easy to train specialized instances of the algorithm that will perform well on a specific type of input. It doesn’t require any new engineering, just appropriate training data.

To build an autoencoder, you need three things: an encoding function, a decoding function, and a distance function that measures the information lost between your original data and its decompressed reconstruction (i.e. a “loss” function). The encoder and decoder will be chosen to be parametric functions (typically neural networks), and to be differentiable with respect to the distance function, so the parameters of the encoding/decoding functions can be optimized to minimize the reconstruction loss, using stochastic gradient descent. It’s simple! And you don’t even need to understand any of these words to start using autoencoders in practice.
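As a taste of the first item in the list above (a simple autoencoder built on a single fully-connected layer), here is a minimal sketch along the lines the tutorial describes, assuming 784-dimensional MNIST vectors and an arbitrary 32-dimensional code; see the tutorial itself for the full, tested version:

```python
# Minimal fully-connected autoencoder sketch (Keras 2 functional API).
# Encoder: 784 -> 32, decoder: 32 -> 784; the loss measures reconstruction error.
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32                       # size of the compressed representation (assumed)
inputs = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(inputs)   # encoding function
decoded = Dense(784, activation='sigmoid')(encoded)        # decoding function

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
#                 shuffle=True, validation_data=(x_test, x_test))
```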

………


Education and Learning: Up《grade》【6.1】

Who, then, is the bellwether leading the Keras flock?

Keras

Keras is an open source neural network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or MXNet.[1] Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System),[2] and its primary author and maintainer is François Chollet, a Google engineer.

In 2017, Google’s TensorFlow team decided to support Keras in TensorFlow’s core library. Chollet explained that Keras was conceived to be an interface rather than a standalone machine-learning framework. It offers a higher-level, more intuitive set of abstractions that make it easy to develop deep learning models regardless of the computational backend used.[3] Microsoft added a CNTK backend to Keras as well, available as of CNTK v2.0.[4][5]

 

And why he wrote Deep Learning with Python!

Hugo Bowne-Anderson
December 18th, 2017
DEEP LEARNING


An Interview with François Chollet

DataCamp’s Hugo Bowne-Anderson interviewed Keras creator and Google AI researcher François Chollet about his new book, “Deep Learning with Python”.

 

François Chollet is an AI & deep learning researcher, author of Keras, a leading deep learning framework for Python, and has a new book out, Deep Learning with Python. To coincide with the release of this book, I had the pleasure of interviewing François via e-mail. Feel free to reach out to us at @fchollet and @hugobowne.

 

Before stepping into the "hall of AI", let us first pass through the baptism of the "intelligence explosion" debate:

Medium

The impossibility of intelligence explosion

Transcendence (2014 science-fiction movie)

In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment — as seen in the science-fiction movie Transcendence (2014), for instance. Superintelligence would thus imply near-omnipotence, and would pose an existential threat to humanity.

This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems. I attempt to base my points on concrete observations about intelligent systems and recursive systems.

………

 

A reply to Francois Chollet on intelligence explosion

Analysis

This is a reply to Francois Chollet, the inventor of the Keras wrapper for the TensorFlow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted:

If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?

And he earlier tweeted:

Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.

Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.

My reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion.

………

 

It leaves one pondering how "necessity meets chance"; what is possible may be hard to say★

 

[Images: the characters 改 ("correct") and 變 ("change") in bronze and large-seal script]

The aim of education:

to 改 (correct) the "fallacies" of our thinking, and to 變 (transform) the "biases" of our habits.

[Images: Monty Hall; the Let's Make a Deal prime-time set; a "zonk" goat prize]

In 1963 the American network NBC first aired "Let's Make a Deal", a deal-making game show presented by Jay Stewart and Monty Hall. It has appeared in many versions; a typical round runs as follows:

The host shows the contestant three doors. Behind one of them is the grand prize; behind the other two are consolation prizes. The host, of course, knows in advance what stands behind each door. The game unfolds in three stages:
1. The contestant first picks one door.
2. The host opens one of the two unchosen doors that hides a consolation prize.
3. The host then asks the contestant: do you stick with your "original door", or do you switch to "the other unopened door"? The puzzling, wearying "zonk" moment has arrived!
So which is better:
"to switch, or not to switch"??

A "rational" thinker might argue as follows:

At the outset each door hides the grand prize with probability \frac{1}{3}, so the chance of having "picked the right door" is \frac{1}{3} and the chance that the prize sits "behind an unpicked door" is \frac{2}{3}. The host has now opened one unpicked door with no grand prize behind it; this piece of "information" concentrates that \frac{2}{3} onto the single remaining unpicked door. So of course "switching is better"!

A more "intuitive", luck-minded player might feel otherwise:

If I am "going to win", my first pick already won; if I am "not going to win", switching "is no use either". So better to stand pat and "not switch"!

In phenomena of "chance", people extract a "long-run probability" from many repetitions. When that probability is brought to bear on "this one" choice, can the fact that "actually occurs" really be "settled" by it? And if it can, why do people also believe that "Murphy's law" is true??
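One way to adjudicate between the "rational" argument and the "intuitive" feeling is simply to play the game many times and compare the empirical win rates of switching and staying. A small simulation sketch (the trial count is arbitrary):

```python
# Monte Carlo check of the Monty Hall argument: switching should win about 2/3 of the time.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)                 # door hiding the grand prize
        pick = random.randrange(3)                  # contestant's first pick
        # host opens a door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print("stay  :", play(switch=False))   # ~0.333
print("switch:", play(switch=True))    # ~0.667
```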

………

In 1990 Arnold Zuboff of the University of London published a paper written in 1986, "One Self: The Logic of Experience", which posed the "Sleeping Beauty problem".

[Images: paintings of Sleeping Beauty (Brewtnall; Dornröschen)]

Sleeping Beauty, having had the details explained to her, volunteers for the following experiment:

On Sunday she is put to sleep. In the course of the experiment she will be awakened once or twice, and an amnesia-inducing drug ensures that she will not remember having been awakened. A fair coin is tossed to decide which procedure the experiment follows:

If the coin comes up "heads", she is awakened and interviewed on "Monday" only.
If the coin comes up "tails", she is awakened and interviewed on both "Monday" and "Tuesday".

In either case she is finally awakened on "Wednesday", and the experiment ends without an interview. Each time she is awakened and interviewed, she is asked: what is your present "degree of belief" that "the coin came up heads"?

The question is debated to this day: "thirders" answer \frac{1}{3}, "halfers" answer \frac{1}{2}. Can Sleeping Beauty really have one "correct answer"? A single toss of a two-sided coin gives rise to one or two days of interviews; how should the "probability" here be thought about, in prior or in posterior terms? Ordinary probability theory measures the "relative frequency of occurrence" over a sample space of "all possible situations". Where no such measurement is available, one may fall back on an assumption of "indifference", that is, of "indistinguishability", and take the relative frequencies to be equal. The "sample space" and the "measure assumption" are therefore precisely where the dispute arises. Suppose we take the coin-outcome set {heads, tails} and the interview-day set {Monday, Tuesday}, and look at the events of this problem from the fair coin's point of view:

P[heads, Monday] = \frac{1}{2}
P[heads, Tuesday] = 0
P[tails, Monday] = \frac{1}{4}
P[tails, Tuesday] = \frac{1}{4}

This "P[heads, Tuesday] = 0" is the focal point of the dispute, because it is an event that "cannot" occur. From the standpoint of sampling empirical events, perhaps it should simply be dropped when setting up the "sample space"; but then why should such an "observer" not assume that "all the events that can occur" share the same probability \frac{1}{3}??
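The halfer/thirder split can be made concrete with a little bookkeeping: toss the coin many times, record every awakening, and ask what fraction of awakenings happen under heads. Note that this sketch takes "an awakening" as the sampling unit, which is exactly the thirder's choice of sample space; counting per coin toss instead yields the halfer's answer.

```python
# Count awakenings rather than coin tosses: this reproduces the thirder's 1/3,
# precisely because "being an awakening" is taken as the sampling event.
import random

def simulate(trials=100_000):
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2      # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(simulate())   # ~0.333; counting per toss instead gives the halfer's 0.5
```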

─── excerpted from 《改不改??變不變!!》


Education and Learning: Up《grade》【6】

Living in an era of rapid change, perhaps we should choose our "API backbone" all the more carefully?

Why use Keras?

There are countless deep learning frameworks available today. Why use Keras rather than any other? Here are some of the areas in which Keras compares favorably to existing alternatives.


Keras prioritizes developer experience

  • Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error.
  • This makes Keras easy to learn and easy to use. As a Keras user, you are more productive, allowing you to try more ideas than your competition, faster — which in turn helps you win machine learning competitions.
  • This ease of use does not come at the cost of reduced flexibility: because Keras integrates with lower-level deep learning languages (in particular TensorFlow), it enables you to implement anything you could have built in the base language. In particular, as tf.keras, the Keras API integrates seamlessly with your TensorFlow workflows.

Keras has broad adoption in the industry and the research community

With over 200,000 individual users as of November 2017, Keras has stronger adoption in both the industry and the research community than any other deep learning framework except TensorFlow itself (and Keras is commonly used in conjunction with TensorFlow).

You are already constantly interacting with features built with Keras — it is in use at Netflix, Uber, Yelp, Instacart, Zocdoc, Square, and many others. It is especially popular among startups that place deep learning at the core of their products.

Keras is also a favorite among deep learning researchers, coming in #2 in terms of mentions in scientific papers uploaded to the preprint server arXiv.org:

Keras has also been adopted by researchers at large scientific organizations, in particular CERN and NASA.


Keras makes it easy to turn models into products

Your Keras models can be easily deployed across a greater range of platforms than any other deep learning framework:

……

 

If one were to rely only on the manual and the guide

Keras: The Python Deep Learning library

You have just found Keras.

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:

  • Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two.
  • Runs seamlessly on CPU and GPU.

Read the documentation at Keras.io.

Keras is compatible with: Python 2.7-3.6.

………

 

it might not be easy to rein in "AI applications"!

Hence the hope that, by reading an intermediary Keras book on deep learning,

《 Deep Learning with Python (Manning Publications) 》

and working through its "practice examples",

Companion Jupyter notebooks for the book “Deep Learning with Python”

This repository contains Jupyter notebooks implementing the code samples found in the book Deep Learning with Python (Manning Publications). Note that the original text of the book features far more content than you will find in these notebooks, in particular further explanations and figures. Here we have only included the code samples themselves and immediately related surrounding comments.

These notebooks use Python 3.6 and Keras 2.0.8. They were generated on a p2.xlarge EC2 instance.

───

 

one can quickly grow fluent in the ways its concepts are "expressed"☆

