W!o+'s 《小伶鼬工坊演義》: Neural Networks【Deep Learning】, Part One

Those who have seen little marvel at much; seeing a camel, they call it a horse with a swollen back. Han Yu wrote the five "Yuan" essays: On the Origin of the Way (《原道》), On the Origin of Human Nature (《原性》), On the Origin of Man (《原人》), On the Origin of Slander (《原毀》), and On the Origin of Ghosts (《原鬼》). And the sequence closes with

【原鬼】(On the Origin of Ghosts)

Li Shi said: "Tuizhi's On the Origin of Ghosts stands side by side with the essay of Ruan Qianli of the Jin. Yet when he wrote the Luochi Stele he sought to overawe people with the authority of ghosts, begging offerings on Zihou's behalf; and though the Text Sending Off Poverty was written in jest, both pieces contradict his own doctrine. In the fourth year of Changqing, Tuizhi lay gravely ill, and the Emperor of Heaven is said to have sent a spirit to summon him, saying: 「骨使世與韓氏相仇,欲同力討之,天帝之兵欲行陰誅,乃更藉人力乎?」 (roughly: that a clan long at feud with the house of Han wished to join forces against him, and that if the Heavenly Emperor's troops meant to carry out a hidden execution, they would hardly need to borrow human strength). This must have been Tuizhi, his fortune spent and his judgment disordered, being taken advantage of by ghosts; otherwise, all his lifelong strident preaching proved useless at his death."

Something howls on the roof-beam (one text adds 「者」 after both 「於梁」 and 「於堂」); I take a candle to it and see nothing. Is that a ghost? I say: no, for ghosts have no sound. Something stands in the hall; I go and look and see nothing. Is that a ghost? I say: no, for ghosts have no form. Something brushes against my body; I reach out to seize it and grasp nothing. Is that a ghost? I say: no; ghosts have neither sound nor form, so how could they have qi? (Before 「鬼無聲與形」 some texts add the three characters 「鬼無氣」, wrongly.) One asks: if ghosts have no sound, no form, and no qi, are there then really no ghosts? I say: there are things that have form but no sound, such as earth and stone; things that have sound but no form, such as wind and thunder; things that have both sound and form, such as men and beasts; and things that have neither sound nor form, such as ghosts and spirits. (Li Shi comments: "Prince Pengsheng lodged his form in a boar, and Duke Wen of Jin lent out a voice like an ox's; when Master Han says ghosts have neither sound nor form, he has not exhausted the matter.") One asks: then what of prodigies that come into contact with people and things? I say: of these there are two kinds: ghosts, and creatures. (For 「有怪」 some texts read 「見怪」; after 「二」 some add the character 「說」; some add 「說」 and omit the four characters 「有鬼有物」.) To be still and without form or sound is the constant state of ghosts. But when people offend against Heaven and go against their fellow men (the first 「民」 is 「人」 in one text; the second is 「時」 in some texts), wrong creatures, violate the proper order, and thereby stir the qi, then ghosts take shape in visible forms (for 「有形」 some texts read 「有托」), borrow voices to answer them, and send down calamity and misfortune; all of this men bring about themselves (some texts omit 「之」 after 「為」). When it has passed, they revert again to their constant state. One asks: what do you mean by creatures? I say: what is complete in form and sound is earth and stone, wind and thunder, men and beasts; what reverts to being without sound and form is ghosts and spirits (for 「反乎」 some texts read 「反其」, wrongly); what can neither fully have form and sound nor fully be without them is the prodigy (物怪). (Some texts lack the six characters 「不能有形與聲」; others lack the six characters 「不能無形與聲」.) Hence when such things arise and come into contact with people there is no constancy: some stir among men and work calamity, and some stir among men and work blessing (some editions put 「為福」 first). (Note: according to the Zuo Commentary and the Guoyu, in the fifteenth year of King Hui of Zhou a spirit descended at Shen; the king asked Scribe Guo about it, who replied, in effect, that states have gained such spirits and risen, and have also perished by them: when Xia rose, Zhurong descended on Mount Chong, and when it fell, Huilu lingered at Lingsui; when Shang rose, Taowu rested on Mount Pi, and when it fell, the yiyang beast appeared in the pastures; when Zhou rose, the yuezhuo bird sang on Mount Qi, and when it declined, Du Bo shot the king at Hao. "Stirring among men and working calamity or blessing": is this not what is meant?) And some stir among men yet work neither calamity nor blessing, merely happening to coincide with the moment the people are in. Thus was written On the Origin of Ghosts. (The Ge, Shu, and Cui editions lack the character 「作」. Note: in ancient books the title often stands at the end, as with the rhapsodies in the Xunzi; but since this piece already bears its title at the head, it ought not to be repeated, so for now we follow the other editions and keep 「作」.)

 

Hoping to welcome the God of Wealth (財神) on the fifth day of the New Year!! And on the sixth, thinking of Sending Off Poverty (《送窮》)??

Since ancient times, once the verdicts "hard" (難) and "easy" (易) and the markers "first" (先) and "later" (後) are attached to the two words "knowing" (知) and "doing" (行), who knows how many essays have been spun?? Truly, "doing the Way is hard"!!

【行難】(The Difficulty of Doing)

行 is read with the fanqie 下孟 (the departing tone, xìng). Han Yu's Letter to Lu Can, Vice-Director in the Bureau of Sacrifices, dates from the eighteenth year of Zhenyuan. This piece says that Can was summoned from Yuezhou and appointed Vice-Director in the Bureau of Sacrifices; could it then be earlier than that letter? Can's courtesy name is said to have been Gongzuo.

Someone asked, "What is hardest to do?" The answer: "To set aside one's own pride and follow what others commend: who can manage that?" "And what of Master Lu Can?" (Note: in the collected works of Li Xizhi, 參 is written 人參.) "The Master's worth is known throughout the realm (some texts add 於 after 聞); he approves what is right and condemns what is wrong. In the Zhenyuan era he was summoned from Yuezhou and appointed Vice-Director in the Bureau of Sacrifices. The people of the capital called on him daily, and those he shut his gate against filled the street. I, Yu, once went and sat among his guests (for 嘗 some texts read 常; for 間, 問; for 客, 賓; after 席 some add the two characters 坐定). The Master said boastfully to his guests: 'So-and-so was a petty clerk and so-and-so a merchant; while the one lived I recommended him, and when the other died I composed his eulogy; this man and that are acceptable men (for 可 some texts read 何; some follow the Ge, Hang, and Yuan editions in reading 可, citing 可人 in the Book of Rites, where Zheng's commentary glosses it as "this man will do," and on the evidence of the Rites that is correct; yet from Han Yu's words below it seems that Master Lu, though he had recommended and eulogized these men, still half-suspected himself of being at fault, rather looked down on their origins, and took much credit for having advanced them, so the reading probably ought to be 何 for the sentence to hang together; this deserves further study). Was recommending and eulogizing them any offense?' All said, 'No.' (For 也 some texts read 之; for 罪, 過; before 曰 some add 應.) I said: 'That clerk and that merchant: was there good reason they obtained your recommendation and eulogy, or were they in fact at fault and not worth recommending and eulogizing?' (For 任而誄 some texts read 誄而任; for 而, 與.) The Master said: 'No. What I dislike is their origins (惡 is in the departing tone); apart from that, what is there to blame in the recommending and the eulogizing?' I said: 'If that is so, then the Master's words go too far! Of old, Guan Jingzi took two brigands and made them officers under his lord (the Book of Rites: "Guan Zhong, encountering brigands, took two of them and raised them to be his prince's ministers, saying: it was only the company they fell in with; they are acceptable men." Jingzi was Guan Zhong's posthumous title), and Zhao Wenzi raised up more than seventy families from among the keepers of stores (the Book of Rites: "Those whom Zhao Wenzi advanced in Jin included more than seventy families of store-keepers"). Why ask after a man's origins?' (惡 here is read wū.) The Master said: 'Not so; those they chose were worthy.' I said: 'Are the worthy of whom the Master speaks the greatly worthy, or merely worthier than ordinary men? Qi and Jin alone could furnish two, and seventy-odd; can it be said that today's whole realm has no such men? (Before 而可 some texts add 焉; before 邪 some add 也.) The Master is exceedingly exacting in choosing men.' The Master said: 'True.' I said: 'A sage does not appear in every generation, nor a worthy in every age; in a thousand or a hundred years there may perhaps be one (for the two 人 in 聖人賢人 some texts read 之, or have both 人之; for 世出 some read 世生; for 百歲, 百年). If by misfortune such a man should be born into a family of clerks or merchants, then, should the Master's doctrine spread, I could not bear that the infant be denied milk at its mother's breast!' The Master said: 'True.' (Some texts omit 於 in 乳於.) Another day I went again and sat with him (some texts omit 坐). The Master said: 'The employment of men nowadays is careless. Of those holding office at court I would accept only so-and-so and so-and-so; of those in lower posts there are more than at court; in all, those I approve of number so many.' I said: 'Do those the Master approves of end with these? Are they all worthy? Or is it rather that you cite where a man's merits are many and pass over where they are few?' (Some texts omit the four characters 其皆賢乎; for 缺 some read 沒; for 少, 細 or 一; after 少 some add 者. Note: this means that a man's talents may not be complete in every respect, so one cites the much in him worth taking and passes over the little worth discarding.) The Master said: 'Just so; how would I dare demand completeness in a man?' (For 其 some texts read 於. Note: 其 fits the sense better; but this sentence of Master Lu's precisely declares that he dare not insist on complete talent, while below Han Yu again faults him for being too exacting and not planning early, which is hard to make sense of and deserves further examination.) I said: 'From the chancellor down to the hundred officers, how many posts are there? From a region down to a prefecture, how many posts? Will those the Master has found not fall short of filling them? (After 其位 some texts add 也.) If one does not plan for this early but tries to raise them all up in a single morning, then, however exacting one is now, their later employment is bound to be crude.' (For 舉焉 some texts read 索之; after 詳 some add 且微, wrongly. 粗 is read cū.) The Master said: 'True. Your words, sir, Meng Ke himself could not surpass.'" (The Wenlu reads: "Withdrawing, he said to his people: only now have I seen a Meng Ke.")

 

Mr. Michael Nielsen has devoted himself to clearing away the "stones blocking the road"!!??

In the last chapter we learned that deep neural networks are often much harder to train than shallow neural networks. That’s unfortunate, since we have good reason to believe that if we could train deep nets they’d be much more powerful than shallow nets. But while the news from the last chapter is discouraging, we won’t let it stop us. In this chapter, we’ll develop techniques which can be used to train deep networks, and apply them in practice. We’ll also look at the broader picture, briefly reviewing recent progress on using deep nets for image recognition, speech recognition, and other applications. And we’ll take a brief, speculative look at what the future may hold for neural nets, and for artificial intelligence.

The chapter is a long one. To help you navigate, let’s take a tour. The sections are only loosely coupled, so provided you have some basic familiarity with neural nets, you can jump to whatever most interests you.

The main part of the chapter is an introduction to one of the most widely used types of deep network: deep convolutional networks. We’ll work through a detailed example – code and all – of using convolutional nets to solve the problem of classifying handwritten digits from the MNIST data set:
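As a rough preview of what such a network looks like in code, here is a minimal sketch of a small convolutional network trained on MNIST. It uses the tf.keras API rather than the library developed in the chapter itself, and the layer sizes, optimizer, and training settings are illustrative assumptions, not the chapter's actual configuration:

```python
# A minimal sketch (not the chapter's own code) of a convolutional network
# for MNIST, written against the tf.keras API. Architecture and
# hyperparameters here are illustrative assumptions.
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test images, each 28x28 pixels.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # scale to [0, 1] and add a channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(20, kernel_size=5, activation="relu"),  # convolutional layer
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),                 # fully connected layer
    tf.keras.layers.Dense(10, activation="softmax"),               # one output per digit
])

model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64,
          validation_data=(x_test, y_test))
```

Even this toy version reaches high accuracy on the test set after a few epochs; the chapter's point is how the convolutional and pooling layers, and the refinements listed below, push that accuracy much further.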

We’ll start our account of convolutional networks with the shallow networks used to attack this problem earlier in the book. Through many iterations we’ll build up more and more powerful networks. As we go we’ll explore many powerful techniques: convolutions, pooling, the use of GPUs to do far more training than we did with our shallow networks, the algorithmic expansion of our training data (to reduce overfitting), the use of the dropout technique (also to reduce overfitting), the use of ensembles of networks, and others. The result will be a system that offers near-human performance. Of the 10,000 MNIST test images – images not seen during training! – our system will classify 9,967 correctly. Here’s a peek at the 33 images which are misclassified. Note that the correct classification is in the top right; our program’s classification is in the bottom right:
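One of the techniques listed above, the algorithmic expansion of the training data, is easy to illustrate on its own. The sketch below is a guess at the simplest version of the idea rather than the chapter's exact procedure: each 28x28 training image is copied four times, displaced by a single pixel up, down, left, and right, enlarging the training set fivefold:

```python
# A minimal sketch of algorithmic training-data expansion for MNIST-style
# images: each 28x28 image gains four copies displaced by one pixel.
# The array shapes and the choice of single-pixel shifts are illustrative
# assumptions, not the chapter's exact procedure.
import numpy as np

def expand_images(images, labels):
    """Return the original images plus four one-pixel-shifted copies of each."""
    expanded_images = [images]
    expanded_labels = [labels]
    # (shift, axis): for an (N, 28, 28) array, axis 1 is rows, axis 2 is columns.
    for shift, axis in [(1, 1), (-1, 1), (1, 2), (-1, 2)]:
        shifted = np.roll(images, shift, axis=axis)
        # Zero out the row/column that np.roll wrapped around from the far edge.
        index = 0 if shift == 1 else -1
        if axis == 1:
            shifted[:, index, :] = 0
        else:
            shifted[:, :, index] = 0
        expanded_images.append(shifted)
        expanded_labels.append(labels)
    return np.concatenate(expanded_images), np.concatenate(expanded_labels)

# Example: a batch of 1,000 images becomes 5,000 after expansion.
images = np.random.rand(1000, 28, 28)
labels = np.random.randint(0, 10, size=1000)
big_images, big_labels = expand_images(images, labels)
print(big_images.shape, big_labels.shape)   # (5000, 28, 28) (5000,)
```

The shifted copies carry the same labels as the originals, so the network sees more examples of each digit without any new data being collected, which is why the expansion helps reduce overfitting.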

Many of these are tough even for a human to classify. Consider, for example, the third image in the top row. To me it looks more like a “9” than an “8”, which is the official classification. Our network also thinks it’s a “9”. This kind of “error” is at the very least understandable, and perhaps even commendable. We conclude our discussion of image recognition with a survey of some of the spectacular recent progress using networks (particularly convolutional nets) to do image recognition.

The remainder of the chapter discusses deep learning from a broader and less detailed perspective. We’ll briefly survey other models of neural networks, such as recurrent neural nets and long short-term memory units, and how such models can be applied to problems in speech recognition, natural language processing, and other areas. And we’ll speculate about the future of neural networks and deep learning, ranging from ideas like intention-driven user interfaces, to the role of deep learning in artificial intelligence.

The chapter builds on the earlier chapters in the book, making use of and integrating ideas such as backpropagation, regularization, the softmax function, and so on. However, to read the chapter you don’t need to have worked in detail through all the earlier chapters. It will, however, help to have read Chapter 1, on the basics of neural networks. When I use concepts from Chapters 2 to 5, I provide links so you can familiarize yourself, if necessary.
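As a quick refresher on one of those earlier ideas, the softmax function turns a layer's raw outputs into a probability distribution over the ten digit classes. Here is a small, self-contained illustration (my own sketch, not code from the book):

```python
# A small, numerically stable softmax: raw outputs ("logits") become
# probabilities that are positive and sum to 1.
import numpy as np

def softmax(z):
    z = z - np.max(z)          # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)          # approximately [0.659, 0.242, 0.099]
print(probs.sum())    # 1.0
```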

It’s worth noting what the chapter is not. It’s not a tutorial on the latest and greatest neural networks libraries. Nor are we going to be training deep networks with dozens of layers to solve problems at the very leading edge. Rather, the focus is on understanding some of the core principles behind deep neural networks, and applying them in the simple, easy-to-understand context of the MNIST problem. Put another way: the chapter is not going to bring you right up to the frontier. Rather, the intent of this and earlier chapters is to focus on fundamentals, and so to prepare you to understand a wide range of current work.

The chapter is currently in beta. I welcome notification of typos, bugs, minor errors, and major misconceptions. Please drop me a line at mn@michaelnielsen.org if you spot such an error.