
The World of Light: Geometrical Optics (I)

Suppose we say that "geometrical optics" describes how "light rays" propagate through different media:

Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction, or instrument, useful in approximating the paths along which light propagates in certain classes of circumstances.

The simplifying assumptions of geometrical optics include that light rays:

  • propagate in rectilinear paths as they travel in a homogeneous medium
  • bend, and in particular circumstances may split in two, at the interface between two dissimilar media
  • follow curved paths in a medium in which the refractive index changes
  • may be absorbed or reflected.

Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations.

───

 

First of all, we must understand what a "light ray" actually is. If we liken it to a laser beam,

LASER

Red (660 & 635 nm), green (532 & 520 nm) and blue-violet (445 & 405 nm) lasers

 

that may be barely satisfactory, yet something vague always lingers. It is like asking someone to define "mass": if one merely says that "mass" is "the quantity of matter", the meaning is still hard to pin down! But invoke Newton's second law of motion \vec F = m \cdot \vec a and the precise meaning of "mass" becomes clear: m = \frac{|\vec F|}{|\vec a|}.

Hence the fundamental principle of "geometrical optics", Fermat's principle,

Fermat’s principle

In optics, Fermat’s principle or the principle of least time is the principle that the path taken between two points by a ray of light is the path that can be traversed in the least time. This principle is sometimes taken as the definition of a ray of light.[1] However, this version of the principle is not general; a more modern statement of the principle is that rays of light traverse the path of stationary optical length with respect to variations of the path.[2] In other words, a ray of light prefers the path such that there are other paths, arbitrarily nearby on either side, along which the ray would take almost exactly the same time to traverse.

Fermat’s principle can be used to describe the properties of light rays reflected off mirrors, refracted through different media, or undergoing total internal reflection. It follows mathematically from Huygens’ principle (at the limit of small wavelength). French mathematician Pierre de Fermat’s text Analyse des réfractions exploits the technique of adequality to derive Snell’s law of refraction[3] and the law of reflection.

Fermat’s principle has the same form as Hamilton’s principle and it is the basis of Hamiltonian optics.

[Figure: Snell's law]

Fermat’s principle leads to Snell’s law; when the sines of the angles in the different media are in the same proportion as the propagation velocities, the time to get from P to Q is minimized.

Modern version

The time T a point of the electromagnetic wave needs to cover a path between the points A and B is given by:

T=\int_{\mathbf{t_0}}^{\mathbf{t_1}} \, dt = \frac{1}{c} \int_{\mathbf{t_0}}^{\mathbf{t_1}} \frac{c}{v} \frac{ds}{dt}\, dt = \frac{1}{c} \int_{\mathbf{A}}^{\mathbf{B}} n\, ds\

c is the speed of light in vacuum, ds an infinitesimal displacement along the ray, v = ds/dt the speed of light in a medium and n = c/v the refractive index of that medium, t_{0} is the starting time (the wave front is at A), t_{1} is the arrival time at B. The optical path length of a ray from a point A to a point B is defined by:

  S=\int_{\mathbf{A}}^{\mathbf{B}} n\, ds\

and it is related to the travel time by S = cT. The optical path length is a purely geometrical quantity since time is not considered in its calculation. An extremum in the light travel time between two points A and B is equivalent to an extremum of the optical path length between those two points. The historical form proposed by Fermat is incomplete. A complete modern statement of the variational Fermat principle is that

“the optical length of the path followed by light between two fixed points, A and B, is an extremum. The optical length is defined as the physical length multiplied by the refractive index of the material.”[4]

In the context of calculus of variations this can be written as

\delta S= \delta\int_{\mathbf{A}}^{\mathbf{B}} n \, ds =0

In general, the refractive index is a scalar field of position in space, that is, n=n\left(x_1,x_2,x_3\right) \ in 3D Euclidean space. Assuming now that light has a component that travels along the x3 axis, the path of a light ray may be parametrized as s=\left(x_1\left(x_3\right),x_2\left(x_3\right),x_3\right) \ and

nds=n \frac{\sqrt{dx_1^2+dx_2^2+dx_3^2}}{dx_3}dx_3=n \sqrt{1+\dot{x}_1^2+\dot{x}_2^2} \ dx_3

where \dot{x}_k=dx_k/dx_3. The principle of Fermat can now be written as

\delta S= \delta\int_{x_{3A}}^{x_{3B}} n\left(x_1,x_2,x_3\right) \sqrt{1+\dot{x}_1^2+\dot{x}_2^2}\, dx_3
= \delta\int_{x_{3A}}^{x_{3B}} L\left(x_1\left(x_3\right),x_2\left(x_3\right),\dot{x}_1\left(x_3\right),\dot{x}_2\left(x_3\right),x_3\right)\, dx_3=0

which has the same form as Hamilton’s principle but in which x3 takes the role of time in classical mechanics. Function L\left(x_1,x_2,\dot{x}_1,\dot{x}_2,x_3\right) is the optical Lagrangian from which the Lagrangian and Hamiltonian (as in Hamiltonian mechanics) formulations of geometrical optics may be derived.[5]
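As a concrete illustration of this optical Lagrangian (a sketch of my own, not part of the quoted article), take the simplest case of a homogeneous medium, where n is constant: SymPy's euler_equations produces the ray equations, and the straight-line ansatz x_k = a_k x_3 + b_k satisfies them, as geometrical optics asserts.

# Euler-Lagrange equations of the optical Lagrangian for constant n
from sympy import symbols, Function, sqrt, euler_equations, simplify

x3 = symbols('x3')
n, a1, b1, a2, b2 = symbols('n a1 b1 a2 b2', positive=True)
x1, x2 = Function('x1'), Function('x2')

L = n * sqrt(1 + x1(x3).diff(x3)**2 + x2(x3).diff(x3)**2)
eqs = euler_equations(L, [x1(x3), x2(x3)], x3)

# straight-line ansatz x_k = a_k*x3 + b_k: both residuals simplify to zero
line = {x1(x3): a1*x3 + b1, x2(x3): a2*x3 + b2}
print([simplify(eq.lhs.subs(line).doit()) for eq in eqs])   # [0, 0]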

───

 

can likewise be used to define what a "light ray" is!! And with the further illumination of "Huygens' principle", perhaps we can picture how a "light ray" travels??

[Figures: Huygens' principle; envelope animation; plane-wave wavefronts; a lens reshaping wavefronts; refraction by the Huygens–Fresnel principle; Huygens' refracted waves; diffraction at an aperture by the Huygens–Fresnel principle; Fresnel diffraction patterns]

Huygens' principle
Every point on a "wavefront" may be regarded as a "point source" generating "spherical secondary waves", and the wavefront at any "later" moment can be taken as the "envelope" (a curve or a surface) of these "secondary waves of equal phase".

So what, then, is an "envelope"? Geometrically, the "envelope" of a "family of curves" is "tangent to" every curve of the family at some point. In 1734 the French mathematician Alexis Claude Clairaut proposed the equation y(x)=x\frac{dy}{dx}+f\left(\frac{dy}{dx}\right). Differentiating this equation once more with respect to x gives 0=\left(x+f'\left(\frac{dy}{dx}\right)\right)\frac{d^2 y}{dx^2}, so either 0=\frac{d^2 y}{dx^2} or 0=x+f'\left(\frac{dy}{dx}\right). If 0=\frac{d^2 y}{dx^2}, then \frac{dy}{dx} = C is a "constant"; substituting into the original equation yields the general solution, the "family of curves" y(x)=Cx+f(C). If instead 0=x+f'\left(\frac{dy}{dx}\right), its solution is the "envelope" of that family. For example, the figure below illustrates the case f(p) = p^2 (a quick SymPy check follows).

[Figure: solutions to Clairaut's equation with f(t) = t^2]
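A small SymPy sketch of my own, checking both kinds of solution for f(p) = p^2: the line family y = Cx + C^2 and its envelope y = -x^2/4, obtained from 0 = x + f'(p) (i.e. p = -x/2), each satisfy Clairaut's equation.

# Clairaut's equation y = x*y' + (y')**2, i.e. f(p) = p**2
from sympy import symbols, simplify

x, C = symbols('x C')

def clairaut_residual(y):
    """Residual y - x*y' - (y')**2; zero means y solves the equation."""
    yp = y.diff(x)
    return simplify(y - x*yp - yp**2)

print(clairaut_residual(C*x + C**2))   # 0 : the general solution (line family)
print(clairaut_residual(-x**2/4))      # 0 : the singular solution (envelope)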

Next, the shape of a "wavefront" can be altered by the "optical system" it passes through; and "equal phase" means that the "secondary waves" of the wavefront at time t have all traversed the "same" "time interval" \Delta t, forming the new wavefront at t^{'} = t + \Delta t.

With this principle Huygens gave "qualitative" explanations of the "rectilinear propagation" and "spherical propagation" of waves, and derived the "law of reflection" and the "law of refraction". But he could not explain why a light wave deviates from straight-line propagation, that is, exhibits "diffraction", when it meets a "sharp edge", a "small hole" or a "narrow slit". Moreover, "Huygens' principle" assumes that the "secondary waves" propagate only in the "forward direction"; he never explained why they may not travel backwards.

The French physicist Augustin Fresnel, one of the principal founders of the "wave theory of light", supplemented Huygens' principle with the hypothesis that these "secondary waves interfere with one another". This is what we now call the "Huygens–Fresnel principle", the flowering of "Huygens' principle" joined with the "principle of interference". In 1818 Fresnel submitted his memoir to the committee of the French Academy of Sciences. A committee member, Siméon Denis Poisson, argued that if Fresnel's theory held, then light shone upon a small circular obstacle would have to produce a bright spot at the centre of its shadow, and on that ground he judged the theory incorrect. Another member, François Jean Dominique Arago, performed the experiment himself; the outcome matched the prediction and vindicated Fresnel's theory. The experiment became powerful evidence for the wave theory of light and, together with Young's double-slit experiment, refuted the corpuscular theory championed by Newton.

─── Excerpted from "[Sonic π] The Propagation of Sound Waves: Principles, Part II"

 

The "normal vector" of that "wavefront" marks exactly the direction in which the "light ray" flows!!

Here, then, let us borrow the

Calculus of Variations

The calculus of variations is the field of mathematics that deals with functionals, as opposed to ordinary calculus, which deals with functions. Such a functional can, for instance, be constructed from an integral involving an unknown function and its derivatives. What the calculus of variations ultimately seeks are extremal functions: those that make the functional attain a maximum or a minimum. Some classical problems about curves take this form. One example is the brachistochrone: the path along which a particle falling under gravity travels in the shortest time from a point A to a point B not directly beneath it. Among all curves from A to B, one must minimize the expression representing the time of descent.

The key theorem of the calculus of variations is the Euler–Lagrange equation, which corresponds to stationary points of the functional. In seeking the maxima and minima of a function, analysing small variations about a candidate solution gives a first-order approximation; by itself it cannot tell whether a maximum or a minimum (or neither) has been found.

The calculus of variations is of great importance in theoretical physics: in Lagrangian mechanics, and in applications of the principle of least action in quantum mechanics. It also supplies the mathematical foundation of the finite element method, a powerful tool for solving boundary value problems, and is used extensively in materials science to study the equilibrium of materials. In pure mathematics, an example is Riemann's use of Dirichlet's principle for harmonic functions.

The same material can appear under different headings, such as Hilbert space techniques, Morse theory, or symplectic geometry. The term "variation" is applied to all problems of extremal functionals. The study of geodesics in differential geometry is a field of evidently variational character, and much work has also been done on minimal surfaces (soap films), known as Plateau's problem.

───

and apply the notion of an "arbitrary neighbouring function" \delta x(t) = \epsilon \cdot \eta (t):

Euler–Lagrange equation

Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.[Note 3]

Consider the functional

 J[y] = \int_{x_1}^{x_2} L(x,y(x),y'(x))\, dx \, .

where

x1, x2 are constants,
y (x) is twice continuously differentiable,
y ′(x) = dy / dx  ,
L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, and y ′.

If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2 , then for any number ε close to 0,

J[f] \le J[f + \varepsilon \eta] \, .

The term εη is called the variation of the function f and is denoted by δf .[11]

Substituting  f + εη for y  in the functional J[ y ] , the result is a function of ε,

 \Phi(\varepsilon) = J[f+\varepsilon\eta] \, .

Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus,[Note 4]

 \Phi'(0) \equiv \left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon = 0} = \int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon = 0} dx = 0 \, .

Taking the total derivative of L[x, y, y ′] , where y = f + εη and y ′ = f ′ + εη′ are functions of ε but x is not,

 \frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\frac{dy}{d\varepsilon} + \frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}

and since  dy/dε = η  and  dy ′/dε = η′ ,

 \frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\eta + \frac{\partial L}{\partial y'}\eta' .

Therefore,

 \Phi'(0) = \int_{x_1}^{x_2} \left( \frac{\partial L}{\partial f}\eta + \frac{\partial L}{\partial f'}\eta' \right) dx = \int_{x_1}^{x_2} \left( \frac{\partial L}{\partial f}\eta - \eta \frac{d}{dx}\frac{\partial L}{\partial f'} \right) dx + \left. \frac{\partial L}{\partial f'}\eta \right|_{x_1}^{x_2}

where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1 and x2 by definition. Also, as previously mentioned the left side of the equation is zero so that

 \int_{x_1}^{x_2} \eta \left(\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} \right) \, dx = 0 \, .

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.

 \frac{\partial L}{\partial f} -\frac{d}{dx} \frac{\partial L}{\partial f'}=0

which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) .

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.

─── Excerpted from W!o+'s 《小伶鼬工坊演義》: Neural Networks [Turning Point] IV

 

Take "in a homogeneous medium, light travels in straight lines" as an example, and compare

【Pencil-and-Paper Calculation】

Example

In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x1, y1) and (x2, y2) . The arc length of the curve is given by

A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,

with

y\,'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.

The Euler–Lagrange equation will now be used to find the extremal function f (x) that minimizes the functional A[y ] .

{\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0

with

L={\sqrt {1+[f'(x)]^{2}}}\,.

Since f does not appear explicitly in L , the first term in the Euler–Lagrange equation vanishes for all f (x) and thus,

{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.

Substituting for L and taking the partial derivative,

{\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.

Taking the derivative d/dx and simplifying gives,

{\frac {d^{2}f}{dx^{2}}}\ \cdot \ {\frac {1}{\left[{\sqrt {1+[f'(x)]^{2}}}\ \right]^{3}}}=0\,,

and because 1+[f ′(x)]^2 is non-zero,

{\frac {d^{2}f}{dx^{2}}}=0\,,

which implies that the shortest curve that connects two points (x1, y1) and (x2, y2) is

f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}

and we have thus found the extremal function f(x) that minimizes the functional A[y] so that A[f] is a minimum. Note that y = f(x) is the equation for a straight line, in other words, the shortest distance between two points is a straight line.[Note 5]

───

 

【SymPy Symbolic Computation】

pi@raspberrypi:~ $ python3
Python 3.4.2 (default, Oct 19 2014, 13:31:11) 
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from sympy import *
>>> init_printing()
>>> y = Function('y')
>>> x = Symbol('x')
>>> L = sqrt(1 + (y(x).diff(x))**2)
>>> L
     _________________
    ╱           2     
   ╱  ⎛d       ⎞      
  ╱   ⎜──(y(x))⎟  + 1 
╲╱    ⎝dx      ⎠      
>>> E = euler_equations(L, y(x),x)
>>> E
⎡ ⎛                2  ⎞               ⎤
⎢ ⎜      ⎛d       ⎞   ⎟               ⎥
⎢ ⎜      ⎜──(y(x))⎟   ⎟   2           ⎥
⎢ ⎜      ⎝dx      ⎠   ⎟  d            ⎥
⎢-⎜1 - ───────────────⎟⋅───(y(x))     ⎥
⎢ ⎜              2    ⎟   2           ⎥
⎢ ⎜    ⎛d       ⎞     ⎟ dx            ⎥
⎢ ⎜    ⎜──(y(x))⎟  + 1⎟               ⎥
⎢ ⎝    ⎝dx      ⎠     ⎠               ⎥
⎢───────────────────────────────── = 0⎥
⎢           _________________         ⎥
⎢          ╱           2              ⎥
⎢         ╱  ⎛d       ⎞               ⎥
⎢        ╱   ⎜──(y(x))⎟  + 1          ⎥
⎣      ╲╱    ⎝dx      ⎠               ⎦

>>> dsolve(y(x).diff(x,x))
y(x) = C₁ + C₂⋅x
>>> 
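In SymPy releases newer than the one in this session, dsolve can also absorb the endpoint conditions y(x1) = y1, y(x2) = y2 directly through its ics argument, recovering the constants m and b of the pencil-and-paper result. A sketch, with example endpoints (0, 0) and (1, 2) of my own choosing:

>>> dsolve(y(x).diff(x, x), y(x), ics={y(0): 0, y(1): 2})
y(x) = 2⋅x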

 

and mark the difference between the two.


The World of Light: Hands-On

Lai Zhide of the Ming dynasty spent his whole life immersed in the classics and histories; after twenty-nine years of accumulated effort he produced the Yijing Jizhu (Collected Annotations on the Book of Changes), complete with detailed diagrams and tables, a school of interpretation all his own.

Lai Zhide (1526–1604), courtesy name Yixian, sobriquet Qutang, was a native of Liangshan who passed the provincial examination in the renzi year of the Jiajing era. The Siku abstract says: "In the thirtieth year of Wanli, governor-general Wang Xiangqian and grand coordinator Guo Zizhang recommended that he be appointed Attendant of the Hanlin Academy; Lai Zhide declined the summons on grounds of age and illness, though he bore the conferred title to the end. His deeds are recorded in the 'Biographies of Confucians' of the History of Ming."

Because of his parents' illness Lai Zhide declined to take office; living in seclusion, he taught himself the Changes. For six years he gained nothing at all, but later came to understand it without a teacher, and the Zhouyi Jizhu was at last completed after twenty-nine years.

His own preface says: "I was born more than two thousand years after Confucius; my nature is dull, and I lived in a remote place with no one to instruct me. Because of my parents' illness I stayed to care for them and did not take office, and so took up the Changes to read in my thatched hall at Fushan; for six years I could not glimpse a hair's breadth of it. I then travelled far, to the deep mountains of Qiuxi in Wanxian, where I pondered it over and over, forgetting sleep and food, for years on end. Think and think again, and the spirits will open it to you: after several years I grasped the images of Fuxi, King Wen and the Duke of Zhou; after several more, King Wen's sequence of the hexagrams and Confucius's Zagua; after several more still, the errors of the theory of hexagram transformation. I began in the gengwu year of Longqing and finished in the wuxu year of Wanli, completing the book after twenty-nine years: truly what is called attaining knowledge through difficulty."

The Zhouyi Jizhu discusses the images of the Changes exclusively through cuo and zong hexagrams (line-by-line complements and upside-down reversals) together with the "middle lines" (i.e. nuclear hexagrams), and treats two paired hexagrams as one, thereby maintaining that the upper canon contains only eighteen hexagrams and the lower canon likewise only eighteen. He wrote: "Since Confucius died the Changes has been lost, down to this very day; the Changes of the four sages has lain as in a long night for more than two thousand years. Is that not cause for a long sigh?"

Later generations debated this thesis of Lai's at length. The Siku abstract remarks: "That the upper and lower canons each have eighteen hexagrams is originally the old theory of 稅 and 權, and what he says about the images of the middle lines is simply the interlocking-trigram method in use since the Han." It also censures him for "Yelang-style self-importance".

 

Had he been born today, with software tools of every kind at hand, who knows what the Yijing Jizhu would look like?? This author once used "PyDatalog" logic programming to unfold the rationale of Fuxi's and King Wen's eight trigrams:

If one asks what "Fuxi" based his eight trigrams on, the Shuogua commentary says:

Heaven and Earth fix the positions; mountain and lake interchange their breath; thunder and wind press upon each other; water and fire do not clash; the eight trigrams interweave. To count what has gone is to follow the flow; to know what comes is to reckon against it. Therefore the Changes counts backwards.

First, the origin of the eight trigrams lies in the four pairs related by "all three lines changing" (a plain-Python check follows the result below):

全變(X卦, Y卦) <= 一爻變(X卦, Z卦) & 二爻變(Z卦, T卦) & 三爻變(T卦, Y卦) &
    (X卦 != Z卦) & (X卦 != T卦) & (X卦 != Y卦) &
    (Z卦 != T卦) & (Z卦 != Y卦) & (T卦 != Y卦)

which yields

[(坤, 乾), (巽, 震), (兌, 艮), (離, 坎), (震, 巽), (乾, 坤), (艮, 兌), (坎, 離)]
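The rule above is PyDatalog; the same relation can be checked in plain Python. A minimal sketch of my own, encoding each trigram as a triple of lines (1 = yang, 0 = yin):

# each trigram as (bottom, middle, top); 1 = yang line, 0 = yin line
TRIGRAMS = {
    '乾': (1, 1, 1), '兌': (1, 1, 0), '離': (1, 0, 1), '震': (1, 0, 0),
    '巽': (0, 1, 1), '坎': (0, 1, 0), '艮': (0, 0, 1), '坤': (0, 0, 0),
}

def changed(a, b):
    """Number of lines that differ between trigrams a and b."""
    return sum(x != y for x, y in zip(TRIGRAMS[a], TRIGRAMS[b]))

all_change = [(a, b) for a in TRIGRAMS for b in TRIGRAMS if changed(a, b) == 3]
two_change = [(a, b) for a in TRIGRAMS for b in TRIGRAMS if changed(a, b) == 2]
print(all_change)       # the same 8 ordered pairs as listed above
print(len(two_change))  # 24: the two-line-change pairs used further below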

The meaning of this "three" may perhaps be interpreted through

Laozi, Chapter Forty-Two

The Dao gives birth to one; one gives birth to two; two gives birth to three; three gives birth to the ten thousand things.
The ten thousand things carry yin on their backs and embrace yang, and by the blending of qi attain harmony.
What people detest is to be orphaned, widowed, or unworthy, yet kings and dukes take these as their titles.
Thus things are sometimes diminished and thereby gain, sometimes augmented and thereby lose.
What others teach, I teach also:
the violent and overbearing do not meet a natural death. I shall take this as the father of my teaching.

and its "three" of "three gives birth to the ten thousand things". So understood, "three" means "the utmost of change". Heaven and earth therefore "fix" the "positions" of space. Upon the earth, "mountain" stands high and "lake" lies low, yet they exchange breath; in the sky, wind and cloud spread the rain. Sun goes and moon comes, telling out time; "water" and "fire", opposites, serve each other's use. What belongs to heaven is returned to heaven, what belongs to earth to earth. "Thunder (zhen)" is the first yang stirring, "wind (xun)" the first yin arising: the "first line" divides the two modes ([yang: 震離兌乾], [yin: 巽坎艮坤]); the "middle line" fixes the four images (震離, lesser yin, yin within yang; 兌乾, greater yang, yang within yang; 巽坎, lesser yang, yang within yin; 艮坤, greater yin, yin within yin); the "top line" settles the eight trigrams, yang great and yin small, the yang mode with yin before yang, the yin mode with yang before yin. Following the flow completes the past; reckoning backwards, one knows what is to come: "therefore the Changes counts backwards." Was a diagram perhaps intended?! This is exactly

Xici (the Great Treatise), Part I

Therefore in the Changes there is the Great Ultimate, which gives birth to the two modes; the two modes give birth to the four images; the four images give birth to the eight trigrams; the eight trigrams determine good fortune and misfortune; and good fortune and misfortune give birth to the great enterprise. Hence of models and images none is greater than heaven and earth; of change and continuity none greater than the four seasons; of suspended images shedding light none greater than sun and moon; of the honoured and high none greater than wealth and rank. For making things ready for use and establishing implements for the benefit of all under heaven, none is greater than the sage; for exploring the tangled and searching out the hidden, hooking the deep and reaching the far, so as to settle the good and ill fortune of all under heaven and accomplish its unwearied endeavours, nothing is greater than the milfoil and the tortoise. Therefore heaven gave birth to the spiritual things, and the sages took them as their model; heaven and earth change and transform, and the sages imitated them; heaven suspends its images, revealing good fortune and misfortune, and the sages made images of them; the River brought forth the Chart and the Luo brought forth the Writing, and the sages took them as their model. In the Changes there are the four images, whereby it shows; the appended judgments, whereby it tells; the determinations of good and ill fortune, whereby it decides. The Changes says, "From heaven comes help: auspicious, nothing that does not profit." The Master said: "You means assistance. What heaven helps is the compliant; what men help is the trustworthy. He who treads in trustworthiness, thinks of compliance, and moreover honours the worthy, him heaven helps: auspicious, nothing that does not profit."

the way of pursuing good fortune and shunning calamity!!

Since Fuxi's eight trigrams are already exact in principle and coherent throughout,

[Figure: the Houtian Bagua, King Wen's eight trigrams]

what more could King Wen's arrangement have to say? Yet within Fuxi's universe of Qian and Kun no "human" is to be seen; the "human" is merely one of the ten thousand things! Hence King Wen, by means of the

[Figure: the "Qian, Kun and their six offspring" diagram]

sought to establish the "human". Something of this can be gathered from

Laozi, Chapter Twenty-Five

There is a thing formed in chaos, born before heaven and earth.
Silent, boundless, standing alone and unchanging, circling without peril, it can be the mother of all under heaven.
I do not know its name; I style it "Dao".
Forced to name it, I call it "Great". Great means passing on; passing on means going far; going far means turning back.
Thus the Dao is great, heaven is great, earth is great, and the king too is great. Within the realm there are four greats, and the king occupies one of them.
Humanity takes its law from earth, earth from heaven, heaven from the Dao, and the Dao from what is so of itself.

Because the "human" dwells upon the great earth and takes "food" as heaven, he therefore used

The Lord comes forth in zhen, sets all in order in xun, makes all visible in li, gives service in kun, rejoices in dui, battles in qian, toils in kan, and brings to completion in gen. The ten thousand things come forth in zhen; zhen is the east. They are set in order in xun; xun is the southeast; "order" means the ten thousand things made fresh and neat. Li is brightness, in which the ten thousand things all see one another; it is the trigram of the south. The sage faces south to listen to all under heaven, turning toward the light to govern: this was surely taken from here. Kun is the earth, in which the ten thousand things are all nourished; hence it is said he gives service in kun. Dui is mid-autumn, in which the ten thousand things rejoice; hence it is said he rejoices in dui. He battles in qian; qian is the trigram of the northwest, where yin and yang press upon each other. Kan is water, the trigram of due north, the trigram of toil, to which the ten thousand things return; hence it is said he toils in kan. Gen is the trigram of the northeast, where the ten thousand things reach their end and make their beginning; hence it is said he brings to completion in gen.

to lay out the eight trigrams? It is the very picture of "plough in spring, weed in summer, harvest in autumn, store in winter"! Yet even if this arrangement is assigned to the "Later Heaven", can the "two modes and four images" be seen in it?? And without "yin and yang", how is the "Great Ultimate" of the "human" to be "established"!! Consider King Wen of Zhou:

At the end of the Shang dynasty he was the Earl of the West, hence also called "Bo Chang". He employed capable men such as Tai Dian and San Yisheng and carried out policies to enrich the people, and his state grew daily stronger; but he aroused the jealousy of King Zhou of Shang, who imprisoned him at Youli, during which confinement he wrote the Zhouyi.

What manner of man could write the Zhouyi and yet not know the "rationale of the eight trigrams"? Strange indeed?? So let me venture a word at random, and may the reader hear it as idly as it is offered.

Heaven is one, earth is two. King Wen followed the way of "humanity taking its law from earth", and so the purport of his eight trigrams comes entirely from the

【two-line changes】, twenty-four pairs in all:

[(坤, 巽), (乾, 坎), (兌, 坤), (巽, 坤), (巽, 離), (兌, 巽), (震, 乾), (艮, 乾),

(離, 坤), (乾, 震), (坎, 乾), (巽, 兌), (震, 坎), (離, 巽), (坤, 兌), (離, 兌),

(艮, 坎), (乾, 艮), (坤, 離), (兌, 離), (坎, 震), (艮, 震), (坎, 艮), (震, 艮)]

If we draw this relation as a diagram,

[Figure: King Wen's eight trigrams linked by two-line changes]

do you see the two modes of "yin" and "yang"? Sure enough, "things of a kind gather, and groups divide": men and women keep their distance! The "Former Heaven" uses the "four images" to harmonize "yin and yang"; the "Later Heaven" uses the "five phases" to work "the hard and the soft". Thus zhen and xun are both wood, one yang and one yin: "Thunder-Wind: Heng (Perseverance)" and "Wind-Thunder: Yi (Increase)", praying, perhaps, for "favourable winds and timely rain"? These are the "all three lines change" pairs; is it that "the beginning of life" is truly "hard", and only the utmost stirring of "yang" can undo the "difficulty of Zhun"!! Dui and qian are both metal, likewise divided into yin and yang: "Lake-Heaven: Guai (Resolution)" and "Heaven-Lake: Lü (Treading)" say that "fortune and misfortune" are of one's own making; how could one "blame heaven or others"! With "the single faint yang" they teach the "care" owed to each "arising intention", lest "every move incur blame"!! Could King Wen have had nothing to say?? Only "wood and metal" can be made into "implements"; only "water and fire" are ready for "use". In this "Later Heaven" world of "implements and their use", how are things "brought to their beginning and their completion"!!

── Excerpted from "Braving the New World: pyDatalog [Special Topic], Object-Oriented Design, IV (second half)"

 

Having known the pains of pencil-and-paper derivation, I put this up front in the hope that learners will master the tools and work hands-on; that, after all, is an important purpose of inventing instruments. For example, one can certainly derive the law of "reflection" of light by hand:

[Figure: reflection of light]

Suppose a ray sets out from point P, passes through point x, and is reflected to point Q. The "path-length function" D = \overline{Px} + \overline{xQ} is then

D=\sqrt{x^2 + a^2}+ \sqrt{b^2 + (l - x)^2}

By Fermat's principle the ray chooses a path that makes the path-length function stationary, so \frac{dD}{dx} = 0, which gives

\frac{dD}{dx} = \frac{ x}{\sqrt{x^2+a^2}}+\frac{-l+x}{\sqrt{b^2 + (l - x)^2}}=0

But since
\frac{ x}{\sqrt{x^2+a^2}}=\sin\theta_1
\frac{l-x}{\sqrt{b^2 + (l - x)^2}}=\sin\theta_2

we obtain \sin\theta_1 = \sin\theta_2, hence \theta_1 = \theta_2.

This is the law of reflection.

 

Would it not be simpler still with "SymPy"!!

pi@raspberrypi:~ $ python3
Python 3.4.2 (default, Oct 19 2014, 13:31:11) 
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from sympy import *

# let SymPy choose the best available pretty-printing automatically
>>> init_printing()

# define the symbols we use
>>> x, a, b, l = symbols('x a b l')

# the symbolic expression
>>> D = sqrt(x**2 + a**2) + sqrt(b**2 + (l - x)**2)
>>> D
   _________      _______________
  ╱  2    2      ╱  2          2 
╲╱  a  + x   + ╲╱  b  + (l - x)  

# differentiate
>>> D.diff(x)
     x               -l + x      
──────────── + ──────────────────
   _________      _______________
  ╱  2    2      ╱  2          2 
╲╱  a  + x     ╲╱  b  + (l - x)  
>>> 
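One can go a step further and let SymPy solve the stationarity condition itself. A sketch (declaring the symbols positive, which helps solve keep only the physical root):

from sympy import symbols, sqrt, solve

x, a, b, l = symbols('x a b l', positive=True)
D = sqrt(x**2 + a**2) + sqrt(b**2 + (l - x)**2)
print(solve(D.diff(x), x))   # expected: [a*l/(a + b)]
# at x = a*l/(a + b) we have x/a = (l - x)/b, i.e. tan(theta_1) = tan(theta_2),
# hence theta_1 = theta_2: the law of reflection once more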

 

Still, a long introduction to "SymPy" is out of place here; please first read the

SymPy Tutorial

 

for yourself.


The World of Light: The Expedient Path

Since we stand at the threshold of a world of artificial intelligence and Industry 4.0, why not learn with the computer's aid, deepen understanding with software, and explore the unknown with programs? The Buddha's teaching has practised expedient means since ancient times:

The Sutra Spoken by Vimalakirti, Volume One

Translated by Tripitaka Master Kumarajiva of the Yao Qin

Chapter Two: Expedient Means

At that time, in the great city of Vaisali there was an elder named Vimalakirti, who had already made offerings to immeasurable Buddhas and deeply planted roots of goodness. He had attained the acceptance of the non-arising of dharmas; his eloquence was unhindered; he sported in the superknowledges and had grasped the dharanis; he had gained fearlessness and subdued the torments and resentments of Mara; entering the deep gates of the Dharma, he was skilled in the perfection of wisdom; having penetrated expedient means, his great vows were fulfilled; he knew the tendencies of beings' minds and could tell the sharpness or dullness of their faculties. Long on the Buddha's path, his mind already ripe and pure, he was settled in the Great Vehicle; whatever he did he weighed well; keeping the Buddha's deportment, his heart was vast as the sea. The Buddhas praised him as a disciple; Indra, Brahma and the lords of the world revered him. Wishing to save people, by his skill in expedient means he dwelt in Vaisali: with boundless wealth he embraced the poor; with pure observance of the precepts he embraced the wayward; with patience and gentleness he embraced the wrathful; with great zeal he embraced the indolent; with one-pointed meditative calm he embraced the distracted; with decisive wisdom he embraced the ignorant. Though wearing the white robes of a layman, he kept the pure conduct of a sramana; though dwelling in a household, he was unattached to the three realms; though shown with wife and children, he ever cultivated chastity; though surrounded by retinue, he ever delighted in withdrawal; though decked with jewels, it was with the marks of merit that he adorned his body; though he ate and drank, the joy of meditation was his taste. Arriving at gaming houses and theatres, he used the occasion to save people; receiving followers of other paths, he did not injure their right faith; versed in the worldly classics, he ever delighted in the Buddha-dharma; respected by all, he was first among those worthy of offerings. Upholding the true Dharma, he embraced old and young; in every trade and dealing, though he reaped worldly profit he took no joy in it. Walking the four crossroads, he benefited beings; entering the administration of justice, he protected all; entering the halls of debate, he led others to the Great Vehicle; entering the schools, he opened the minds of children; entering the brothels, he showed the fault of desire; entering the wine-shops, he could establish their resolve. Among elders, honoured as the best of elders, he preached the excellent Dharma; among householders, honoured as the best, he cut off their craving and attachment; among kshatriyas, honoured as the best, he taught forbearance; among brahmans, honoured as the best, he removed their self-conceit; among great ministers, honoured as the best, he taught the right Dharma; among princes, honoured as the best, he showed loyalty and filial devotion; among palace officers, honoured as the best, he transformed the women of the palace; among the common people, honoured as the best, he made their merit flourish; among Brahma gods, honoured as the best, he instructed with superior wisdom; among Indras, honoured as the best, he manifested impermanence; among the world-protectors, honoured as the best, he protected all beings. Thus did the elder Vimalakirti, with immeasurable expedient means such as these, bring benefit to beings. By those same expedient means he manifested illness in his body; and because of his illness, kings, great ministers, elders, householders, brahmans and the rest, with princes and their officers, countless thousands, all went to ask after him. To those who came, Vimalakirti took his bodily illness as the occasion to preach the Dharma at large: "Good sirs, this body is impermanent, without strength, without power, without solidity; a thing quick to decay, it cannot be relied upon. It is suffering, it is vexation, a gathering place of many ills. Good sirs, the clear-sighted do not lean on such a body. This body is like a mass of foam, which cannot be grasped; like a bubble, which does not last long; like a mirage-flame, born of thirst and craving; like a plantain tree, with nothing firm within; like a conjuration, arising from inverted thinking; like a dream, a seeing that is false; like a shadow, appearing through karmic conditions; like an echo, dependent on causes and conditions; like a floating cloud, changing and vanishing in an instant; like lightning, not abiding from thought to thought. This body is without master, like the earth; without self, like fire; without lifespan, like the wind; without person, like water; without reality, making its home in the four elements; empty, apart from self and what pertains to self; without knowing, like grass, trees, tiles and pebbles; without agency, turned by the force of the wind; impure, crammed with filth; false and hollow, for though washed, clothed and fed it must crumble and perish; a calamity, vexed by the hundred and one ailments; like a well on a crumbling hill, pressed hard by old age; without fixity, certain in the end to die; like a poisonous snake, like a robber enemy, like an empty village, a mere composite of the aggregates, elements and sense-fields. Good sirs, a body such as this is to be wearied of; you should long instead for the body of the Buddha. Why? Because the body of a Buddha is the Dharma-body: born of immeasurable merit and wisdom; born of morality, concentration, wisdom, liberation, and the knowledge and vision of liberation; born of kindness, compassion, sympathetic joy and equanimity; born of the perfections of giving, morality, patience, gentleness, diligence, meditation, liberation and samadhi, of great learning and of wisdom; born of expedient means; born of the six superknowledges; born of the three illuminations; born of the thirty-seven aids to awakening; born of calming and insight; born of the ten powers, the four fearlessnesses and the eighteen qualities shared with none; born of cutting off every unwholesome dharma and gathering every wholesome dharma; born of truth; born of non-negligence. Of such immeasurable pure dharmas is the Tathagata's body born. Good sirs, if you wish to obtain the Buddha's body and cut off the sicknesses of all beings, you should awaken the mind of anuttara-samyak-sambodhi." Thus did the elder Vimalakirti preach the Dharma, as suited each, to those who came to ask after his illness, causing countless thousands of them to awaken the mind of anuttara-samyak-sambodhi.

 

Thus have I heard. So, gear packed and strength gathered, we hope to witness the Kun's transformation into the Peng. True, every beginning is hard!! But free software has long shone with immeasurable light; is this not the expedient path?? Let us start by installing the software:

# Python libraries
sudo apt-get install python-numpy python3-numpy
sudo apt-get install python-scipy python3-scipy
sudo apt-get install python-skimage python3-skimage
sudo pip install sympy
sudo pip3 install sympy
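After installation, a quick import check (a minimal sketch of my own) confirms the libraries are in place; run it under python3:

# verify the freshly installed libraries import, and report their versions
import numpy, scipy, skimage, sympy
for mod in (numpy, scipy, skimage, sympy):
    print(mod.__name__, mod.__version__)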

 

Relying on SymPy's

About

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.

───

 

powerful symbolic engine, let us study "Fermat's principle" once more.

[Figures: slope of a function in 2D; animation of the tangent line]

Before we speak of Fermat's principle, let us first discuss how a function f(x) attains an "extremum" (a maximum, a minimum, or an inflection point). If we picture the function as rolling hills, then by experience a "summit" is a place higher than all its "surroundings" and a "valley" one lower than all its "surroundings", and near either the ground is comparatively "flat". By watching the "slope" at each point of a function we can therefore follow its rises and falls: the "points" of "zero slope" are the candidates for the "maximum" or "minimum" of the function in the neighbourhood of that point. They may, however, also be places where the concavity changes, called "inflection points" (points of inflexion), which are not local extrema at all. In mathematics, the way to find the slope of a function is to take its derivative \frac{df}{dx}.
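A tiny SymPy sketch of this point, with example functions of my own: zero slope alone cannot tell a summit from a valley or from an inflection point; the sign of the second derivative decides.

# zero slope alone does not decide max / min / inflection: check f''
from sympy import symbols, diff, solve

x = symbols('x')
for f in (x**2, -x**2, x**3):                       # valley, summit, inflection
    crit = solve(diff(f, x), x)                     # points of zero slope: [0]
    print(f, crit, diff(f, x, 2).subs(x, crit[0]))  # f'' > 0, < 0, = 0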

As some have put it, "Fermat's principle" should perhaps be called not the "principle of least time" but the "principle of stationary time", because light does not in fact always choose the path of shortest time. With the "semicircular mirror" shown below, the path the light actually travels, Q→O→P of length \overline{QO} + \overline{OP}, is the maximum among the reflected paths; and in the figure of the "\frac{1}{4}-circle joined to a plane mirror", the travelled path Q→O→P is instead an inflection point of the function over all "possible paths". "Stationary" here expresses the fact that the "path-length function" over all possible paths Q→X→P is "differentiable", with vanishing derivative at the true path. Why does "Fermat's principle" matter? It is the fundamental principle of "geometrical optics": from it one can "derive" the "imaging principles" of "optical systems" assembled from "mirrors" and "lenses", and "understand" the "design principles" of optical instruments of every kind: telescopes, microscopes, magnifiers, spectacles...

[Figure: semicircular mirror]

[Figure: \frac{1}{4}-circle joined to a plane mirror]

In other words, "Fermat's principle" lets you "rebuild" the "eye" of the Raspberry Pi "camera module" so that it can "see farther" or "reveal the minute", and every kind of "photography" will come that much more readily to hand!!

Deriving the law of reflection

[Figure: reflection of light]

Suppose a ray sets out from point P, passes through point x, and is reflected to point Q. The "path-length function" D = \overline{Px} + \overline{xQ} is then

D=\sqrt{x^2 + a^2}+ \sqrt{b^2 + (l - x)^2}

By Fermat's principle the ray chooses a path that makes the path-length function stationary, so \frac{dD}{dx} = 0, which gives

\frac{dD}{dx} = \frac{ x}{\sqrt{x^2+a^2}}+\frac{-l+x}{\sqrt{b^2 + (l - x)^2}}=0

But since
\frac{ x}{\sqrt{x^2+a^2}}=\sin\theta_1
\frac{l-x}{\sqrt{b^2 + (l - x)^2}}=\sin\theta_2

we obtain \sin\theta_1 = \sin\theta_2, hence \theta_1 = \theta_2.

This is the law of reflection.

Deriving the law of refraction

[Figure: Snell's law diagram]

Suppose a ray travels from point Q in medium 1 on the left, whose "refractive index" is n_1, through point O, to point P in medium 2, whose "refractive index" is n_2. From Fermat's hypothesis about the "speed of light", the speeds in the two media are

v_1 = \frac {c}{n_1}
v_2 = \frac {c}{n_2}

where c is the "speed of light in vacuum". The time this ray needs to travel along the path Q→O→P, the "travel-time function" T, is

T=\frac{\sqrt{x^2 + a^2}}{v_1} + \frac{\sqrt{b^2 + (l - x)^2}}{v_2}

By Fermat's principle the ray chooses the path of stationary time, so \frac {dT}{dx} = 0, which gives

\frac{dT}{dx}=\frac{x}{v_1\sqrt{x^2 + a^2}} + \frac{ - (l - x)}{v_2\sqrt{(l-x)^2 + b^2}}=0

But since
\frac{x}{\sqrt{x^2 + a^2}} =\sin\theta_1

\frac{l - x}{\sqrt{(l-x)^2 + b^2}}=\sin\theta_2

we obtain \frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}, i.e. n_1\sin\theta_1=n_2\sin\theta_2.

This is Snell's law of refraction.

─── Excerpted from "[Sonic π] The Propagation of Sound Waves: Principles, Part I"
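The refraction derivation just excerpted is easy to hand to SymPy. A minimal sketch (symbols declared positive for simplicity), confirming that dT/dx equals \sin\theta_1/v_1 - \sin\theta_2/v_2, so that stationarity is exactly Snell's law:

from sympy import symbols, sqrt, simplify

x, a, b, l, v1, v2 = symbols('x a b l v1 v2', positive=True)
T = sqrt(x**2 + a**2)/v1 + sqrt(b**2 + (l - x)**2)/v2

sin1 = x / sqrt(x**2 + a**2)               # sin(theta_1)
sin2 = (l - x) / sqrt(b**2 + (l - x)**2)   # sin(theta_2)

# dT/dx - (sin1/v1 - sin2/v2) simplifies to 0, so dT/dx = 0 is
# sin1/v1 = sin2/v2, i.e. n1*sin(theta_1) = n2*sin(theta_2)
print(simplify(T.diff(x) - (sin1/v1 - sin2/v2)))   # -> 0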

 

Let us become better acquainted with

Geometrical Optics

Geometrical optics is the study of optics by the methods of geometry. It rests on several basic principles[1]:

  • in a homogeneous medium, light propagates in straight lines
  • the law of reflection
  • the law of refraction
  • the principle of reversibility of the optical path

Since light itself is a high-frequency electromagnetic field emitted from within atoms and molecules, all of the above principles can be derived from the electromagnetic field theory of electrodynamics.

[Figure: virtual image formed by a lens]

Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object.

───

 

down to its very foundations, and gain a full command of using the Raspberry Pi to carry out

Ray transfer matrix analysis

Ray transfer matrix analysis (also known as ABCD matrix analysis) is a type of ray tracing technique used in the design of some optical systems, particularly lasers. It involves the construction of a ray transfer matrix which describes the optical system; tracing of a light path through the system can then be performed by multiplying this matrix with a vector representing the light ray. The same analysis is also used in accelerator physics to track particles through the magnet installations of a particle accelerator, see Beam optics.

The technique that is described below uses the paraxial approximation of ray optics, which means that all rays are assumed to be at a small angle (θ in radians) and a small distance (x) relative to the optical axis of the system.[1]
───

Sympy Physics Module Gaussian Optics

Gaussian optics.

The module implements:

  • Ray transfer matrices for geometrical and gaussian optics.

    See RayTransferMatrix, GeometricRay and BeamParameter

  • Conjugation relations for geometrical and gaussian optics.

    See geometric_conj*, gauss_conj and conjugate_gauss_beams

The conventions for the distances are as follows:

focal distance: positive for convergent lenses
object distance: positive for real objects
image distance: positive for real images

───

 

calculations.
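For instance, here is a minimal ABCD sketch of my own (the import path sympy.physics.optics is that of recent SymPy releases; older ones exposed the same classes as sympy.physics.gaussopt): a ray entering a thin lens parallel to the axis crosses the axis a focal length f beyond it.

# thin lens of focal length f followed by free flight over distance d;
# the element applied last multiplies on the left
from sympy import symbols, simplify
from sympy.physics.optics import FreeSpace, ThinLens, GeometricRay

f, d, h = symbols('f d h', positive=True)
system = FreeSpace(d) * ThinLens(f)

ray = system * GeometricRay(h, 0)   # height h, initially parallel (angle 0)
print(simplify(ray.height))         # h*(f - d)/f : zero exactly at d = f
print(ray.angle)                    # -h/f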


The World of Light: Introduction

"Day After Day", by Li Shangyin (Tang)

Day after day the spring light vies with the light of the sun;
on the slanting road of the hill town the apricot blossoms are fragrant.
When will this heart be wholly free of care,
and reach the hundred-foot length of the drifting gossamer?

Day after day in the company of neural networks, nervous strain is sooner or later unavoidable. Why not glance back and change direction, to admire the scene-making, colour-giving daylight? This light is of no ordinary origin and station: it leaves darkness nowhere to hide. It is told that on the first day of creation, God said, Let there be light, and there was light.

The Holy Bible (Union Version) / Genesis

Chapter One

1 In the beginning God created the heaven and the earth.
2 And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
3 And God said, Let there be light: and there was light.
4 And God saw the light, that it was good: and God divided the light from the darkness.
5 And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
6 And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
7 And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
8 And God called the firmament Heaven. And the evening and the morning were the second day.
9 And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
10 And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.
11 And God said, Let the earth bring forth grass, the herb yielding seed, and the fruit tree yielding fruit after his kind, whose seed is in itself, upon the earth: and it was so.
12 And the earth brought forth grass, and herb yielding seed after his kind, and the tree yielding fruit, whose seed was in itself, after his kind: and God saw that it was good.
13 And the evening and the morning were the third day.
14 And God said, Let there be lights in the firmament of the heaven to divide the day from the night; and let them be for signs, and for seasons, and for days, and years:
15 And let them be for lights in the firmament of the heaven to give light upon the earth: and it was so.
16 And God made two great lights; the greater light to rule the day, and the lesser light to rule the night: he made the stars also.
17 And God set them in the firmament of the heaven to give light upon the earth,
18 And to rule over the day and over the night, and to divide the light from the darkness: and God saw that it was good.
19 And the evening and the morning were the fourth day.
20 And God said, Let the waters bring forth abundantly the moving creature that hath life, and fowl that may fly above the earth in the open firmament of heaven.
21 And God created great whales, and every living creature that moveth, which the waters brought forth abundantly, after their kind, and every winged fowl after his kind: and God saw that it was good.
22 And God blessed them, saying, Be fruitful, and multiply, and fill the waters in the seas, and let fowl multiply in the earth.
23 And the evening and the morning were the fifth day.
24 And God said, Let the earth bring forth the living creature after his kind, cattle, and creeping thing, and beast of the earth after his kind: and it was so.
25 And God made the beast of the earth after his kind, and cattle after their kind, and every thing that creepeth upon the earth after his kind: and God saw that it was good.
26 And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.
27 So God created man in his own image, in the image of God created he him; male and female created he them.
28 And God blessed them, and God said unto them, Be fruitful, and multiply, and replenish the earth, and subdue it: and have dominion over the fish of the sea, and over the fowl of the air, and over every living thing that moveth upon the earth.
29 And God said, Behold, I have given you every herb bearing seed, which is upon the face of all the earth, and every tree, in the which is the fruit of a tree yielding seed; to you it shall be for meat.
30 And to every beast of the earth, and to every fowl of the air, and to every thing that creepeth upon the earth, wherein there is life, I have given every green herb for meat: and it was so.
31 And God saw every thing that he had made, and, behold, it was very good. And the evening and the morning were the sixth day.

This light is, by its original nature, pure:

Light

Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers to visible light, which is visible to the human eye and is responsible for the sense of sight.[1] Visible light is usually defined as having wavelengths in the range of 400–700 nanometres (nm), or 4.00 × 10^−7 to 7.00 × 10^−7 m, between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths).[2][3] This wavelength means a frequency range of roughly 430–750 terahertz (THz).

The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey.

The primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarization, while its speed in a vacuum, 299,792,458 metres per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in a vacuum.[citation needed]

In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not.[4][5] In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of light, visible light is emitted and absorbed in tiny “packets” called photons and exhibits properties of both waves and particles. This property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics.

[Figure: dispersion of white light by a prism]

A triangular prism dispersing a beam of white light. The longer wavelengths (red) and the shorter wavelengths (blue) get separated.

───

Who would have guessed that its dazzling seven colours would breed argument upon argument:

What, at bottom, is the "nature" of light? The single "problem of refraction" once set off a great war in the history of science! Tracing to the source: in the second century Claudius Ptolemy, in Book V of his Optics, presented his refraction experiments and his law. Perhaps because the notion of the "sine" was not yet clear in that era, his conclusion was incorrect. Later, in 984, the Persian scholar Ibn Sahl, in On Burning Mirrors and Lenses, gave the first correct statement of the "law of refraction" and applied it to find the shapes of "lenses" that "focus light" without "geometric aberration". Unfortunately his results went unnoticed by other scholars, and for many years afterwards people again took up the "study of refraction" from Ptolemy's "erroneous theory". In the early eleventh century the Arab scholar Alhazen redid Ptolemy's experiments; although his Book of Optics summarized some rules, he too failed to arrive at the "sine law" of refraction. Another five hundred years passed; in 1602 the English astronomer Thomas Harriot rediscovered the "law of refraction", but he never published the result, mentioning it only in correspondence with the German astronomer Johannes Kepler. Then in 1621 the Dutch astronomer Willebrord Snellius derived a mathematically "equivalent form", yet in his lifetime his achievement remained unknown. This author does not know why the "research conclusions" of these great "astronomers" went unrecognized in their day. But imagine a worker in "bamboo and rattan craft" who does not know "how to bend" the "bamboo strips" and "rattan canes": making "any furniture" at all would likely be difficult! Likewise, if one does not know how to "bend light rays", how could an astronomer make "good lenses" with which to "observe the heavens"??

In 1637 the great French philosopher René Descartes derived this "law of refraction" in his treatise Dioptrics, and used his theory to analyse a series of "optical problems". In the derivation he made two "assumptions": first, that the "propagation speed" of "light" is "proportional" to the "density of the surrounding medium"; second, that the "component of the light's velocity" along the "interface" is "conserved". In 1662 the French lawyer and amateur mathematician Pierre de Fermat published the "principle of least time": the path taken by a ray of light is the one requiring the least time. From it he derived the "law of refraction"; but the principle assumed, contrary to Descartes, that "the speed of light is inversely proportional to the density of the medium", and on this ground Fermat vigorously rebutted Descartes's derivation, holding Descartes's assumption to be mistaken. According to the historian Isaac Vossius in his 1662 work De natura lucis et proprietate, Descartes had read Snell's paper beforehand and then concocted his own derivation. Some historians find this accusation too extravagant to credit, and many have doubted the episode ever took place; yet Fermat and Huygens each repeatedly denounced Descartes's lapse of conduct. Leaving the historical rights and wrongs aside, this great war between the "particle theory" and the "wave theory" of light was then only gathering force. In 1664 the English polymath Robert Hooke began to champion the "wave theory" of light. But Newton, granted the Lucasian chair of mathematics at Trinity College, Cambridge in 1669, carried forward Descartes's "corpuscular theory of light". From 1670 to 1672 Newton was responsible for lecturing on optics at the university. He studied the refraction of light and showed that a "triangular prism" disperses white light into a coloured spectrum, and that with a lens and a second prism the coloured spectrum can be recombined into white light. Hooke himself publicly criticized Newton's corpuscular theory, but Newton's achievements across so many fields of physics led him to be acknowledged the winner of this "war over the nature of light".

In 1678 the Dutch physicist, astronomer and mathematician Christiaan Huygens, following Hooke's suggestion, applied the "wavelet principle" of his own devising (today's Huygens' principle) in his Treatise on Light (Traité de la Lumière), and from the wave nature of light successfully derived and explained Snell's law. Later, in 1703, Huygens discussed the law again in his Dioptrica, formally crediting its discovery to Snell. In 1802 the English scientist and physician Thomas Young, celebrated as "the last man who knew everything", found by experiment that when a light wave passes from a less dense into a denser medium its wavelength shortens, and concluded that its propagation speed decreases. Young owes his great fame to the "double-slit experiment" he proposed, the decisive experiment that settled this "classical war over the nature of light". Afterwards the "question of light's nature" flared up again in the "Einstein–Bohr" debates of quantum mechanics, and in Bohr's principle of "wave–particle complementarity".

─── Excerpted from "[Sonic π] The Propagation of Sound Waves: Principles, Part I"

 

Why not, then, make use of this summer break under a brilliant sun to set out for the world of light, and with programs, camera, lenses and a Raspberry Pi see and verify its wonders with your own eyes!! To explore how the simple rule of light's travel can compose such graceful and moving forms: truly, great nature begets beauty, and rainbows take shape as chance decrees??


W!o+'s 《小伶鼬工坊演義》: Neural Networks [Deep Learning] VIII

What a grand debate for an appendix! Mr. Michael Nielsen cannot bring himself to stop!! Does he mean learners to hold in their hands the future's own "exploitation of the works of nature"??

The Tiangong Kaiwu (The Exploitation of the Works of Nature) was first published in 1637 (the tenth year of the Chongzhen era of the Ming). It is a comprehensive work of science and technology from ancient China, sometimes called an encyclopedic work, written by the Ming scientist Song Yingxing.

[Figure: the papermaking process, from the Tiangong Kaiwu]

Tiangong Kaiwu / Author's Preface

Heaven covers and earth bears up; things are numbered in the myriads, and affairs, following them, are brought round to completion with nothing left out. Could this be the work of human effort alone?

Things being a myriad, if each must await oral instruction and eyewitness before it is known, how many can one ever come to know? And among the myriad affairs and things, those useless to mankind and those useful each make up half the account.

The world has its clever and learned men, whom the crowd esteems; yet never having looked upon the blossoms of jujube and pear, they speculate about the "duckweed of Chu"; scarcely acquainted with the moulds for kettles and cauldrons, they discourse grandly on the "tripods of Ju". Painters love to draw ghosts and goblins and shun dogs and horses; even a Qiao of Zheng or a Hua of Jin, would they amount to much by such means?

Happily we live in an age of sage enlightenment and utmost prosperity, when carts and horses from south of Dian run clear through to Liaoyang, and officials and merchants from the mountain frontiers travel freely to north of Ji. Within these ten thousand square li, what affair, what thing cannot be seen and heard of? Had one been born a scholar at the start of the Eastern Jin or the close of the Southern Song, the local products of Yan, Qin, Jin and Yu would have seemed the wares of foreign tribes, and a fur cap obtained through border trade no different from an arrow of the Sushen. And princes and imperial sons, raised deep within the palace, with the fragrant jade-white grain of the imperial kitchen before them, may yet wish to look upon plough and ploughshare; with the embroidered robes of the palace seamstresses freshly cut, they may try to picture loom and silk. At such a moment, to open a book of illustrations and look is like gaining a great treasure!

In recent years I have written one book, entitled Tiangong Kaiwu. Alas for poverty! I wished to purchase rare things for examination and verification, but lacked the funds of Luoxia; I wished to gather like-minded friends to weigh the false against the true, but had no hall of Chensi. Relying on my own meagre and rustic knowledge, stored in this square inch of heart, I wrote it down; how could it be adequate? My friend Mr. Tu Boju, whose sincerity moves heaven and whose spirit investigates things, has always diligently and earnestly taken up whatever fine word of past or present had an inch of merit worth adopting. Last year my Huayin Guizheng was given to the printer through him; now, at his further bidding, I have taken up this volume and carried on with it: surely this too is the summons of an old affinity! The book is divided into earlier and later parts, on the principle of "honouring the five grains and holding gold and jade cheap". The two chapters on "Observation of the Heavens" and "Musical Pitch" were too refined in their art; judging them beyond my business, I removed them as the book went to press.

I beg the men of letters bent on great careers to toss this book aside on their desks: it has nothing whatever to do with advancement through examinations and fame.

Written in the first month of summer, dingchou year of Chongzhen (1637), by Song Yingxing of Fengxin, at his hall "Jia Shi Zhi Wen Tang".

───

 

In this book, we’ve focused on the nuts and bolts of neural networks: how they work, and how they can be used to solve pattern recognition problems. This is material with many immediate practical applications. But, of course, one reason for interest in neural nets is the hope that one day they will go far beyond such basic pattern recognition problems. Perhaps they, or some other approach based on digital computers, will eventually be used to build thinking machines, machines that match or surpass human intelligence? This notion far exceeds the material discussed in the book – or what anyone in the world knows how to do. But it’s fun to speculate.

There has been much debate about whether it’s even possible for computers to match human intelligence. I’m not going to engage with that question. Despite ongoing dispute, I believe it’s not in serious doubt that an intelligent computer is possible – although it may be extremely complicated, and perhaps far beyond current technology – and current naysayers will one day seem much like the vitalists.

Rather, the question I explore here is whether there is a simple set of principles which can be used to explain intelligence? In particular, and more concretely, is there a simple algorithm for intelligence?

The idea that there is a truly simple algorithm for intelligence is a bold idea. It perhaps sounds too optimistic to be true. Many people have a strong intuitive sense that intelligence has considerable irreducible complexity. They’re so impressed by the amazing variety and flexibility of human thought that they conclude that a simple algorithm for intelligence must be impossible. Despite this intuition, I don’t think it’s wise to rush to judgement. The history of science is filled with instances where a phenomenon initially appeared extremely complex, but was later explained by some simple but powerful set of ideas.

Consider, for example, the early days of astronomy. Humans have known since ancient times that there is a menagerie of objects in the sky: the sun, the moon, the planets, the comets, and the stars. These objects behave in very different ways – stars move in a stately, regular way across the sky, for example, while comets appear as if out of nowhere, streak across the sky, and then disappear. In the 16th century only a foolish optimist could have imagined that all these objects’ motions could be explained by a simple set of principles. But in the 17th century Newton formulated his theory of universal gravitation, which not only explained all these motions, but also explained terrestrial phenomena such as the tides and the behaviour of Earth-bound projectiles. The 16th century’s foolish optimist seems in retrospect like a pessimist, asking for too little.

Of course, science contains many more such examples. Consider the myriad chemical substances making up our world, so beautifully explained by Mendeleev’s periodic table, which is, in turn, explained by a few simple rules which may be obtained from quantum mechanics. Or the puzzle of how there is so much complexity and diversity in the biological world, whose origin turns out to lie in the principle of evolution by natural selection. These and many other examples suggest that it would not be wise to rule out a simple explanation of intelligence merely on the grounds that what our brains – currently the best examples of intelligence – are doing appears to be very complicated*

*Throughout this appendix I assume that for a computer to be considered intelligent its capabilities must match or exceed human thinking ability. And so I’ll regard the question “Is there a simple algorithm for intelligence?” as equivalent to “Is there a simple algorithm which can `think’ along essentially the same lines as the human brain?” It’s worth noting, however, that there may well be forms of intelligence that don’t subsume human thought, but nonetheless go beyond it in interesting ways.

Contrariwise, and despite these optimistic examples, it is also logically possible that intelligence can only be explained by a large number of fundamentally distinct mechanisms. In the case of our brains, those many mechanisms may perhaps have evolved in response to many different selection pressures in our species’ evolutionary history. If this point of view is correct, then intelligence involves considerable irreducible complexity, and no simple algorithm for intelligence is possible.

Which of these two points of view is correct?

To get insight into this question, let’s ask a closely related question, which is whether there’s a simple explanation of how human brains work. In particular, let’s look at some ways of quantifying the complexity of the brain. Our first approach is the view of the brain from connectomics. This is all about the raw wiring: how many neurons there are in the brain, how many glial cells, and how many connections there are between the neurons. You’ve probably heard the numbers before – the brain contains on the order of 100 billion neurons, 100 billion glial cells, and 100 trillion connections between neurons. Those numbers are staggering. They’re also intimidating. If we need to understand the details of all those connections (not to mention the neurons and glial cells) in order to understand how the brain works, then we’re certainly not going to end up with a simple algorithm for intelligence.

There’s a second, more optimistic point of view, the view of the brain from molecular biology. The idea is to ask how much genetic information is needed to describe the brain’s architecture. To get a handle on this question, we’ll start by considering the genetic differences between humans and chimpanzees. You’ve probably heard the sound bite that “human beings are 98 percent chimpanzee”. This saying is sometimes varied – popular variations also give the number as 95 or 99 percent. The variations occur because the numbers were originally estimated by comparing samples of the human and chimp genomes, not the entire genomes. However, in 2007 the entire chimpanzee genome was sequenced (see also here), and we now know that human and chimp DNA differ at roughly 125 million DNA base pairs. That’s out of a total of roughly 3 billion DNA base pairs in each genome. So it’s not right to say human beings are 98 percent chimpanzee – we’re more like 96 percent chimpanzee.

How much information is in that 125 million base pairs? Each base pair can be labelled by one of four possibilities – the “letters” of the genetic code, the bases adenine, cytosine, guanine, and thymine. So each base pair can be described using two bits of information – just enough information to specify one of the four labels. So 125 million base pairs is equivalent to 250 million bits of information. That’s the genetic difference between humans and chimps!

Of course, that 250 million bits accounts for all the genetic differences between humans and chimps. We’re only interested in the difference associated to the brain. Unfortunately, no-one knows what fraction of the total genetic difference is needed to explain the difference between the brains. But let’s assume for the sake of argument that about half that 250 million bits accounts for the brain differences. That’s a total of 125 million bits.

125 million bits is an impressively large number. Let’s get a sense for how large it is by translating it into more human terms. In particular, how much would be an equivalent amount of English text? It turns out that the information content of English text is about 1 bit per letter. That sounds low – after all, the alphabet has 26 letters – but there is a tremendous amount of redundancy in English text. Of course, you might argue that our genomes are redundant, too, so two bits per base pair is an overestimate. But we’ll ignore that, since at worst it means that we’re overestimating our brain’s genetic complexity. With these assumptions, we see that the genetic difference between our brains and chimp brains is equivalent to about 125 million letters, or about 25 million English words. That’s about 30 times as much as the King James Bible.

That’s a lot of information. But it’s not incomprehensibly large. It’s on a human scale. Maybe no single human could ever understand all that’s written in that code, but a group of people could perhaps understand it collectively, through appropriate specialization. And although it’s a lot of information, it’s minuscule when compared to the information required to describe the 100 billion neurons, 100 billion glial cells, and 100 trillion connections in our brains. Even if we use a simple, coarse description – say, 10 floating point numbers to characterize each connection – that would require about 70 quadrillion bits. That means the genetic description is a factor of about half a billion less complex than the full connectome for the human brain.
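The estimates of the last few paragraphs are easy to re-run. A small sketch (the King James Bible word count of roughly 783,000 and the 64 bits per floating point number are assumptions of mine):

# rough order-of-magnitude checks of the numbers quoted above
base_pairs = 125_000_000
bits = 2 * base_pairs                # 2 bits per base pair -> 250 million bits
brain_bits = bits // 2               # assume half accounts for the brain
words = brain_bits // 5              # ~1 bit/letter and ~5 letters per word
kjv_words = 783_000                  # assumed King James Bible word count
print(words / kjv_words)             # ~32: "about 30 times" the KJV

connections = 100 * 10**12           # 100 trillion connections
conn_bits = connections * 10 * 64    # 10 floats x 64 bits per connection
print(conn_bits)                     # 6.4e16: "about 70 quadrillion bits"
print(conn_bits / brain_bits)        # ~5.1e8: "a factor of about half a billion"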

What we learn from this is that our genome cannot possibly contain a detailed description of all our neural connections. Rather, it must specify just the broad architecture and basic principles underlying the brain. But that architecture and those principles seem to be enough to guarantee that we humans will grow up to be intelligent. Of course, there are caveats – growing children need a healthy, stimulating environment and good nutrition to achieve their intellectual potential. But provided we grow up in a reasonable environment, a healthy human will have remarkable intelligence. In some sense, the information in our genes contains the essence of how we think. And furthermore, the principles contained in that genetic information seem likely to be within our ability to collectively grasp.

All the numbers above are very rough estimates. It’s possible that 125 million bits is a tremendous overestimate, that there is some much more compact set of core principles underlying human thought. Maybe most of that 125 million bits is just fine-tuning of relatively minor details. Or maybe we were overly conservative in how we computed the numbers. Obviously, that’d be great if it were true! For our current purposes, the key point is this: the architecture of the brain is complicated, but it’s not nearly as complicated as you might think based on the number of connections in the brain. The view of the brain from molecular biology suggests we humans ought to one day be able to understand the basic principles behind the brain’s architecture.

In the last few paragraphs I’ve ignored the fact that that 125 million bits merely quantifies the genetic difference between human and chimp brains. Not all our brain function is due to those 125 million bits. Chimps are remarkable thinkers in their own right. Maybe the key to intelligence lies mostly in the mental abilities (and genetic information) that chimps and humans have in common. If this is correct, then human brains might be just a minor upgrade to chimpanzee brains, at least in terms of the complexity of the underlying principles. Despite the conventional human chauvinism about our unique capabilities, this isn’t inconceivable: the chimpanzee and human genetic lines diverged just 5 million years ago, a blink in evolutionary timescales. However, in the absence of a more compelling argument, I’m sympathetic to the conventional human chauvinism: my guess is that the most interesting principles underlying human thought lie in that 125 million bits, not in the part of the genome we share with chimpanzees.

Adopting the view of the brain from molecular biology gave us a reduction of roughly nine orders of magnitude in the complexity of our description. While encouraging, it doesn’t tell us whether or not a truly simple algorithm for intelligence is possible. Can we get any further reductions in complexity? And, more to the point, can we settle the question of whether a simple algorithm for intelligence is possible?

Unfortunately, there isn’t yet any evidence strong enough to decisively settle this question. Let me describe some of the available evidence, with the caveat that this is a very brief and incomplete overview, meant to convey the flavour of some recent work, not to comprehensively survey what is known.

Among the evidence suggesting that there may be a simple algorithm for intelligence is an experiment reported in April 2000 in the journal Nature. A team of scientists led by Mriganka Sur “rewired” the brains of newborn ferrets. Usually, the signal from a ferret’s eyes is transmitted to a part of the brain known as the visual cortex. But for these ferrets the scientists took the signal from the eyes and rerouted it so it instead went to the auditory cortex, i.e, the brain region that’s usually used for hearing.

To understand what happened when they did this, we need to know a bit about the visual cortex. The visual cortex contains many orientation columns. These are little slabs of neurons, each of which responds to visual stimuli from some particular direction. You can think of the orientation columns as tiny directional sensors: when someone shines a bright light from some particular direction, a corresponding orientation column is activated. If the light is moved, a different orientation column is activated. One of the most important high-level structures in the visual cortex is the orientation map, which charts how the orientation columns are laid out.

What the scientists found is that when the visual signal from the ferrets’ eyes was rerouted to the auditory cortex, the auditory cortex changed. Orientation columns and an orientation map began to emerge in the auditory cortex. It was more disorderly than the orientation map usually found in the visual cortex, but unmistakably similar. Furthermore, the scientists did some simple tests of how the ferrets responded to visual stimuli, training them to respond differently when lights flashed from different directions. These tests suggested that the ferrets could still learn to “see”, at least in a rudimentary fashion, using the auditory cortex.

This is an astonishing result. It suggests that there are common principles underlying how different parts of the brain learn to respond to sensory data. That commonality provides at least some support for the idea that there is a set of simple principles underlying intelligence. However, we shouldn’t kid ourselves about how good the ferrets’ vision was in these experiments. The behavioural tests tested only very gross aspects of vision. And, of course, we can’t ask the ferrets if they’ve “learned to see”. So the experiments don’t prove that the rewired auditory cortex was giving the ferrets a high-fidelity visual experience. And so they provide only limited evidence in favour of the idea that common principles underlie how different parts of the brain learn.

What evidence is there against the idea of a simple algorithm for intelligence? Some evidence comes from the fields of evolutionary psychology and neuroanatomy. Since the 1960s evolutionary psychologists have discovered a wide range of human universals, complex behaviours common to all humans, across cultures and upbringing. These human universals include the incest taboo between mother and son, the use of music and dance, as well as much complex linguistic structure, such as the use of swear words (i.e., taboo words), pronouns, and even structures as basic as the verb. Complementing these results, a great deal of evidence from neuroanatomy shows that many human behaviours are controlled by particular localized areas of the brain, and those areas seem to be similar in all people. Taken together, these findings suggest that many very specialized behaviours are hardwired into particular parts of our brains.

Some people conclude from these results that separate explanations must be required for these many brain functions, and that as a consequence there is an irreducible complexity to the brain’s function, a complexity that makes a simple explanation for the brain’s operation (and, perhaps, a simple algorithm for intelligence) impossible. For example, one well-known artificial intelligence researcher with this point of view is Marvin Minsky. In the 1970s and 1980s Minsky developed his “Society of Mind” theory, based on the idea that human intelligence is the result of a large society of individually simple (but very different) computational processes which Minsky calls agents. In his book describing the theory, Minsky sums up what he sees as the power of this point of view:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.

In a response* to reviews of his book, Minsky elaborated on the motivation for the Society of Mind, giving an argument similar to that stated above, based on neuroanatomy and evolutionary psychology:

*In “Contemplating Minds: A Forum for Artificial Intelligence”, edited by William J. Clancey, Stephen W. Smoliar, and Mark Stefik (MIT Press, 1994).

We now know that the brain itself is composed of hundreds of different regions and nuclei, each with significantly different architectural elements and arrangements, and that many of them are involved with demonstrably different aspects of our mental activities. This modern mass of knowledge shows that many phenomena traditionally described by commonsense terms like “intelligence” or “understanding” actually involve complex assemblies of machinery.

Minsky is, of course, not the only person to hold a point of view along these lines; I’m merely giving him as an example of a supporter of this line of argument. I find the argument interesting, but don’t believe the evidence is compelling. While it’s true that the brain is composed of a large number of different regions, with different functions, it does not therefore follow that a simple explanation for the brain’s function is impossible. Perhaps those architectural differences arise out of common underlying principles, much as the motion of comets, the planets, the sun and the stars all arise from a single gravitational force. Neither Minsky nor anyone else has argued convincingly against such underlying principles.

My own prejudice is in favour of there being a simple algorithm for intelligence. And the main reason I like the idea, above and beyond the (inconclusive) arguments above, is that it’s an optimistic idea. When it comes to research, an unjustified optimism is often more productive than a seemingly better justified pessimism, for an optimist has the courage to set out and try new things. That’s the path to discovery, even if what is discovered is perhaps not what was originally hoped. A pessimist may be more “correct” in some narrow sense, but will discover less than the optimist.

This point of view is in stark contrast to the way we usually judge ideas: by attempting to figure out whether they are right or wrong. That’s a sensible strategy for dealing with the routine minutiae of day-to-day research. But it can be the wrong way of judging a big, bold idea, the sort of idea that defines an entire research program. Sometimes, we have only weak evidence about whether such an idea is correct or not. We can meekly refuse to follow the idea, instead spending all our time squinting at the available evidence, trying to discern what’s true. Or we can accept that no-one yet knows, and instead work hard on developing the big, bold idea, in the understanding that while we have no guarantee of success, it is only thus that our understanding advances.

With all that said, in its most optimistic form, I don’t believe we’ll ever find a simple algorithm for intelligence. To be more concrete, I don’t believe we’ll ever find a really short Python (or C or Lisp, or whatever) program – let’s say, anywhere up to a thousand lines of code – which implements artificial intelligence. Nor do I think we’ll ever find a really easily-described neural network that can implement artificial intelligence. But I do believe it’s worth acting as though we could find such a program or network. That’s the path to insight, and by pursuing that path we may one day understand enough to write a longer program or build a more sophisticated network which does exhibit intelligence. And so it’s worth acting as though an extremely simple algorithm for intelligence exists.

In the 1980s, the eminent mathematician and computer scientist Jack Schwartz was invited to a debate between artificial intelligence proponents and artificial intelligence skeptics. The debate became unruly, with the proponents making over-the-top claims about the amazing things just round the corner, and the skeptics doubling down on their pessimism, claiming artificial intelligence was outright impossible. Schwartz was an outsider to the debate, and remained silent as the discussion heated up. During a lull, he was asked to speak up and state his thoughts on the issues under discussion. He said: “Well, some of these developments may lie one hundred Nobel prizes away” (ref, page 22). It seems to me a perfect response. The key to artificial intelligence is simple, powerful ideas, and we can and should search optimistically for those ideas. But we’re going to need many such ideas, and we’ve still got a long way to go!

☆☆☆☆☆

