6 月 | 2017 | FreeSandal | 第 5 頁

GoPiGo 小汽車︰格點圖像算術《導言》

若是人們不解釋『光』的『物理定律』與『成像條件』等等,只是說『眼睛』所見之『影像』和『鏡頭』取樣之『圖像』原理相類,都可以用『格點像素』『光強』表達 …… ,恐怕許多『概念』容易語焉不詳,會有『望文生義』誤讀現象。

由於過去曾以若干文本講過 □ ○︰

《日日》唐‧李商隱

日日春光鬥日光,
山城斜路杏花香。
幾時心緒渾無事,
得及游絲百尺長。

日日神經網絡相傍,遲早難免精神緊張,何不回眸來個轉向,欣賞成景出色日光。這光來歷身份非常,能使黑暗無處潛藏,傳聞開天闢地的頭一日,神說、要有光、就有了光。

 

……

怎不就趁暑休陽豔之際,前往光的世界,藉著程式、相機、透鏡、樹莓派,親眼目睹驗證神奇!!探究光的簡單行徑之理,竟能譜出曼妙動人之姿,果然耶量大自然生美,隨機虹霓成象??

─── 摘自《光的世界︰引言》

 

此處只列出綱目︰

光的世界︰幾何光學一

光的世界︰派生科學計算一

光的世界︰矩陣光學一

 

方便舊雨新知瀏覽而已。

這時效法 Jan Erik Solem 先生大著

Programming Computer Vision with Python

序言中之寫作精神︰

In short: act as a source of inspiration for those interested in programming computer vision applications.

About PCV

PCV is a pure Python library for computer vision based on the book “Programming Computer Vision with Python” by Jan Erik Solem.

More details on the book (and a pdf version of the latest draft) can be found at programmingcomputervision.com.

Dependencies

You need to have Python 2.6+ and as a minimum:

  • NumPy
  • Matplotlib

Some parts use:

  • SciPy

Many sections show applications that require smaller specialized Python modules. See the book or the individual examples for full list of these dependencies.

Structure

PCV/ the code.

pcv_book/ contains a clean folder with the code exactly as used in the book at time of publication.

examples/ contains sample code. Some examples use data available at programmingcomputervision.com.

Installation

Open a terminal in the PCV directory and run (with sudo if needed on your system):

python setup.py install


Now you should be able to do

import PCV


in your Python session or script. Try one of the sample code examples to check that the installation works.

License

All code in this project is provided as open source under the BSD license (2-clause “Simplified BSD License”). See LICENSE.txt.


-Jan Erik Solem

 

補點『入門文字』,不亦宜乎!!??


GoPiGo 小汽車︰朝向目標前進《七之少陽》

心想幫小汽車圓夢難以哉!不知

Image processing

In imaging science, image processing is processing of images using mathematical operations by using any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image.[1] Most image-processing techniques involve isolating the individual color planes of an image and treating them as two-dimensional signal and applying standard signal-processing techniques to them. Images are also processed as three-dimensional signals with the third-dimension being time or the z-axis.

Image processing usually refers to digital image processing, but optical and analog image processing also are possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.[2]

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired (via imaging devices such as cameras) from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing out of which a machine/computer/software intends to decipher the physical contents of an image or a sequence of images (e.g., videos or 3D full-body magnetic resonance scans).

In modern sciences and technologies, images also gain much broader scopes due to the ever growing importance of scientific visualization (of often large-scale complex scientific/experimental data). Examples include microarray data in genetic research, or real-time multi-asset portfolio trading in finance. Microscope image processing specializes in the processing of images obtained by microscope.

 

離《W!o+ 的《小伶鼬工坊演義》︰神經網絡與深度學習【引言】》文本序列幾許遠耶?何況那時只想當 Michael Nielsen 先生之著作的說書人乎??莫非『芒種』時節已至,『光』已生『芒』!!多日雖然不見『星光』,夜空彷彿總回盪著曲調嘟囔︰

魯冰花

作詞:姚謙
作曲:陳揚
編曲:陳揚

我知道 半夜的星星會唱歌
想家的夜晚 它就這樣和我一唱一和
我知道 午後的清風會唱歌
童年的蟬聲 它總是跟風一唱一和

當手中握住繁華 心情卻變得荒蕪
才發現世上 一切都會變卦
當青春剩下日記 烏絲就要變成白髮
不變的只有那首歌 在心中來回的唱

天上的星星不說話 地上的娃娃想媽媽
天上的眼睛眨呀眨 媽媽的心呀魯冰花
家鄉的茶園開滿花 媽媽的心肝在天涯
夜夜想起媽媽的話 閃閃的淚光魯冰花
啊~ 閃閃的淚光魯冰花

天上的星星不說話 地上的娃娃想媽媽
天上的眼睛眨呀眨 媽媽的心呀魯冰花
家鄉的茶園開滿花 媽媽的心肝在天涯
夜夜想起媽媽的話 閃閃的淚光魯冰花
啊~ 啊~ 夜夜想起媽媽的話 閃閃的淚光魯冰花
啊~ 啊~ 夜夜想起媽媽的話 閃閃的淚光

 

終究也曾少年陽剛,不畏『原鬼』、『行難』︰

少所見則多所怪,見駱駝言馬腫背。韓愈作『五原』論︰《原道》 、《原性》、《原人》、《原毀》、《原鬼》。之所以終於

【原鬼】

李石曰:「退之作《原鬼》,與晉阮千里相表裏。至作《羅池碑》欲以鬼威喝人,是為子厚求食也。《送窮文》雖出游戲,皆自叛其說也。退之以長慶四年寢疾,帝遣神召之曰:『骨使世與韓氏相仇,欲同力討之,天帝之兵欲行陰誅,乃更藉人力乎?』當是退之數窮識亂,為鬼所乘,不然,平生強聒,至死無用。」

有嘯於梁,「於梁」、「於堂」下,一本各有「者」。從而燭之,無見也。斯鬼乎?曰:非也,鬼無聲。有立於堂,從而視之,無見也。斯鬼乎?曰:非也,鬼無形。有觸吾躬,從而執之,無得也。斯鬼乎?曰:非也,鬼無聲與形,安有氣。「鬼無聲與形」上,或有「鬼無氣」三字,非是。曰:鬼無聲也,無形也,無氣也,果無鬼乎?曰:有形而無聲者,物有之矣,土石是也;有聲而無形者,物有之矣,風霆是也;有聲與形者,物有之矣,人獸是也;無聲與形者,物有之矣,鬼神是也。李石曰:「公子彭生托形於豕,晉文公托聲如牛,韓子謂鬼無聲與形,未盡也。」曰:然則有怪而與民物接者,何也?曰:是有二:有鬼,有物。有怪或作見怪,二下或有說字;或有說字,而無「有鬼有物」四字。漠然無形與聲者,鬼之常也。民有忤於天,有違於民,上民字一作人,下民字或作時。有爽於物,逆於倫,而感於氣,於是乎鬼有形於形,有形或作有托。有憑於聲以應之,而下殃禍焉,皆民之為之也。為下或無之字。其既也,又反乎其常。曰:何謂物?曰:成於形與聲者,土石、風霆、人獸是也;反乎無聲與形者,鬼神是也;反乎或作反其,非是。不能有形與聲,不能無形與聲者,物怪是也。或無「不能有形與聲」六字,或無「不能無形與聲」六字。故其作而接於民也無恆,故有動於民而為禍,亦有動於民而為福,本或先言為福。按《左氏》、《國語》:「周惠王十五年,有神降於莘。王問諸內史過,對曰云云。有得神以興,亦有以亡。夏之興也,祝融降於崇山;其亡也,回祿信於聆隧。商之興也,檮杌次於丕山;其亡也,夷羊在牧。周之興也,鸑鷟鳴於岐山;其衰也,杜伯射王於鄗。」動於民而為禍福,其斯之謂歟?亦有動於民而莫之為禍福,適丁民之有是時也。作《原鬼》。閣、蜀、粹無作字。今按:古書篇題多在後者,如《荀子》諸賦正此類也。但此篇前已有題,不應複出,故且從諸本存作字。

 

,欲初五接『財神』!!初六想《送窮》乎??

古來『知』、『行』二字,添上『難』、『易』判語,加了『先』、『後』助詞,不曉多少文章??當真『行道難』也!!

【行難】

行,下孟切。公《與祠部陸參員外書》,在貞元十八年。此篇言參自越州召拜祠部員外郎,豈在前歟?參字公佐云。

或問「行孰難?」曰:「舍我之矜,從爾之稱,孰能之。」曰:「陸先生參,何如?」按:《李習之集》,參作人參。曰:「先生之賢聞天下,是是而非非。聞下或有於字。貞元中,自越州徵拜祠部員外郎 ,京師之人日造焉,閉門而拒之滿街。愈嘗往間客席,嘗或作常。間或作問。客或作賓。席下或有坐定二字。先生矜語其客曰:『某胥也,某商也,其生某任之,其死某誄之,某與某可人也,可或作何。或從閣、杭、苑作可,云:「可人見《禮記》,鄭注曰:此人可也。」今按:據《禮記》是也。然詳下文韓公之語,似以陸公雖嘗任誄此人,複自疑於有罪,則頗有薄其門地之意。而以薦引之力自多者,恐須作何字,語勢乃協。更詳之。任與誄也非罪歟?』皆曰:『然。』也或作之。罪一作過。曰上或有應字。愈曰:『某之胥,某之商,其得任與誄也,有由乎?抑有罪不足任而誄之邪? 』任而誄或作誄而任。而或作與。先生曰:『否,吾惡其初,惡去聲。不然,任與誄也何尤。』愈曰:『苟如是,先生之言過矣!昔者管敬子取盜二人為大夫於公,《禮記》:「管仲遇盜,取二人焉,上以為公臣,曰:『其所與由闢也,可人也。』」敬子,仲之謚也。趙文子舉管庫之士七十有餘家,《禮記》:「趙文子所舉於晉國,管庫之士七十有餘家。」夫惡求其初?』惡音烏。先生曰:『不然,彼之取者賢也。』愈曰:『先生之所謂賢者,大賢歟?抑賢於人之賢歟?齊也,晉也,且有二與七十 ,而可謂今之天下無其人邪?而可上或有焉字,邪上或有也字。先生之選人也已詳。』先生曰:『然。』愈曰:『聖人不世出,賢人不時出 ,千百歲之間倘有焉;聖人賢人,人,或皆作之,或並有人之二字。世出或作世生,百歲或作百年。不幸而有出於胥商之族者,先生之說傳,吾不忍赤子之不得乳於其母也!』先生曰:『然。』乳於或無於字。他日又往坐焉。或無坐字。先生曰:『今之用人也不詳。位乎朝者,吾取某與某而已,在下者多於朝,凡吾與者若干人。』愈曰:『先生之與者,盡於此乎?其皆賢乎?抑猶有舉其多而缺其少乎?』或無「其皆賢乎」四字。缺或作沒。少或作細,或作一。少下或有者字。今按:此言人之才或不全備,姑舉其可取之多,而略其可棄之少也。先生曰:『固然,吾敢求其全。』其或作於。今按:作其語意為近,但陸公此句正不敢必求全才之意,而下文韓公又以太詳而不早責之,殊不可曉,當更考之。愈曰:『由宰相至百執事凡幾位?由一方至一州凡幾位?先生之得者,無乃不足充其位邪 ?其位下或有也字。不早圖之,一朝而舉焉。今雖詳,其後用也必粗。 』舉焉或作索之,詳下或有且微字,非是。粗,聰徂切。先生曰:『然。子之言,孟軻不如。』」《文錄》作「退語其人曰,乃今吾見孟軻」。

── 摘自《W!o+ 的《小伶鼬工坊演義》︰神經網絡【深度學習】一

 

膽敢開講『格點圖像算術』哩☆


GoPiGo 小汽車︰朝向目標前進《六》

原本順著篇章,此時該介紹『Canny 邊緣檢測算子』

Canny edge detector

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.

Development of the Canny algorithm

Canny edge detection is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. It has been widely applied in various computer vision systems. Canny has found that the requirements for the application of edge detection on diverse vision systems are relatively similar. Thus, an edge detection solution to address these requirements can be implemented in a wide range of situations. The general criteria for edge detection includes:

  1. Detection of edge with low error rate, which means that the detection should accurately catch as many edges shown in the image as possible
  2. The edge point detected from the operator should accurately localize on the center of the edge.
  3. A given edge in the image should only be marked once, and where possible, image noise should not create false edges.

To satisfy these requirements Canny used the calculus of variations – a technique which finds the function which optimizes a given functional. The optimal function in Canny’s detector is described by the sum of four exponential terms, but it can be approximated by the first derivative of a Gaussian.

Among the edge detection methods developed so far, the Canny edge detection algorithm is one of the most strictly defined methods, providing good and reliable detection. Owing to its optimality with respect to the three criteria for edge detection and the simplicity of its implementation, it became one of the most popular algorithms for edge detection.

The original image.

The Canny edge detector applied to a color photograph of a steam engine.

 

講點 OpenCV

Feature Detection

Canny

Finds edges in an image using the [Canny86] algorithm.

C++: void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false )
Python: cv2.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) → edges
C: void cvCanny(const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3 )
Parameters:
  • image – 8-bit input image.
  • edges – output edge map; single channels 8-bit image, which has the same size as image .
  • threshold1 – first threshold for the hysteresis procedure.
  • threshold2 – second threshold for the hysteresis procedure.
  • apertureSize – aperture size for the Sobel() operator.
  • L2gradient – a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).

The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

Note

  • An example on using the canny edge detector can be found at opencv_source_code/samples/cpp/edge.cpp
  • (Python) An example on using the canny edge detector can be found at opencv_source_code/samples/python/edge.py

 

用法,朝向目標前進。或因近日『豪大雨』,惹人心神不寧,還是小汽車被雷打到?居然說︰它用『光流法』,追隨『木牛流馬』,閃電畫處一時不見牛馬蹤跡,無目的之漫蕩至『古柏』處


古柏高士

唐‧杜甫《古柏行》

孔明廟前有老柏,柯如青銅根如石。
霜皮溜雨四十圍,黛色参天二千尺。
君臣已與時際會,樹木猶為人愛惜。
雲来氣接巫峽長,月出寒通雪山白。
憶昨路繞錦亭東,先主武侯同閟宫。
崔嵬枝干郊原古,窈窕丹青户牖空。
落落盤踞雖得地,冥冥孤高多烈風。
扶持自是神明力,正直原因造化功。
大厦如傾要梁棟,萬牛回首丘山重。
不露文章世已惊,未辭翦伐誰能送。
苦心豈免容螻蟻,香葉終經宿鸞鳳。
志士幽人莫怨嗟,古來材大難為用。

時光流影是能夠回頭的嗎?

─── 摘自《時間是什麼??》

 

恰遇『參圓格物』的洛水『小神龜』︰

據說小海龜 『參圓格物』多時,終於發現『直達』 goto 和『轉向』 turn 的匯通處。由於言之未詳史無所載,此刻強為解說,難免生吞活剝也。簡言之︰畫圓之理,『曲率』處處均一,故而重複進一步 forward 1 恆向右轉一度 right 1 【※ or left 1 】可以擬 ○ 矣。

─── 摘自《L4K ︰ Python Turtle《八》》
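小海龜「進一步、轉一度」擬圓之說,可以不開 turtle 視窗、純用座標計算驗證(以下為筆者之模擬示意,非原文程式)︰

```python
import math

# 模擬「forward 1 之後 right 1」重複 360 次的軌跡
x = y = 0.0
heading = 0.0  # 朝向,單位︰度
for _ in range(360):
    x += math.cos(math.radians(heading))
    y += math.sin(math.radians(heading))
    heading -= 1.0  # 右轉一度

# 轉滿 360 度應回到出發點;軌跡即正 360 邊形,近似一圓
# 其外接圓半徑 R = 0.5 / sin(pi/360) ≈ 57.3(步)
R = 0.5 / math.sin(math.pi / 360)
print(math.hypot(x, y) < 1e-9, round(R, 1))
```

三百六十個單位向量方向均勻分佈,向量和恰為零,故終點回到原點;此即『曲率處處均一』之數值佐證。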

 

告知以『大道務本』,否則勢必『失之豪釐,差以千里』︰

或許出於《周髀算經》

「髀」,拼音:bì,注音:ㄅㄧˋ,也簡稱《周髀》,是中國古代一本數學專業書籍,在中國唐代收入《算經十書》,並為《十經》的第一部。

周髀的成書年代至今沒有統一的說法,有人認為是周公所作,也有人認為是在西漢末年寫成。

《周髀算經》是中國歷史上最早的一部天文曆算著作,也是中國流傳至今最早的數學著作,是後世數學的源頭,其算術化傾向決定中國數學發展的性質,歷代數學家奉為經典。

《周髀算經》《卷上》

昔者周公問於商高曰:「竊聞乎大夫善數也,請問古者包犧周天曆度。夫天不可階而升,地不可得尺寸而度。請問數安從出?」

商高曰:「數之法,出於圓方。圓出於方,方出於矩。矩出於九九八十一。故折矩,以為句廣三,股脩四,徑隅五。既方之外,半其一矩。環而共盤,得成三、四、五。兩矩共長二十有五,是謂積矩。故禹之所以治天下者,此數之所生也。」

句股圓方圖:

句股圓方圖1

句股圓方圖2

周公曰:「大哉言數!請問用矩之道?」

商高曰:「平矩以正繩,偃矩以望高,覆矩以測深,臥矩以知遠,環矩以為圓,合矩以為方。方屬地,圓屬天,天圓地方。方數為典,以方出圓。笠以寫天。天青黑,地黃赤。天數之為笠也,青黑為表,丹黃為裏,以象天地之位。是故知地者智,知天者聖。智出於句,句出於矩。夫矩之於數,其裁制萬物,唯所為耳。」周公曰:「善哉!」

昔者榮方問於陳子,曰:「今者竊聞夫子之道。知日之高大,光之所照,一日所行,遠近之數,人所望見,四極之窮,列星之宿,天地之廣袤,夫子之道皆能知之。其信有之乎?」陳子曰:「然。」榮方曰:「方雖不省,願夫子幸而說之。今若方者可教此道邪?」陳子曰:「然。此皆算術之所及。子之於算,足以知此矣。若誠累思之。」

於是榮方歸而思之,數日不能得。復見陳子曰:「方思之不能得,敢請問之。」陳子曰:「思之未熟。此亦望遠起高之術,而子不能得,則子之於數,未能通類。是智有所不及,而神有所窮。夫道術,言約而用愽者,智類之明。問一類而以萬事達者,謂之知道。今子所學,算數之術,是用智矣,而尚有所難,是子之智類單。夫道術所以難通者,既學矣,患其不博。既博矣,患其不習。既習矣,患其不能知。故同術相學,同事相觀。此列士之愚智,賢不肖之所分。是故能類以合類,此賢者業精習智之質也。夫學同業而不能入神者,此不肖無智而業不能精習。是故算不能精習,吾豈以道隱子哉?固復熟思之。」

榮方復歸,思之,數日不能得。復見陳子曰:「方思之以精熟矣。智有所不及,而神有所窮,知不能得。願終請說之。」陳子曰:「復坐,吾語汝。」於是榮方復坐而請。陳子說之曰:「夏至南萬六千里,冬至南十三萬五千里,日中立竿測影。此一者天道之數。周髀長八尺,夏至之日晷一尺六寸。髀者,股也。正晷者,句也。正南千里,句一尺五寸。正北千里,句一尺七寸。日益表南,晷日益長。候句六尺,即取竹,空徑一寸,長八尺,捕影而視之,空正掩日,而日應空之孔。由此觀之,率八十寸而得徑一寸。故以句為首,以髀為股。從髀至日下六萬里,而髀無影。從此以上至日,則八萬里。若求邪至日者,以日下為句,日高為股。句、股各自乘,并而開方除之,得邪至日,從髀所旁至日所十萬里。以率率之,八十里得徑一里。十萬里得徑千二百五十里。故曰,日晷徑千二百五十里。」
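陳子「句股各自乘,并而開方除之」一段,可逕以程式核算(數字皆出自上文)︰

```python
import math

# 從髀至日下六萬里為句,日高八萬里為股
gou, gu = 60000, 80000
xie = math.hypot(gou, gu)   # 句股各自乘,并而開方除之,得邪至日
print(xie)                   # 十萬里

# 「以率率之,八十里得徑一里」,故十萬里得日晷徑
print(xie / 80)              # 千二百五十里
```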

日高圖


法曰:「周髀長八尺,句之損益寸千里。故曰:極者,天廣袤也。今立表高八尺以望極,其句一丈三寸。由此觀之,則從周北十萬三千里而至極下。」榮方曰:「周髀者何?」

陳子曰:「古時天子治周,此數望之從周,故曰周髀。髀者,表也。日夏至南萬六千里,日冬至南十三萬五千里,日中無影。以此觀之,從南至夏至之日中十一萬九千里。北至其夜半亦然。凡徑二十三萬八千里。此夏至日道之徑也,其周七十一萬四千里。從夏至之日中,至冬至之日中十一萬九千里。北至極下亦然。則從極南至冬至之日中二十三萬八千里。從極北至其夜半亦然。凡徑四十七萬六千里。此冬至日道徑也,其周百四十二萬八千里。從春秋分之日中北至極下十七萬八千五百里。從極下北至其夜半亦然。凡徑三十五萬七千里,周一百七萬一千里。故曰:月之道常緣宿,日道亦與宿正。南至夏至之日中,北至冬至之夜半,南至冬至之日中,北至夏至之夜半,亦徑三十五萬七千里,周一百七萬一千里。

「春分之日夜分以至秋分之日夜分,極下常有日光。秋分之日夜分以至春分之日夜分,極下常無日光。故春秋分之日夜分之時,日所照適至極,陰陽之分等也。冬至、夏至者,日道發歛之所生也至,晝夜長短之所極。春秋分者,陰陽之脩,晝夜之象。晝者陽,夜者陰。春分以至秋分,晝之象。秋分至春分,夜之象。故春秋分之日中光之所照北極下,夜半日光之所照亦南至極。此日夜分之時也。故曰:日照四旁各十六萬七千里。

「人望所見,遠近宜如日光所照。從周所望見北過極六萬四千里,南過冬至之日三萬二千里。夏至之日中,光南過冬至之日中光四萬八千里,南過人所望見一萬六千里,北過周十五萬一千里,北過極四萬八千里。冬至之夜半日光南不至人所見七千里,不至極下七萬一千里。夏至之日中與夜半日光九萬六千里過極相接。冬至之日中與夜半日光不相及十四萬二千里,不至極下七萬一千里。夏至之日正東西望,直周東西日下至周五萬九千五百九十八里半。冬至之日正東西方不見日。以算求之,日下至周二十一萬四千五百五十七里半。凡此數者,日道之發歛。冬至、夏至,觀律之數,聽鐘之音。冬至晝,夏至夜。差數及,日光所還觀之,四極徑八十一萬里,周二百四十三萬里。

「從周至南日照處三十萬二千里,周北至日照處五十萬八千里,東西各三十九萬一千六百八十三里半。周在天中南十萬三千里,故東西矩中徑二萬六千六百三十二里有奇。周北五十萬八千里。冬至日十三萬五千里。冬至日道徑四十七萬六千里,周一百四十二萬八千里。日光四極當周東西各三十九萬一千六百八十三里有奇。」

此方圓之法。

萬物周事而圓方用焉,大匠造制而規矩設焉,或毀方而為圓,或破圓而為方。方中為圓者謂之圓方,圓中為方者謂之方圓也。

七衡圖


凡為此圖,以丈為尺,以尺為寸,以寸為分,分一千里。凡用繒方八尺一寸。今用繒方四尺五分,分為二千里。

呂氏曰:「凡四海之內,東西二萬八千里,南北二萬六千里。」

凡為日月運行之圓周,七衡周而六間,以當六月節。六月為百八十二日、八分日之五。故日夏至在東井極內衡,日冬至在牽牛極外衡也。衡復更終冬至。故曰:一歲三百六十五日、四分日之一,一歲一內極,一外極。三十日、十六分日之七,月一外極,一內極。是故衡之間萬九千八百三十三里、三分里之一,即為百步。欲知次衡徑,倍而增內衡之徑。二之以增內衡徑。次衡放此。

內一衡徑二十三萬八千里,周七十一萬四千里。分為三百六十五度、四分度之一,度得一千九百五十四里二百四十七步、千四百六十一分步之九百三十三。

次二衡徑二十七萬七千六百六十六里二百步,周八十三萬三千里。分里為度,度得二千二百八十里百八十八步、千四百六十一分步之千三百三十二。

次三衡徑三十一萬七千三百三十三里一百步,周九十五萬二千里。分為度,度得二千六百六里百三十步、千四百六十一分步之二百七十。

次四衡徑三十五萬七千里,周一百七萬一千里。分為度,度得二千九百三十二里七十一步、千四百一十分步之六百六十九。

次五衡徑三十九萬六千六百六十六里二百步,周一百一十九萬里。分為度,度得三千二百五十八里十二步、千四百六十一分步之千六十八。

次六衡徑四十三萬六千三百三十三里一百步,周一百三十萬九千里。分為度,度得三千五百八十三里二百五十四步、千四百六十一分步之六。

次七衡徑四十七萬六千里,周一百四十二萬八千里。分為度,得三千九百九里一百九十五步、千四百六十一分步之四百五。
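內一衡「度得一千九百五十四里二百四十七步、千四百六十一分步之九百三十三」之數,可用分數運算核驗(古制一里三百步,周天三百六十五度四分度之一)︰

```python
from fractions import Fraction

ZHOU = 714000                        # 內一衡周七十一萬四千里
DU = Fraction(365) + Fraction(1, 4)  # 周天 365 又 1/4 度

per_du = Fraction(ZHOU) / DU         # 每度之里數(精確分數)
li, rem_li = divmod(per_du.numerator, per_du.denominator)
bu_frac = Fraction(rem_li, per_du.denominator) * 300  # 餘數化為「步」
bu, rem_bu = divmod(bu_frac.numerator, bu_frac.denominator)

print(li, bu, Fraction(rem_bu, bu_frac.denominator))
# 1954 里 247 步,餘 311/487 步(即約分前之 933/1461 步)
```

與經文所載絲毫不差;其餘各衡同法可驗。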

其次,日冬至所北照,過北衡十六萬七千里。為徑八十一萬里,周二百四十三萬里。分為三百六十五度四分度之一,度得六千六百五十二里二百九十三步、千四百六十一分步之三百二十七。過此而往者,未之或知。或知者,或疑其可知,或疑其難知。此言上聖不學而知之。故冬至日晷丈三尺五寸,夏至日晷尺六寸。冬至日晷長,夏至日晷短。日晷損益,寸差千里。故冬至、夏至之日,南北遊十一萬九千里,四極徑八十一萬里,周二百四十三萬里。分為度,度得六千六百五十二里二百九十三步、千四百六十一分步之三百二十七。此度之相去也。

其南北游,日六百五十一里一百八十二步、一千四百六十一分步之七百九十八。

術曰:置十一萬九千里為實,以半歲一百八十二日、八分日之五為法,而通之,得九十五萬二千,為實。所得一千四百六十一為法,除之。實如法得一里。不滿法者,三之,如法得百步。不滿法者,十之,如法得十步。不滿法者,十之,如法得一步。不滿法者,以法命之。
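上引「術曰」本身就是一套逐位取商的除法演算法,逐句轉寫即可驗得前文「日六百五十一里一百八十二步、千四百六十一分步之七百九十八」︰

```python
# 置十一萬九千里為實;以半歲 182 又 8 分日之 5 為法;而通之
shi = 119000 * 8          # 得九十五萬二千,為實
fa = 182 * 8 + 5          # 所得一千四百六十一,為法

li, r = divmod(shi, fa)   # 實如法得一里
r *= 3                    # 不滿法者三之(一里三百步,先取「百步」位)
bai, r = divmod(r, fa)    # 如法得百步
r *= 10                   # 十之
shi_bu, r = divmod(r, fa) # 如法得十步
r *= 10                   # 十之
bu, r = divmod(r, fa)     # 如法得一步;不滿法者,以法命之

steps = bai * 100 + shi_bu * 10 + bu
print(li, steps, r, fa)   # 651 182 798 1461
```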

的求『日高』之法,大概就是

失之豪釐,差以千里

的成因。


四庫全書海島算經


如果用《海島算經》

三國時代魏國數學家劉徽所著的測量學著作,原為《劉徽九章算術注》第九卷勾股章內容的延續和發展,名為《九章重差圖》,附於《劉徽九章算術注》之後作為第十章。唐代將《重差》從《九章》分離出來,單獨成書,按第一題「今有望海島」,取名為《海島算經》,是《算經十書》之一。

劉徽《海島算經》「使中國測量學達到登峰造極的地步」,使「中國在數學測量學的成就,超越西方約一千年」(美國數學家弗蘭克·斯委特茲語)

之圖來作『三角測量』的計算︰

\overline{GH} = D
\overline{BG} = X
\overline{AB} = H
 \angle AHB = \alpha
 \angle AGB = \beta

\tan(\alpha) = \frac{H}{D + X}
\tan(\beta) = \frac{H}{X}

(海島測量示意圖)

可以得到

 H = D \cdot \tan(\alpha) \cdot \frac{1}{1 - \frac{\tan(\alpha)}{\tan(\beta)}}

然而『天很高,日很遠』,因此 \angle \beta \approx \angle \alpha ,故而很難『度量』得『精準』,一點點『角度』之『誤差』就產生了那個

失之豪釐,差以千里

的吧!!

─── 摘自《失之豪釐,差以千里!!《上》》
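上文海島公式對角度誤差之敏感,可用假設數值(純屬示意,非原文資料)演示︰當目標極遠、 \beta 僅略大於 \alpha 時,千分之一度的量測誤差即足以讓估出的高度偏差近三成︰

```python
import math

# 假設情境︰目標很遠(X 遠大於 D),故 beta 僅略大於 alpha
H_true, X, D = 80000.0, 1_000_000.0, 1_000.0

alpha = math.atan2(H_true, D + X)   # 前表仰角
beta = math.atan2(H_true, X)        # 後表仰角

def island_height(a, b, d):
    """依 H = D·tan(a) / (1 - tan(a)/tan(b)) 求高。"""
    ta, tb = math.tan(a), math.tan(b)
    return d * ta / (1 - ta / tb)

H_est = island_height(alpha, beta, D)  # 無誤差時可還原 H_true
H_bad = island_height(alpha + math.radians(0.001), beta, D)

print(round(H_est), round(abs(H_bad - H_true) / H_true, 2))
```

此即『失之豪釐,差以千里』之數值寫照。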

 

何況縱會『直行』、『追跡』,且不知『轉向』、『感測』之法,未自主『行不得』也。還不快求『智慧』去。

小汽車言畢問到︰如何能得『智慧』?

當真

三更有夢書當枕,

午夜何因怕夢回。

怎曉已答之以勤『讀書』哩

Programming Computer Vision with Python

PCV – an open source Python module for computer vision


PCV is a pure Python library for computer vision based on the book “Programming Computer Vision with Python” by Jan Erik Solem.
Available from Amazon and O’Reilly.


The final pre-production draft of the book (as of March 18, 2012) is available under a Creative Commons license. Note that this version does not have the final copy edits and last minute fixes. If you like the book, consider supporting O’Reilly and me by purchasing the official version.

The final draft pdf is here.


GoPiGo 小汽車︰朝向目標前進《五》

自由之世界需要活潑的想法︰

Computer Vision platform using Python.

What is it?

SimpleCV is an open source framework for building computer vision applications. With it, you get access to several high-powered computer vision libraries such as OpenCV – without having to first learn about bit depths, file formats, color spaces, buffer management, eigenvalues, or matrix versus bitmap storage. This is computer vision made easy.

 

創造人人可以參與的多元學習環境︰

SimpleCV Tutorial

About

SimpleCV is an open source framework — meaning that it is a collection of libraries and software that you can use to develop vision applications. It lets you work with the images or video streams that come from webcams, Kinects, FireWire and IP cameras, or mobile phones. It helps you build software to make your various technologies not only see the world, but understand it too. SimpleCV is free to use, and because it’s open source, you can also modify the code if you choose to. It’s written in Python, and runs on Mac, Windows, and Ubuntu Linux. It’s developed by the engineers at Sight Machine, and it’s licensed under the BSD license.

Note: These examples are written for SimpleCV version 1.3 or greater. Certain functions may not work in earlier versions. For best results, download the latest version.

 

此處給出樹莓派派生二 python2 的 SimpleCV 安裝方法︰

sudo apt-get install ipython
sudo pip install scipy
sudo pip install numpy
sudo apt-get install python-opencv
sudo pip install https://github.com/sightmachine/SimpleCV/zipball/master
sudo pip install svgwrite
sudo apt-get install lsof

 

並驗之以

Getting Started

範例︰

sudo modprobe bcm2835-v4l2
pi@raspberrypi:~ $ python
Python 2.7.9 (default, Sep 17 2016, 20:26:04) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from SimpleCV import Camera
>>> cam = Camera()
>>> while True:
...     img = cam.getImage()
...     img = img.binarize()
...     img.drawText("Hello World!")
...     img.show()
... 
<SimpleCV.Display Object resolution:((640, 480)), Image Resolution: (640, 480) at memory location: (0x6e355aa8)>
<SimpleCV.Display Object resolution:((640, 480)), Image Resolution: (640, 480) at memory location: (0x6e355c60)>
<SimpleCV.Display Object resolution:((640, 480)), Image Resolution: (640, 480) at memory location: (0x6e355da0)>

 

 

好學者或喜多讀書︰


Practical Computer Vision with SimpleCV
The SimpleCV book was written to supplement the SimpleCV framework.  The book is highly recommended if you are new to either SimpleCV or computer vision in general.  It will serve as a launch point for you to dive into learning SimpleCV.  You can always refer to the tutorial for a very basic introduction, but the book will give much broader insight into computer and machine vision applications so you can start writing your own.
 
 
Back of Book:
Learn how to build your own computer vision (CV) applications quickly and easily with SimpleCV, an open source framework written in Python. Through examples of real-world applications, this hands-on guide introduces you to basic CV techniques for collecting, processing, and analyzing streaming digital images. You’ll then learn how to apply these methods with SimpleCV, using sample Python code. All you need to get started is a Windows, Mac, or Linux system, and a willingness to put CV to work in a variety of ways. Programming experience is optional.
  • Capture images from several sources, including webcams, smartphones, and Kinect
  • Filter image input so your application processes only necessary information
  • Manipulate images by performing basic arithmetic on pixel values
  • Use feature detection techniques to focus on interesting parts of an image
  • Work with several features in a single image, using the NumPy and SciPy Python libraries
  • Learn about optical flow to identify objects that change between two image frames
  • Use SimpleCV’s command line and code editor to run examples and test techniques

Purchase the SimpleCV Book


 

樂看程式碼吧☆


GoPiGo 小汽車︰朝向目標前進《四》

然而即使小汽車有『雷射指標』,處於

火星探測

火星探測是指人類通過向火星發射太空探測器,對火星進行的科學探測活動。人類從1600年代開始使用望遠鏡對火星進行觀測。美國的水手4號於1964年12月28日發射升空,這是有史以來第一枚成功到達火星並發回數據的探測器。

Sojourner takes its Alpha Proton X-ray Spectrometer measurement of the Yogi Rock

動畫展示各個火星探測器的登陸點

概述

火星是太陽系八大行星之一,按離太陽由近及遠的次序排列為第四顆。在太陽系八大行星之中,火星也是除了金星以外,距離地球最近的行星。大約每隔26個月就會發生一次火星沖日,地球與火星的距離在沖日期間會達到極近值,通常只有不足1億千米,而在火星發生大衝時,這個距離甚至不足6000萬千米。火星沖日意味著這時可以使用較小花費將探測器送往火星,火星探測通常也會利用此天文現象來運作。

到目前為止,已經有超過30枚探測器到達過火星,它們對火星進行了詳細的考察,並向地球發回了大量數據。同時火星探測也充滿了坎坷,大約三分之二的 探測器,特別是早期發射的探測器,都沒有能夠成功完成它們的使命。但是火星對於人類卻有一種特殊的吸引力,因為它是太陽系中最近似地球的天體之一。火星赤 道平面與公轉軌道平面的交角非常接近於地球,這使它也有類似地球的四季交替,同時,火星的自轉周期為24小時37分,這使火星上的一天幾乎和地球上的一樣 長。

 

環境,恐怕前途渺渺茫茫,難有用武之地也。那麼可有方法能從『動態影像』中推估『運動』呢?

Motion estimation

Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion is in three dimensions but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.

Motion vectors that result from a movement into the  z-plane of the image, combined with a lateral movement to the lower-right. This is a visualization of the motion estimation performed in order to compress an MPEG movie.

Related terms

More often than not, the term motion estimation and the term optical flow are used interchangeably. It is also related in concept to image registration and stereo correspondence. In fact all of these terms refer to the process of finding corresponding points between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are “usually” the same point in that scene or on that object. Before we do motion estimation, we must define our measurement of correspondence, i.e., the matching metric, which is a measurement of how similar two image points are. There is no right or wrong here; the choice of matching metric is usually related to what the final estimated motion is used for as well as the optimisation strategy in the estimation process.

 

反思『運動』是『立體』的!投射到『平面』之『圖框』上?焉能完全『適切』耶??

此所以『光流法』探討相對於『觀察者』之『顯運動』 Apparent motion 哩!!

Optical flow

Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene.[1][2] The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world.[3] Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.[4]

The term optical flow is also used by roboticists, encompassing related techniques from image processing and control of navigation including motion detection, object segmentation, time-to-contact information, focus of expansion calculations, luminance, motion compensated encoding, and stereo disparity measurement.[5][6]

 
The optic flow experienced by a rotating observer (in this case a fly). The direction and magnitude of optic flow at each location is represented by the direction and length of each arrow.

Estimation

Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements.[6] Fleet and Weiss provide a tutorial introduction to gradient based optical flow.[7] John L. Barron, David J. Fleet, and Steven Beauchemin provide a performance analysis of a number of optical flow techniques. It emphasizes the accuracy and density of measurements.[8]

The optical flow methods try to calculate the motion between two image frames which are taken at times  t and  t+\Delta t at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.

For a 2D+t dimensional case (3D or n-D cases are similar) a voxel at location  (x,y,t) with intensity  I(x,y,t) will have moved by  \Delta x,  \Delta y and  \Delta t between the two image frames, and the following brightness constancy constraint can be given:

  I(x,y,t) = I(x+\Delta x, y + \Delta y, t + \Delta t)

Assuming the movement to be small, the image constraint at  I(x,y,t) with Taylor series can be developed to get:

  I(x+\Delta x,y+\Delta y,t+\Delta t) = I(x,y,t) + \frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t+H.O.T.

From these equations it follows that:

  \frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t = 0

or

  \frac{\partial I}{\partial x}\frac{\Delta x}{\Delta t}+\frac{\partial I}{\partial y}\frac{\Delta y}{\Delta t}+\frac{\partial I}{\partial t}\frac{\Delta t}{\Delta t} = 0

which results in

  \frac{\partial I}{\partial x}V_x+\frac{\partial I}{\partial y}V_y+\frac{\partial I}{\partial t} = 0

where  V_x,  V_y are the  x and  y components of the velocity or optical flow of  I(x,y,t), and  \tfrac{\partial I}{\partial x},  \tfrac{\partial I}{\partial y} and  \tfrac{\partial I}{\partial t} are the derivatives of the image at  (x,y,t) in the corresponding directions.  I_x,  I_y and  I_t can be written for the derivatives in the following.

Thus:

  I_xV_x+I_yV_y=-I_t

or

  \nabla I^T\cdot\vec{V} = -I_t

This is an equation in two unknowns and cannot be solved as such. This is known as the aperture problem of the optical flow algorithms. To find the optical flow another set of equations is needed, given by some additional constraint. All optical flow methods introduce additional conditions for estimating the actual flow.

 

就算如是,都還得注意『視野』呀!!

The aperture problem

The aperture problem. The grating appears to be moving down and to the right, perpendicular to the orientation of the bars. But it could be moving in many other directions, such as only down, or only to the right. It is impossible to determine unless the ends of the bars become visible in the aperture.

Each neuron in the visual system is sensitive to visual input in a small part of the visual field, as if each neuron is looking at the visual field through a small window or aperture. The motion direction of a contour is ambiguous, because the motion component parallel to the line cannot be inferred based on the visual input. This means that a variety of contours of different orientations moving at different speeds can cause identical responses in a motion sensitive neuron in the visual system.

Individual neurons early in the visual system (V1) respond to motion that occurs locally within their receptive field. Because each local motion-detecting neuron will suffer from the aperture problem, the estimates from many neurons need to be integrated into a global motion estimate. This appears to occur in Area MT/V5 in the human visual cortex.

See MIT example

 

若是只讀 OpenCV 範例︰

Optical Flow

Goal

In this chapter,
  • We will understand the concepts of optical flow and its estimation using Lucas-Kanade method.
  • We will use functions like cv2.calcOpticalFlowPyrLK() to track feature points in a video.

Optical Flow

Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the movement of object or camera. It is a 2D vector field where each vector is a displacement vector showing the movement of points from first frame to second. Consider the image below (Image Courtesy: Wikipedia article on Optical Flow).

Optical Flow

It shows a ball moving in 5 consecutive frames. The arrow shows its displacement vector. Optical flow has many applications in areas like :

  • Structure from Motion
  • Video Compression
  • Video Stabilization …

Optical flow works on several assumptions:

  1. The pixel intensities of an object do not change between consecutive frames.
  2. Neighbouring pixels have similar motion.

Consider a pixel I(x,y,t) in first frame (Check a new dimension, time, is added here. Earlier we were working with images only, so no need of time). It moves by distance (dx,dy) in next frame taken after dt time. So since those pixels are the same and intensity does not change, we can say,

I(x,y,t) = I(x+dx, y+dy, t+dt)

Then take the Taylor series approximation of the right-hand side, remove common terms and divide by dt to get the following equation:

f_x u + f_y v + f_t = 0 \;

where:

f_x = \frac{\partial f}{\partial x} \; ; \; f_y = \frac{\partial f}{\partial y} \qquad u = \frac{dx}{dt} \; ; \; v = \frac{dy}{dt}

The above equation is called the Optical Flow equation. In it, we can find f_x and f_y; they are image gradients. Similarly f_t is the gradient along time. But (u,v) is unknown. We cannot solve this one equation with two unknown variables. So several methods are provided to solve this problem and one of them is Lucas-Kanade.

Lucas-Kanade method

We have seen an assumption before, that all the neighbouring pixels will have similar motion. The Lucas-Kanade method takes a 3×3 patch around the point, so all 9 points have the same motion. We can find (f_x, f_y, f_t) for these 9 points, so our problem becomes solving 9 equations with two unknown variables, which is over-determined. A better solution is obtained with the least-squares fit method. Below is the final solution, a two-equation, two-unknown problem solved to get:

\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum_{i}{f_{x_i}}^2 & \sum_{i}{f_{x_i} f_{y_i} } \\ \sum_{i}{f_{x_i} f_{y_i}} & \sum_{i}{f_{y_i}}^2 \end{bmatrix}^{-1} \begin{bmatrix} - \sum_{i}{f_{x_i} f_{t_i}} \\ - \sum_{i}{f_{y_i} f_{t_i}} \end{bmatrix}

( Check similarity of inverse matrix with Harris corner detector. It denotes that corners are better points to be tracked.)

So from the user's point of view the idea is simple: we give some points to track, and we receive the optical flow vectors of those points. But again there are some problems. Until now, we were dealing with small motions, so it fails when there is large motion. So again we go for pyramids. When we go up in the pyramid, small motions are removed and large motions become small motions. So applying Lucas-Kanade there, we get optical flow along with the scale.
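上面那條 2×2 最小平方解,可用 NumPy 對一張合成的平移影像直接驗證(單一大視窗、純屬示意,非 OpenCV 之實作)︰

```python
import numpy as np

# 高斯亮斑影像,與其沿 +x 平移一像素後之下一張影格
h = w = 65
yy, xx = np.mgrid[0:h, 0:w].astype(float)
I1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
I2 = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)

Iy, Ix = np.gradient(I1)   # 空間梯度(np.gradient 先回傳列方向、再行方向)
It = I2 - I1               # 時間梯度

# 把整張影像當成一個視窗,套用上式之正規方程求 (u, v)
A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
u, v = np.linalg.solve(A, b)
print(round(u, 2), round(v, 2))   # 應接近 1.0 與 0.0
```

解出之 (u, v) 即該視窗之光流向量,與實際平移 (1, 0) 相符;OpenCV 的 cv2.calcOpticalFlowPyrLK() 則在多重解析度金字塔上對許多小視窗重複此計算。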

 

是否略嫌不足啊??

何不就追本溯源矣☆

-----

KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker

-----

KLT is an implementation, in the C programming language, of a feature tracker for the computer vision community.  The source code is in the public domain, available for both commercial and non-commercial use.

The tracker is based on the early work of Lucas and Kanade [1], was developed fully by Tomasi and Kanade [2], and was explained clearly in the paper by Shi and Tomasi [3]. Later, Tomasi proposed a slight modification which makes the computation symmetric with respect to the two images — the resulting equation is derived in the unpublished note by myself [4].  Briefly, good features are located by examining the minimum eigenvalue of each 2 by 2 gradient matrix, and features are tracked using a Newton-Raphson method of minimizing the difference between the two windows. Multiresolution tracking allows for relatively large displacements between images.  The affine computation that evaluates the consistency of features between non-consecutive frames [3] was implemented by Thorsten Thormaehlen several years after the original code and documentation were written.

Some Matlab interface routines:  klt_read_featuretable.m

Note:  An alternate Lucas-Kanade implementation can be found in Intel’s OpenCV library.  This implementation, described in the note by Bouguet, does a better job of handling features near the image borders, and it is more computationally efficient (approximately 30% on my desktop system).  However, it does not contain the affine consistency check.  Another alternative is GPU_KLT, which is an implementation of KLT for a graphics processing unit (GPU), which speeds up the run time considerably.  A Matlab implementation of a single template tracker is available at Lucas-Kanade 20 Years On. A Java implementation is available here.

References

[1] Bruce D. Lucas and Takeo Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence, pages 674-679, 1981.
[2] Carlo Tomasi and Takeo Kanade. Detection and Tracking of Point Features. Carnegie Mellon University Technical Report CMU-CS-91-132, April 1991.
[3] Jianbo Shi and Carlo Tomasi. Good Features to Track. IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994.
[4] Stan Birchfield. Derivation of Kanade-Lucas-Tomasi Tracking Equation. Unpublished, January 1997.

