【鼎革‧革鼎】︰ Raspbian Stretch 《三‧丁》

As the saying goes, "a discontented heart is a snake that would swallow an elephant" (人心不足蛇吞象), a proverb for excessive greed. According to the Wikipedia entry, the phrase traces back to the tale of the Ba snake devouring an elephant (『巴蛇食象』) in the Classic of Mountains and Seas (《山海經》):

山海經校注‧海內南經 (Annotated Classic of Mountains and Seas: Regions Within the Seas, South)

  (Shanhaijing, Book 10 · New Exegesis of the Sea Classics, Volume 5)

15. The Ba snake swallows elephants and casts out their bones after three years; a gentleman who eats of them will suffer no ailment of the heart or belly①. As a snake it is green, yellow, red, and black②. One account says it is a black snake with a green head③, dwelling west of the rhinoceros.

①  Guo Pu: "The ran snake of today's south (written 虫+丹; the Tripitaka edition has 蟒蛇, 'python' ──珂) swallows deer; once the deer has rotted inside it, the snake winds itself about a tree so that the bones all work out between its scales. The Ba snake is of that kind. The Songs of Chu say: 'A snake that swallows an elephant, how great must it be?' Commentators put its length at a thousand xun." Hao Yixing: "The 'Heavenly Questions' of the Songs of Chu as now received reads '一蛇吞象' (a snake swallowed an elephant), which differs from Guo's citation; Wang Yi's commentary quotes this classic as '靈蛇吞象' (the numinous snake swallowed an elephant), which also differs from the present text." Ke's note: the "Benjing" chapter of the Huainanzi says, "Yi severed the long serpent at Dongting." The Lushi, Later Records 10, writes 長它 for 修蛇, and Luo Ping's commentary says: "長它 is the so-called Ba snake, of the region of the Jiang and Yue. Its tomb is the Ba mound of present-day Baling, beside the prefectural seat. The Jiangyuan ji (that is, the Jiang ji, by Yu Zhongyong of the Liu-Song ──珂) says: 'Yi butchered the Ba snake at Dongting; its bones heaped up like a hill, whence the name Baling.'" The Yueyang fengtu ji (by Fan Zhiming of the Song) likewise says: "Today the Ba snake's □ stands beside the prefectural hall, towering high and thickly overgrown with grass and trees; there is also a temple of the Ba snake, inside the Yueyang gate." It says further: "Elephant-Bone Mountain: the Shanhaijing says 'the Ba snake swallowed an elephant'; its bones were laid bare here, and the lake beside the mountain is called Elephant-Bone Harbor." All of these are legends spun out of this classic and the Huainanzi; yet since tomb and temple, mountain and harbor are spoken of with such assurance, the tale has plainly circulated among the people for a very long time.

② Ke's note: this says that its markings are brilliantly variegated.

③  Ke's note: the "Classic of Regions Within the Seas" says: "There is Mount Basui, from which the River 澠 issues. There is also the state of Zhujuan, where there is a black snake with a green head that eats elephants." That is this creature. 巴 in small-seal script is written □; the Shuowen, chapter 14, says: "A creeping thing; some say, the elephant-eating snake. A pictograph." What the graph depicts, then, is the distended belly of a snake with something inside it. The Shanhaijing often speaks of great snakes. The "Classic of the Northern Mountains" says: "On Mount Daxian there is a snake named the long snake, bristled like a boar and sounding like drums and watchmen's clappers." The third "Northern" classic says: "On Mount Chunyuwufeng there is a great snake, red of head and white of body, that lows like an ox; wherever it appears, that district suffers great drought." Such snakes might well "swallow an elephant". The "Yeyu River" chapter of the Shuijingzhu says: "The mountains abound in great snakes, called ran snakes (髯蛇), ten zhang long and seven or eight chi around, which lie up in the trees waiting for deer; when a deer passes, the snake lowers its head and coils about it. Presently the deer dies; the snake first drenches it through and then swallows it, the horns and bones all piercing out through its hide. When the mountain tribes find a snake lying torpid, they pin it from head to tail with great bamboo stakes, kill it, and eat it as a delicacy." This is the (虫+丹) snake of Guo's note.

───

Yet judging from the Shanhaijing's own words, "a gentleman who eats of them will suffer no ailment of the heart or belly", there is no hint of greed in the tale at all! And Guo Pu took the Ba snake for a kind of python. Small wonder, then, that TensorFlow can finish a handwritten-Arabic-digit recognizer in twenty lines:

MNIST For ML Beginners

This tutorial is intended for readers who are new to both machine learning and TensorFlow. If you already know what MNIST is, and what softmax (multinomial logistic) regression is, you might prefer this faster paced tutorial. Be sure to install TensorFlow before starting either tutorial.

When one learns how to program, there’s a tradition that the first thing you do is print “Hello World.” Just like programming has Hello World, machine learning has MNIST.

MNIST is a simple computer vision dataset. It consists of images of handwritten digits like these:

It also includes labels for each image, telling us which digit it is. For example, the labels for the above images are 5, 0, 4, and 1.

In this tutorial, we’re going to train a model to look at images and predict what digits they are. Our goal isn’t to train a really elaborate model that achieves state-of-the-art performance — although we’ll give you code to do that later! — but rather to dip a toe into using TensorFlow. As such, we’re going to start with a very simple model, called a Softmax Regression.

The actual code for this tutorial is very short, and all the interesting stuff happens in just three lines. However, it is very important to understand the ideas behind it: both how TensorFlow works and the core machine learning concepts. Because of this, we are going to very carefully work through the code.

…… from 《W!o+ 的《小伶鼬工坊演義》︰巴蛇食象》
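 

For the record, those "twenty lines" amount to roughly the following, a minimal sketch in the TensorFlow 1.x style of the tutorial quoted above (it assumes the tutorial's bundled input_data helper can fetch MNIST); nothing a GPU-less Pi should be asked to train in a hurry:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download and unpack MNIST, with labels encoded as one-hot vectors.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Softmax regression: y = softmax(Wx + b) over the 784 flattened pixels.
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss against the true labels, minimized by gradient descent.
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Accuracy: fraction of test images whose arg-max prediction matches the label.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))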

 

The Raspberry Pi's CPU is weak; without a GPU to help, it can hardly go surfing in the tensor flow of TensorFlow. Even on a Pi 3B it is, for now, wiser to sip tea than to force down coffee!?

So let us make the acquaintance of this "Caffe":

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!

Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag: train on a GPU machine, then deploy to commodity clusters or mobile devices.

Extensible code fosters active development. In Caffe's first year, it was forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That’s 1 ms/image for inference and 4 ms/image for learning. We believe that Caffe is the fastest convnet implementation available.

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and Github.

* With the ILSVRC2012-winning SuperVision model and caching IO. Consult performance details.


─── Excerpted from 《W!o+ 的《小伶鼬工坊演義》︰神經網絡【FFT】七》
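 

That "single flag" between CPU and GPU shows up in Caffe's Python bindings too; a minimal sketch, assuming pycaffe is importable and a trained model is at hand (the two file names below are placeholders):

import caffe

caffe.set_mode_cpu()        # everything runs on the CPU ...
# caffe.set_mode_gpu()      # ... or flip this one switch on a CUDA machine
# caffe.set_device(0)       # and pick which GPU to use

# Load a trained network for inference; the file names are hypothetical.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)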

 

And what of OpenCV?! Just look at the output below

pi@raspberrypi:~ $ apt-cache show python3-opencv
N: Unable to locate package python3-opencv
E: No packages found
pi@raspberrypi:~ $ apt-cache show python-opencv
Package: python-opencv
Source: opencv
Version: 2.4.9.1+dfsg1-2
Architecture: armhf
Maintainer: Debian Science Team <debian-science-maintainers@lists.alioth.debian.org>

From that output one may infer that the author would, of course, be tasting a fresh brew of his own!!

pi@raspberrypi:~ $ ipython3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import cv2

In [2]: print (cv2.__version__)
3.3.0-dev

In [3]:
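Beyond the bare version string, cv2 can also report exactly how this home-compiled build was configured, which is handy for confirming that NEON and any extra modules really made it in:

import cv2
# Dump the CMake configuration the library was built with:
# compiler flags, enabled modules, NEON/VFPV3, Python bindings, etc.
print(cv2.getBuildInformation())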

※ Notes:

‧ For the build procedure, see 《GoPiGo 小汽車︰朝向目標前進《二》

‧ Roughly a 32 GB SD card is needed.

‧ 1 GB of swap space, configured as shown below.

pi@raspberrypi:~ $ more /etc/dphys-swapfile
# /etc/dphys-swapfile - user settings for dphys-swapfile package
# author Neil Franklin, last modification 2010.05.05
# copyright ETH Zuerich Physics Departement
#   use under either modified/non-advertising BSD or GPL license

# this file is sourced with . so full normal sh syntax applies

# the default settings are added as commented out CONF_*=* lines

# where we want the swapfile to be, this is the default
#CONF_SWAPFILE=/var/swap

# set size to absolute value, leaving empty (default) then uses computed value
#   you most likely don't want this, unless you have an special disk situation
CONF_SWAPSIZE=1000

# set size to computed value, this times RAM size, dynamically adapts,
#   guarantees that there is enough swap without wasting disk space on excess
#CONF_SWAPFACTOR=2

# restrict size (computed and absolute!) to maximally this limit
#   can be set to empty for no limit, but beware of filled partitions!
#   this is/was a (outdated?) 32bit kernel limit (in MBytes), do not overrun it
#   but is also sensible on 64bit to prevent filling /var or even / partition
#CONF_MAXSWAP=2048
pi@raspberrypi:~ $
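After raising CONF_SWAPSIZE, the swap file has to be torn down and rebuilt for the new size to take effect; a minimal sketch using the dphys-swapfile subcommands (restarting the service would do equally well):

pi@raspberrypi:~ $ sudo dphys-swapfile swapoff
pi@raspberrypi:~ $ sudo dphys-swapfile setup
pi@raspberrypi:~ $ sudo dphys-swapfile swapon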

 

How, then, to verify the build?? Rather than replaying the old tune of face recognition

Face Recognition with OpenCV

 

it seems better to turn the lens toward fresh photos of everyday life◎

High-dynamic-range imaging

High Dynamic Range (HDR)

Goal

In this chapter, we will

  • Learn how to generate and display an HDR image from an exposure sequence.
  • Use exposure fusion to merge an exposure sequence.

Theory

High-dynamic-range imaging (HDRI or HDR) is a technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. While the human eye can adjust to a wide range of light conditions, most imaging devices use 8 bits per channel, so we are limited to only 256 levels. When we take photographs of a real-world scene, bright regions may be overexposed while dark ones may be underexposed, so we cannot capture all details with a single exposure. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range.

There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera's response function, and there are algorithms to estimate it. After the HDR image has been merged, it has to be converted back to 8 bits to be viewed on ordinary displays. This process is called tonemapping. Additional complexities arise when objects in the scene or the camera move between shots, since images with different exposures must be registered and aligned.

In this tutorial we show two algorithms (Debevec, Robertson) that generate and display an HDR image from an exposure sequence, and demonstrate an alternative approach called exposure fusion (Mertens) that produces a low-dynamic-range image and does not need the exposure-time data. We also estimate the camera response function (CRF), which is of great value for many computer vision algorithms. Each step of the HDR pipeline can be implemented with different algorithms and parameters, so take a look at the reference manual to see them all.

Exposure sequence HDR

In this tutorial we will look at the following scene, where we have 4 exposure images with exposure times of 15, 2.5, 1/4, and 1/30 seconds. (You can download the images from Wikipedia.)

exposures.jpg

 

pi@raspberrypi:~/hdr $ ipython3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import cv2

In [2]: import numpy as np

In [3]: img_fn = ["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"]

In [4]: img_list = [cv2.imread(fn) for fn in img_fn]

In [5]: exposure_times = np.array([15.0, 2.5, 0.25, 0.0333], dtype=np.float32)

In [6]: merge_debvec = cv2.createMergeDebevec()

In [7]: hdr_debvec = merge_debvec.process(img_list, times=exposure_times.copy())

In [8]: merge_robertson = cv2.createMergeRobertson()

In [9]: hdr_robertson = merge_robertson.process(img_list, times=exposure_times.copy())

In [10]: tonemap1 = cv2.createTonemapDurand(gamma=2.2)

In [11]: res_debvec = tonemap1.process(hdr_debvec.copy())

In [12]: tonemap2 = cv2.createTonemapDurand(gamma=1.3)

In [13]: res_robertson = tonemap2.process(hdr_robertson.copy())

In [14]: merge_mertens = cv2.createMergeMertens()

In [15]: res_mertens = merge_mertens.process(img_list)

In [16]: res_debvec_8bit = np.clip(res_debvec*255, 0, 255).astype('uint8')

In [17]: res_robertson_8bit = np.clip(res_robertson*255, 0, 255).astype('uint8')

In [18]: res_mertens_8bit = np.clip(res_mertens*255, 0, 255).astype('uint8')

In [19]: cv2.imwrite("ldr_debvec.jpg", res_debvec_8bit)
Out[19]: True

In [20]: cv2.imwrite("ldr_robertson.jpg", res_robertson_8bit)
Out[20]: True

In [21]: cv2.imwrite("fusion_mertens.jpg", res_mertens_8bit)
Out[21]: True

In [22]: cal_debvec = cv2.createCalibrateDebevec()

In [23]: crf_debvec = cal_debvec.process(img_list, times=exposure_times)

In [24]: hdr_debvec = merge_debvec.process(img_list, times=exposure_times.copy(), response=crf_debvec.copy())

In [25]: cal_robertson = cv2.createCalibrateRobertson()

In [26]: crf_robertson = cal_robertson.process(img_list, times=exposure_times)

In [27]: hdr_robertson = merge_robertson.process(img_list, times=exposure_times.copy(), response=crf_robertson.copy())

In [28]:

 

ldr_debvec.jpg

 

ldr_robertson.jpg

 

fusion_mertens.jpg
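 

To eyeball the three results side by side on the Pi (assuming a desktop session, since HighGUI needs a display), a small sketch:

import cv2

# Open one window per result file written above; press any key to close them.
for fn in ["ldr_debvec.jpg", "ldr_robertson.jpg", "fusion_mertens.jpg"]:
    cv2.imshow(fn, cv2.imread(fn))
cv2.waitKey(0)
cv2.destroyAllWindows()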