Rock It 《Armbian》 10

Before closing out the Armbian chapters, let's say a little about the ROCK64 GPIO, so that this development platform gets a complete introduction.

From the bus documentation,

ROCK64 Pi-2 and Pi-P5+ Bus

one can see that its design follows the Raspberry Pi's GPIO conventions as closely as possible.

This is even clearer from the rewrite of the RPi.GPIO library:

/Rock64-R64.GPIO

Python GPIO library for the Rock64 SBC (RPi.GPIO clone)

Python Libraries and Scripts

R64.GPIO
A re-implementation of the RPi.GPIO library for the Rock64. Currently under development.
See the wiki for documentation on Functions and GPIO Modes.

R64-GPIO-test.py
A simple test script. Outputs a list of internal vars, sets the GPIO mode to “BOARD”, sets up a GPIO output (blinks an LED if connected to pin 16), sets up a GPIO input (pulls up and reports the state of pin 18), then cleans up all GPIO exports and exits.

Library Installation and Usage:

Importing R64.GPIO
Below is the recommended method for importing this library into your project. For alternate methods, see the Installation and Usage page in the wiki.

  1. Download the entire “R64” folder from the repo.
  2. Place the “R64” folder in the same directory as the Python script you’re working with.
  3. Within your script, replace the traditional “import RPi.GPIO as GPIO” line with “import R64.GPIO as GPIO”.

Once imported, syntax for implemented functions should be identical to RPi.GPIO.
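Under the hood, a clone like R64.GPIO drives the pins through the Linux sysfs GPIO interface (hence the test script's clean-up of "GPIO exports" on exit). The sketch below illustrates the pin-number translation such a library performs; note the BOARD-to-SoC numbers here are made-up placeholders, not the real ROCK64 pin table, which lives in the project wiki.

```python
# Sketch of how an RPi.GPIO-style clone can map header pins to sysfs paths.
# NOTE: the BOARD-pin -> SoC-GPIO numbers below are illustrative placeholders;
# consult the R64.GPIO wiki for the actual ROCK64 mapping.
BOARD_TO_SOC = {16: 101, 18: 102}

def sysfs_value_path(pin, mode="BOARD"):
    """Path a sysfs-based GPIO library would write '1'/'0' to for a pin."""
    soc_gpio = BOARD_TO_SOC[pin] if mode == "BOARD" else pin
    return "/sys/class/gpio/gpio{}/value".format(soc_gpio)

print(sysfs_value_path(16))           # BOARD pin 16 -> placeholder SoC GPIO 101
print(sysfs_value_path(27, "ROCK"))   # ROCK mode uses the raw SoC number
```

In ROCK mode the raw SoC GPIO number is used directly, which is presumably why the library exposes GPIO.ROCK alongside GPIO.BOARD and GPIO.BCM in the variable dump below.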

 

※ Reference:

rock64@rock64:~/Rock64-R64.GPIO$ sudo python3 R64-GPIO-test.py 
[sudo] password for rock64: 
Testing R64.GPIO Module...

Module Variables:
Name           Value
----           -----
GPIO.ROCK      ROCK
GPIO.BOARD     BOARD
GPIO.BCM       BCM
GPIO.OUT       out
GPIO.IN        in
GPIO.HIGH      1
GPIO.LOW       0
GPIO.PUD_UP    0
GPIO.PUD_DOWN  1
GPIO.VERSION   0.6.3
GPIO.RPI_INFO  {'TYPE': 'Pi 3 Model B', 'MANUFACTURER': 'Embest', 'RAM': '1024M', 'REVISION': 'a22082', 'PROCESSOR': 'BCM2837', 'P1_REVISION': 3}

Testing GPIO Input/Output:
Output State : 1
Input State  : 1

Waiting 3 seconds for interrupt...
Timeout!

Testing PWM Output - DutyCycle - High Precision:
60Hz at 50% duty cycle for 1 second
60Hz at 25% duty cycle for 1 second
60Hz at 10% duty cycle for 1 second
60Hz at  1% duty cycle for 1 second

Testing PWM Output - DutyCycle - Low Precision:
60Hz at 50% duty cycle for 1 second
60Hz at 25% duty cycle for 1 second
60Hz at 10% duty cycle for 1 second
60Hz at  1% duty cycle for 1 second

Testing PWM Output - Frequency - Low Precision:
60Hz at 50% duty cycle for 1 second
30Hz at 50% duty cycle for 1 second
20Hz at 50% duty cycle for 1 second
10Hz at 50% duty cycle for 1 second

Test Complete
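The PWM figures in the log above come down to simple period arithmetic: a 60 Hz signal has a 1/60 s period, and the duty cycle fixes how much of that period is spent high. A sketch of the arithmetic, assuming software (bit-banged) PWM; this is not R64.GPIO's actual implementation:

```python
def pwm_cycle(freq_hz, duty_pct):
    """Split one PWM period into (on_seconds, off_seconds)."""
    period = 1.0 / freq_hz
    on = period * (duty_pct / 100.0)
    return on, period - on

# 60 Hz at 50% duty cycle: 1/60 s period, half high, half low.
on, off = pwm_cycle(60, 50)
print(round(on * 1000, 3), "ms on,", round(off * 1000, 3), "ms off")
```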

 

Anyone interested in putting it to use would do well to first read the

Rock64 single-board computer

This is a cousin of the Pine A64 board; it is made by the same people, and like the Pine A64, it has a 4-core, 64-bit processor. There are 3 variants available, with 1, 2 or 4 GB of memory. I have the ones with 4 GB, identified as ROCK64_V2.0 2017-0713 written on the circuit board, right above the location of the Rockchip RK3328.

This is where OS-developments are happening: https://github.com/ayufan-rock64/linux-build/releases. Kernel 4.4.70 is what is being used here at present.

There is quite a lot of development going on, so this info may change frequently as new versions come into existence.

 

write-up; that makes for a good start ☆

Rock It 《Armbian》 9.3

Hence, just for being "compact yet complete", we present Michael Nielsen's

Neural Networks and Deep Learning text:

Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.

Automatically learning from data sounds promising. However, until 2006 we didn’t know how to train neural networks to surpass more traditional approaches, except for a few specialized problems. What changed in 2006 was the discovery of techniques for learning in so-called deep neural networks. These techniques are now known as deep learning. They’ve been developed further, and today deep neural networks and deep learning achieve outstanding performance on many important problems in computer vision, speech recognition, and natural language processing. They’re being deployed on a large scale by companies such as Google, Microsoft, and Facebook.

The purpose of this book is to help you master the core concepts of neural networks, including modern techniques for deep learning. After working through the book you will have written code that uses neural networks and deep learning to solve complex pattern recognition problems. And you will have a foundation to use neural networks and deep learning to attack problems of your own devising.

……

It’s rare for a book to aim to be both principle-oriented and hands-on. But I believe you’ll learn best if we build out the fundamental ideas of neural networks. We’ll develop living code, not just abstract theory, code which you can explore and extend. This way you’ll understand the fundamentals, both in theory and practice, and be well set to add further to your knowledge.

and leave the weighty "unpublished" magnum opus,

Deep Learning

An MIT Press book in preparation

Ian Goodfellow, Yoshua Bengio and Aaron Courville

The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The book will be available for sale soon, and will remain available online for free.

Citing the book in preparation

To cite this book in preparation, please use this bibtex entry:

@unpublished{Goodfellow-et-al-2016-Book,
    title={Deep Learning},
    author={Ian Goodfellow and Yoshua Bengio and Aaron Courville},
    note={Book in preparation for MIT Press},
    url={http://www.deeplearningbook.org},
    year={2016}
}

───

for those interested to enjoy for themselves??!!

─ from “W!O+ 的《小伶鼬工坊演義》: Neural Networks and Deep Learning (Introduction)”

 

For a moment, Adam Geitgey's succinct "face detection" notes proved captivating:

 

To see how its "face recognition" results turn out:

rock64@rock64:~/face_recognition/examples$ python3 facerec_from_video_file.py
Writing frame 1 / 2356
Writing frame 2 / 2356
Writing frame 3 / 2356
Writing frame 4 / 2356
Writing frame 5 / 2356
Writing frame 6 / 2356
...

rock64@rock64:~/face_recognition/examples$ ffplay output.avi
ffplay version 3.2.12-1~deb9u1 Copyright (c) 2003-2018 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --prefix=/usr --extra-version='1~deb9u1' --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
libGL error: failed to authenticate magic 1
libGL error: failed to load driver: i965
Input #0, avi, from 'output.avi':
  Metadata:
    encoder         : Lavf57.56.101
  Duration: 00:01:18.61, start: 0.000000, bitrate: 1476 kb/s
    Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 1471 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn, 2997 tbc
  21.16 M-V:  0.134 fd= 468 aq=    0KB vq=  217KB sq=    0B f=0/0

 

 

It seems "artificial intelligence" has truly arrived before our eyes ☆

And so, following the index:

Articles and Guides that cover face_recognition

How Face Recognition Works

If you want to learn how face location and recognition work instead of depending on a black box library, read my article.

Caveats

  • The face recognition model is trained on adults and does not work very well on children. It tends to mix up children quite easily using the default comparison threshold of 0.6.
  • Accuracy may vary between ethnic groups. Please see this wiki page for more details.

 

We enjoyed quite the "intellectual journey" ☺

Machine Learning is Fun!

The world’s easiest introduction to Machine Learning

Update: This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8! You can also read this article in 日本語, Português, Português (alternate), Türkçe, Français, 한국어, العَرَبِيَّة‎‎, Español (México), Español (España), Polski, Italiano, 普通话, Русский, 한국어, Tiếng Việt or فارسی.

Giant update: I’ve written a new book based on these articles! It not only expands and updates all my articles, but it has tons of brand new content and lots of hands-on coding projects. Check it out now!

Have you heard people talking about machine learning but only have a fuzzy idea of what that means? Are you tired of nodding your way through conversations with co-workers? Let’s change that!


This guide is for anyone who is curious about machine learning but has no idea where to start. I imagine there are a lot of people who tried reading the wikipedia article, got frustrated and gave up wishing someone would just give them a high-level explanation. That’s what this is.

The goal is to be accessible to anyone — which means that there’s a lot of generalizations. But who cares? If this gets anyone more interested in ML, then mission accomplished.

Rock It 《Armbian》 9.2

Seeing that the face-recognition examples include a "benchmark" test program,

face_recognition/examples/benchmark.py

why not see how the ROCK64 fares:

rock64@rock64:~/face_recognition/examples$ python3 benchmark.py

Benchmarks (Note: All benchmarks are only using a single CPU core)

Timings at 240p:
 - Face locations: 0.5073s (1.97 fps)
 - Face landmarks: 0.0152s (65.92 fps)
 - Encode face (inc. landmarks): 1.1598s (0.86 fps)
 - End-to-end: 1.6672s (0.60 fps)

Timings at 480p:
 - Face locations: 1.9778s (0.51 fps)
 - Face landmarks: 0.0156s (63.96 fps)
 - Encode face (inc. landmarks): 1.1604s (0.86 fps)
 - End-to-end: 3.1380s (0.32 fps)

Timings at 720p:
 - Face locations: 4.5431s (0.22 fps)
 - Face landmarks: 0.0159s (63.02 fps)
 - Encode face (inc. landmarks): 1.1630s (0.86 fps)
 - End-to-end: 5.7056s (0.18 fps)

Timings at 1080p:
 - Face locations: 10.4224s (0.10 fps)
 - Face landmarks: 0.0161s (62.19 fps)
 - Encode face (inc. landmarks): 1.1640s (0.86 fps)
 - End-to-end: 11.5861s (0.09 fps)

Which made me wonder: why use only one CPU core?

Recalling the face_recognition command-line tool:

rock64@rock64:~$ face_recognition --help
Usage: face_recognition [OPTIONS] KNOWN_PEOPLE_FOLDER IMAGE_TO_CHECK

Options:
  --cpus INTEGER           number of CPU cores to use in parallel (can speed
                           up processing lots of images). -1 means "use all in
                           system"
  --tolerance FLOAT        Tolerance for face comparisons. Default is 0.6.
                           Lower this if you get multiple matches for the same
                           person.
  --show-distance BOOLEAN  Output face distance. Useful for tweaking tolerance
                           setting.
  --help                   Show this message and exit.
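The --tolerance option thresholds the Euclidean distance between face encodings: a pair counts as the same person when the distance does not exceed it. A sketch of that decision rule, with a helper name of our own rather than the library's:

```python
def is_match(face_distance, tolerance=0.6):
    """Smaller distances mean more similar faces; 0.6 is the default cut-off."""
    return face_distance <= tolerance

print(is_match(0.45))  # True  - close enough to count as the same person
print(is_match(0.72))  # False - too far apart
```

Lowering the tolerance, as the help text suggests, simply tightens this cut-off so fewer pairs qualify as matches.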

 

Can't one pass a parameter for the number of CPUs?

※ Note:

Usage

Command-Line Interface

When you install face_recognition, you get two simple command-line programs:

  • face_recognition – Recognize faces in a photograph or folder full of photographs.
  • face_detection – Find faces in a photograph or folder full of photographs.

face_recognition command line tool

The face_recognition command lets you recognize faces in a photograph or folder full of photographs.

First, you need to provide a folder with one picture of each person you already know. There should be one image file for each person with the files named according to who is in the picture:

known

Next, you need a second folder with the files you want to identify:

unknown

Then you simply run the command face_recognition, passing in the folder of known people and the folder (or single image) with unknown people, and it tells you who is in each image:

$ face_recognition ./pictures_of_people_i_know/ ./unknown_pictures/
/unknown_pictures/unknown.jpg,Barack Obama
/face_recognition_test/unknown_pictures/unknown.jpg,unknown_person

……

Speeding up Face Recognition

Face recognition can be done in parallel if you have a computer with multiple CPU cores. For example, if your system has 4 CPU cores, you can process about 4 times as many images in the same amount of time by using all your CPU cores in parallel.

If you are using Python 3.4 or newer, pass in a --cpus <number_of_cpu_cores_to_use> parameter:

$ face_recognition --cpus 4 ./pictures_of_people_i_know/ ./unknown_pictures/

───

 

After reading the API documentation, it is confirmed that no such option exists there!

Python Module

You can import the face_recognition module and then easily manipulate
faces with just a couple of lines of code. It’s super easy!

API Docs: https://face-recognition.readthedocs.io.

 

So there is nothing for it but to examine the source code:

face_recognition_cli.py

def process_images_in_process_pool(images_to_check, known_names, known_face_encodings, number_of_cpus, tolerance, show_distance):
    if number_of_cpus == -1:
        processes = None
    else:
        processes = number_of_cpus

    # macOS will crash due to a bug in libdispatch if you don't use 'forkserver'
    context = multiprocessing
    if "forkserver" in multiprocessing.get_all_start_methods():
        context = multiprocessing.get_context("forkserver")

    pool = context.Pool(processes=processes)

    function_parameters = zip(
        images_to_check,
        itertools.repeat(known_names),
        itertools.repeat(known_face_encodings),
        itertools.repeat(tolerance),
        itertools.repeat(show_distance)
    )

    pool.starmap(test_image, function_parameters)
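The pattern above can be exercised with a stdlib-only stand-in for the per-image worker; the worker body and data below are illustrative, not face_recognition's real test_image:

```python
import itertools
import multiprocessing

def check_image(image, known_names, tolerance):
    # Stand-in for face_recognition's per-image test_image() worker.
    return "{},{},{}".format(image, known_names[0], tolerance)

def run_parallel(images, number_of_cpus=-1, start_method="fork"):
    # -1 means "use every core": Pool(processes=None) starts os.cpu_count() workers.
    processes = None if number_of_cpus == -1 else number_of_cpus
    # face_recognition_cli.py prefers "forkserver" when available; this sketch
    # defaults to plain "fork" (the Unix default) for simplicity.
    context = multiprocessing.get_context(start_method)
    with context.Pool(processes=processes) as pool:
        params = zip(images, itertools.repeat(["Barack Obama"]), itertools.repeat(0.6))
        return pool.starmap(check_image, params)

if __name__ == "__main__":
    print(run_parallel(["unknown.jpg"], number_of_cpus=2))
```

As in the original, starmap unpacks each zipped tuple into the worker's arguments, and processes=None delegates the worker count to the number of available cores.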

 

So what it uses is Python 3's

17.2. multiprocessing — Process-based parallelism

17.2.1. Introduction

multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.

The multiprocessing module also introduces APIs which do not have analogs in the threading module. A prime example of this is the Pool object which offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism). The following example demonstrates the common practice of defining such functions in a module so that child processes can successfully import that module. This basic example of data parallelism using Pool,

rock64@rock64:~$ python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing import Pool
>>> def f(x):
...     return x*x
... 
>>> if __name__ == '__main__':
...     with Pool(4) as p:
...         print(p.map(f, [1, 2, 3]))
... 
[1, 4, 9]
>>>

will print to standard output

[1, 4, 9]

……

17.2.1.2. Contexts and start methods

Depending on the platform, multiprocessing supports three ways to start a process. These start methods are

spawn

The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object’s run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.

Available on Unix and Windows. The default on Windows.

fork

The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.

Available on Unix only. The default on Unix.

forkserver

When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.

Available on Unix platforms which support passing file descriptors over Unix pipes.

Changed in version 3.4: spawn added on all unix platforms, and forkserver added for some unix platforms. Child processes no longer inherit all of the parent’s inheritable handles on Windows.

On Unix using the spawn or forkserver start methods will also start a semaphore tracker process which tracks the unlinked named semaphores created by processes of the program. When all processes have exited the semaphore tracker unlinks any remaining semaphores. Usually there should be none, but if a process was killed by a signal there may be some “leaked” semaphores. (Unlinking the named semaphores is a serious matter since the system allows only a limited number, and they will not be automatically unlinked until the next reboot.)
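The preference in face_recognition_cli.py for forkserver when the platform offers it is a direct use of this API. A small stdlib sketch of querying and selecting start methods:

```python
import multiprocessing

# Which start methods does this platform support?
methods = multiprocessing.get_all_start_methods()
print(methods)  # on Linux this includes 'fork', 'spawn' and 'forkserver'

# Pick a method for one context without changing the global default;
# face_recognition_cli.py does exactly this with "forkserver".
chosen = "forkserver" if "forkserver" in methods else None
ctx = multiprocessing.get_context(chosen)
print(ctx.get_start_method())
```

Using get_context rather than set_start_method keeps the choice local to one Pool, so library code does not disturb the application's global setting.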

 

library!

Rock It 《Armbian》 9.1

Learning a library through its examples is the classic, time-honoured approach!

Help/Info

 

But over time that may grow tedious, so one can supplement it with topics one finds personally interesting, say:

How does one apply "digital make-up"?

Find and manipulate facial features in pictures

Get the locations and outlines of each person’s eyes, nose, mouth and chin.

rock64@rock64:~/face_recognition/examples$ python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import face_recognition
>>> image = face_recognition.load_image_file("biden.jpg")
>>> face_landmarks_list = face_recognition.face_landmarks(image)
>>> face_landmarks_list
[{'right_eye': [(629, 348), (647, 342), (661, 346), (672, 357), (659, 358), (644, 354)], 'nose_bridge': [(601, 328), (599, 352), (598, 375), (596, 400)], 'nose_tip': [(555, 414), (570, 421), (586, 428), (601, 428), (614, 426)], 'left_eyebrow': [(488, 294), (509, 279), (535, 278), (561, 283), (584, 296)], 'left_eye': [(512, 320), (528, 316), (544, 319), (557, 331), (541, 330), (525, 327)], 'top_lip': [(519, 459), (545, 455), (566, 456), (580, 462), (595, 462), (610, 470), (627, 480), (620, 477), (593, 470), (579, 468), (564, 463), (527, 459)], 'right_eyebrow': [(622, 307), (646, 305), (670, 309), (691, 321), (698, 344)], 'bottom_lip': [(627, 480), (606, 482), (589, 479), (575, 477), (560, 473), (540, 468), (519, 459), (527, 459), (563, 461), (577, 466), (592, 468), (620, 477)], 'chin': [(429, 328), (426, 368), (424, 408), (425, 447), (437, 484), (460, 515), (490, 538), (524, 556), (562, 564), (600, 566), (630, 554), (655, 533), (672, 507), (684, 476), (694, 445), (702, 413), (707, 382)]}]
>>>
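The returned value is plain Python data, one dict per face mapping feature names to lists of (x, y) tuples, so post-processing needs no special API. For example, averaging a feature's points gives a rough centre (a helper of our own, not part of face_recognition):

```python
def feature_center(points):
    """Mean (x, y) of a landmark list such as face_landmarks_list[0]['left_eye']."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# The 'left_eye' points from the session output above.
left_eye = [(512, 320), (528, 316), (544, 319), (557, 331), (541, 330), (525, 327)]
print(feature_center(left_eye))
```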

 

Finding facial features is super useful for lots of important stuff. But you can also use it for really stupid stuff like applying digital make-up (think ‘Meitu’):

 

digital_makeup.py

from PIL import Image, ImageDraw
import face_recognition

# Load the jpg file into a numpy array
image = face_recognition.load_image_file("biden.jpg")

# Find all facial features in all the faces in the image
face_landmarks_list = face_recognition.face_landmarks(image)

for face_landmarks in face_landmarks_list:
    pil_image = Image.fromarray(image)
    d = ImageDraw.Draw(pil_image, 'RGBA')

    # Make the eyebrows into a nightmare
    d.polygon(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 128))
    d.polygon(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 128))
    d.line(face_landmarks['left_eyebrow'], fill=(68, 54, 39, 150), width=5)
    d.line(face_landmarks['right_eyebrow'], fill=(68, 54, 39, 150), width=5)

    # Gloss the lips
    d.polygon(face_landmarks['top_lip'], fill=(150, 0, 0, 128))
    d.polygon(face_landmarks['bottom_lip'], fill=(150, 0, 0, 128))
    d.line(face_landmarks['top_lip'], fill=(150, 0, 0, 64), width=8)
    d.line(face_landmarks['bottom_lip'], fill=(150, 0, 0, 64), width=8)

    # Sparkle the eyes
    d.polygon(face_landmarks['left_eye'], fill=(255, 255, 255, 30))
    d.polygon(face_landmarks['right_eye'], fill=(255, 255, 255, 30))

    # Apply some eyeliner
    d.line(face_landmarks['left_eye'] + [face_landmarks['left_eye'][0]], fill=(0, 0, 0, 110), width=6)
    d.line(face_landmarks['right_eye'] + [face_landmarks['right_eye'][0]], fill=(0, 0, 0, 110), width=6)

    pil_image.show()

 

     

 

Through understanding the thoughts behind someone else's "project":

/face_recognition

The world’s simplest facial recognition api for Python and the command line

Face Recognition

You can also read a translated version of this file in Chinese 简体中文版.

Recognize and manipulate faces from Python or from the command line with the world’s simplest face recognition library.

Built using dlib’s state-of-the-art face recognition built with deep learning. The model has an accuracy of 99.38% on the Labeled Faces in the Wild benchmark.

This also provides a simple face_recognition command line tool that lets you do face recognition on a folder of images from the command line!

 

and then digging into the architectural logic of its "source code":

face_recognition/face_recognition/api.py

# -*- coding: utf-8 -*-

import PIL.Image
import dlib
import numpy as np

try:
    import face_recognition_models
except Exception:
    print("Please install `face_recognition_models` with this command before using `face_recognition`:\n")
    print("pip install git+https://github.com/ageitgey/face_recognition_models")
    quit()

face_detector = dlib.get_frontal_face_detector()

predictor_68_point_model = face_recognition_models.pose_predictor_model_location()
pose_predictor_68_point = dlib.shape_predictor(predictor_68_point_model)

predictor_5_point_model = face_recognition_models.pose_predictor_five_point_model_location()
pose_predictor_5_point = dlib.shape_predictor(predictor_5_point_model)

cnn_face_detection_model = face_recognition_models.cnn_face_detector_model_location()
cnn_face_detector = dlib.cnn_face_detection_model_v1(cnn_face_detection_model)

face_recognition_model = face_recognition_models.face_recognition_model_location()
face_encoder = dlib.face_recognition_model_v1(face_recognition_model)
...
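The guarded import at the top is a reusable pattern: try the import and, on failure, print an actionable install hint instead of a bare traceback. A stdlib-only sketch of the same idea (the function and module names here are ours):

```python
import importlib

def import_or_hint(module_name, install_hint):
    """Import a module, or return None after printing how to install it."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        print("Please install `{}` first:\n    {}".format(module_name, install_hint))
        return None

math_mod = import_or_hint("math", "(part of the standard library)")
missing = import_or_hint("no_such_module_xyz", "pip install no-such-module-xyz")
```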

 

one can learn all the more happily ☆

Rock It 《Armbian》 8


 

Who might this person be?

People willing to make "complete and precise" documentation and "high-quality, verified" portable code their major features are rare indeed!

Major Features

  • Documentation
    • Unlike a lot of open source projects, this one provides complete and precise documentation for every class and function. There are also debugging modes that check the documented preconditions for functions. When this is enabled it will catch the vast majority of bugs caused by calling functions incorrectly or using objects in an incorrect manner.
    • Lots of example programs are provided
    • I consider the documentation to be the most important part of the library. So if you find anything that isn’t documented, isn’t clear, or has out of date documentation, tell me and I will fix it.
  • High Quality Portable Code
    • Good unit test coverage. The ratio of unit test lines of code to library lines of code is about 1 to 4.
    • The library is tested regularly on MS Windows, Linux, and Mac OS X systems. However, it should work on any POSIX system and has been used on Solaris, HPUX, and the BSDs.
    • No other packages are required to use the library. Only APIs that are provided by an out of the box OS are needed.
    • There is no installation or configure step needed before you can use the library. See the How to compile page for details.
    • All operating system specific code is isolated inside the OS abstraction layers which are kept as small as possible. The rest of the library is either layered on top of the OS abstraction layers or is pure ISO standard C++.

 

Sixteen years have passed since then:

Overview

Dlib is a general purpose cross-platform open source software library written in the C++ programming language. Its design is heavily influenced by ideas from design by contract and component-based software engineering. This means it is, first and foremost, a collection of independent software components, each accompanied by extensive documentation and thorough debugging modes.

Davis King has been the primary author of dlib since development began in 2002. In that time dlib has grown to include a wide variety of tools. In particular, it now contains software components for dealing with networking, threads, graphical interfaces, complex data structures, linear algebra, statistical machine learning, image processing, data mining, XML and text parsing, numerical optimization, Bayesian networks, and numerous other tasks. In recent years, much of the development has been focused on creating a broad set of statistical machine learning tools. However, dlib remains a general purpose library and welcomes contributions of high quality software components useful in any domain.

Core to the development philosophy of dlib is a dedication to portability and ease of use. Therefore, all code in dlib is designed to be as portable as possible and similarly to not require a user to configure or install anything. To help achieve this, all platform specific code is confined inside the API wrappers. Everything else is either layered on top of those wrappers or is written in pure ISO standard C++. Currently the library is known to work on OS X, MS Windows, Linux, Solaris, the BSDs, and HP-UX. It should work on any POSIX platform but I haven’t had the opportunity to test it on any others (if you have access to other platforms and would like to help increase this list then let me know).

The rest of this page explains everything you need to know to get started using the library. It explains where to find the documentation for each object/function and how to interpret what you find there. For help compiling with dlib check out the how to compile page. Or if you are having trouble finding where a particular object’s documentation is located you may be able to find it by consulting the index.

The library is also covered by the very liberal Boost Software License so feel free to use it any way you like. However, if you use dlib in your research then please cite its Journal of Machine Learning Research paper when publishing.

Finally, I must give some credit to the Reusable Software Research Group at Ohio State since they taught me much of the software engineering techniques used in the creation of this library.

 

The blade must surely be sharp by now ☆

/dlib

A toolkit for making real world machine learning and data analysis applications in C++ http://dlib.net

dlib C++ library

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. See http://dlib.net for the main project documentation and API reference.

And so we gladly pass over the near to seek the far:

rock64@rock64:~$ sudo pip3 install dlib
[sudo] password for rock64:
Collecting dlib
  Downloading https://files.pythonhosted.org/packages/35/8d/e4ddf60452e2fb1ce3164f774e68968b3f110f1cb4cd353235d56875799e/dlib-19.16.0.tar.gz (3.3MB)
    100% |████████████████████████████████| 3.3MB 100kB/s
Building wheels for collected packages: dlib
  Running setup.py bdist_wheel for dlib ... done
  Stored in directory: /root/.cache/pip/wheels/ce/f9/bc/1c51cd0b40a2b5dfd46ab79a73832b41e7c3aa918a508154f0
Successfully built dlib
Installing collected packages: dlib
Successfully installed dlib-19.16.0

rock64@rock64:~$ python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlib
>>> dlib.__version__
'19.16.0'
>>>

 

And enjoy the fun of compiling it ourselves ☺

sudo pip3 install scikit-build

git clone https://github.com/davisking/dlib
cd dlib/
mkdir build
cd build
cmake ..
cmake --build .
cd ..
sudo python3 setup.py install

sudo pip3 install imutils
rock64@rock64:~$ python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39) 
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlib
>>> dlib.__version__
'19.16.99'
>>>

A small taste; how could that alone be convincing? Try

face_detector.py

#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
#   This example program shows how to find frontal human faces in an image.  In
#   particular, it shows how you can take a list of images from the command
#   line and display each on the screen with red boxes overlaid on each human
#   face.
#
#   The examples/faces folder contains some jpg images of people.  You can run
#   this program on them and see the detections by executing the
#   following command:
#       ./face_detector.py ../examples/faces/*.jpg
#
#   This face detector is made using the now classic Histogram of Oriented
#   Gradients (HOG) feature combined with a linear classifier, an image
#   pyramid, and sliding window detection scheme.  This type of object detector
#   is fairly general and capable of detecting many types of semi-rigid objects
#   in addition to human faces.  Therefore, if you are interested in making
#   your own object detectors then read the train_object_detector.py example
#   program.
#
#
# COMPILING/INSTALLING THE DLIB PYTHON INTERFACE
#   You can install dlib using the command:
#       pip install dlib
#
#   Alternatively, if you want to compile dlib yourself then go into the dlib
#   root folder and run:
#       python setup.py install
#
#   Compiling dlib should work on any operating system so long as you have
#   CMake installed.  On Ubuntu, this can be done easily by running the
#   command:
#       sudo apt-get install cmake
#
#   Also note that this example requires Numpy which can be installed
#   via the command:
#       pip install numpy

With just a few lines of code, it can frame Teresa Teng's (鄧麗君) face ☀

rock64@rock64:~/test$ python3 face_detector.py 260px-鄧麗君(Teresa_Teng).jpg
Processing file: 260px-鄧麗君(Teresa_Teng).jpg
Number of faces detected: 1
Detection 0: Left: 125 Top: 22 Right: 161 Bottom: 58
Hit enter to continue
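Each detection dlib prints is an axis-aligned box, so its size follows from the reported coordinates by simple subtraction (shown here as plain arithmetic rather than dlib's own rectangle methods):

```python
def box_size(left, top, right, bottom):
    """Width and height of a detection box as printed by face_detector.py."""
    return right - left, bottom - top

# Detection 0 above: Left: 125  Top: 22  Right: 161  Bottom: 58
print(box_size(125, 22, 161, 58))  # (36, 36)
```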