
GoPiGo Small Car: Advancing Toward the Target (Part 3)

Starting from a concept, 'background subtraction',

Background subtraction

Background subtraction, also known as foreground detection, is a technique in the fields of image processing and computer vision wherein an image’s foreground is extracted for further processing (object recognition etc.). Generally an image’s regions of interest are objects (humans, cars, text etc.) in its foreground. After the stage of image preprocessing (which may include image denoising, post processing like morphology etc.) object localisation is required which may make use of this technique.

Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale of the approach is to detect the moving objects from the difference between the current frame and a reference frame, often called the “background image” or “background model”. Background subtraction is mostly done when the image in question is part of a video stream. It provides important cues for numerous applications in computer vision, for example surveillance tracking or human pose estimation.

Background subtraction is generally based on a static background hypothesis which is often not applicable in real environments. With indoor scenes, reflections or animated images on screens lead to background changes. Similarly, due to wind, rain or illumination changes brought by weather, static background methods have difficulties with outdoor scenes. [1]

Conventional Approaches

A robust background subtraction algorithm should be able to handle lighting changes, repetitive motions from clutter and long-term scene changes.[2] The following analyses use a function V(x,y,t) to denote a video sequence, where t is the time dimension and x and y are the pixel location variables; e.g. V(1,2,3) is the pixel intensity at pixel location (1,2) of the image at t = 3 in the video sequence.

Using frame differencing

A motion detection algorithm begins with a segmentation step in which foreground or moving objects are separated from the background. The simplest way to implement this is to take an image as the background and compare the frame obtained at time t, denoted by I(t), with that background image, denoted by B. Using simple arithmetic, we can then segment out the objects with the image subtraction technique of computer vision: for each pixel in I(t), take the pixel value, denoted by P[I(t)], and subtract from it the value of the corresponding pixel at the same position in the background image, denoted by P[B].

In mathematical form, this is written as:

P[F(t)] = P[I(t)] - P[B]

The background is assumed to be the frame at time t. This difference image would only show some intensity for the pixel locations which have changed in the two frames. Though we have seemingly removed the background, this approach will only work for cases where all foreground pixels are moving and all background pixels are static.[2] [3] A threshold “Threshold” is put on this difference image to improve the subtraction (see Image thresholding).

|P[F(t)] - P[F(t+1)]| > \mathrm{Threshold}

This means that the difference image's pixel intensities are 'thresholded', or filtered, on the basis of the value of Threshold.[4] The accuracy of this approach depends on the speed of movement in the scene: faster movements may require higher thresholds.

……
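A minimal Python sketch of the frame-differencing idea quoted above, assuming OpenCV is installed and a camera is available at index 0 (the threshold value 25 and the window name are illustrative only):

import cv2

cap = cv2.VideoCapture(0)                 # open the default camera
ok, background = cap.read()               # use the first frame as the background B
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)                        # |P[I(t)] - P[B]|
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # apply Threshold
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(30) & 0xFF == ord('q'):                      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()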

 

and moving on to the building and use of the library:

How to Use Background Subtraction Methods

  • Background subtraction (BS) is a common and widely used technique for generating a foreground mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by using static cameras.
  • As the name suggests, BS calculates the foreground mask performing a subtraction between the current frame and a background model, containing the static part of the scene or, more in general, everything that can be considered as background given the characteristics of the observed scene.

    Background_Subtraction_Tutorial_Scheme.png
  • Background modeling consists of two main steps:

    1. Background Initialization;
    2. Background Update.

    In the first step, an initial model of the background is computed, while in the second step that model is updated in order to adapt to possible changes in the scene.

  • In this tutorial we will learn how to perform BS by using OpenCV. As input, we will use data coming from the publicly available data set Background Models Challenge (BMC) .

Goals

In this tutorial you will learn how to:

  1. Read data from videos by using cv::VideoCapture or image sequences by using cv::imread ;
  2. Create and update the background model by using cv::BackgroundSubtractor class;
  3. Get and show the foreground mask by using cv::imshow ;
  4. Save the output by using cv::imwrite to quantitatively evaluate the results.

……
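A minimal Python sketch of those four steps, assuming OpenCV 3.x with the cv2 bindings (the input file name and output pattern are illustrative, not taken from the BMC data set):

import cv2

capture = cv2.VideoCapture("input.avi")              # 1. read frames from a video
subtractor = cv2.createBackgroundSubtractorMOG2()    # 2. create the background model

frame_id = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                 # 2. update the model, get the mask
    cv2.imshow("frame", frame)                        # 3. show the current frame
    cv2.imshow("FG mask", fg_mask)                    # 3. show the foreground mask
    cv2.imwrite("mask_%06d.png" % frame_id, fg_mask)  # 4. save output for evaluation
    frame_id += 1
    if cv2.waitKey(30) & 0xFF == 27:                  # ESC to quit
        break

capture.release()
cv2.destroyAllWindows()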

 

It is hard to exhaust the possibilities of this one concept?? So take a ball of a distinctive colour, which Mr. Adrian Rosebrock handles with ease:

Ball Tracking with OpenCV

……

OpenCV Track Object Movement
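The heart of that approach is a colour mask in HSV space followed by contour detection. A rough Python sketch follows; the HSV bounds are placeholders to be tuned to the actual ball colour, not values taken from those posts:

import cv2
import numpy as np

lower = np.array([29, 86, 6])        # placeholder lower HSV bound
upper = np.array([64, 255, 255])     # placeholder upper HSV bound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)       # keep only pixels of the ball's colour
    mask = cv2.erode(mask, None, iterations=2)  # remove small speckles
    mask = cv2.dilate(mask, None, iterations=2)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        c = max(contours, key=cv2.contourArea)  # largest blob is assumed to be the ball
        (x, y), radius = cv2.minEnclosingCircle(c)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()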

 

※ Note:

imutils

A series of convenience functions to make basic image processing functions such as translation, rotation, resizing, skeletonization, and displaying Matplotlib images easier with OpenCV and both Python 2.7 and Python 3.

For more information, along with a detailed code review check out the following posts on the PyImageSearch.com blog:

Installation

Provided you already have NumPy, SciPy, Matplotlib, and OpenCV installed, the imutils package is completely pip-installable:

$ pip install imutils
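A small usage sketch of the convenience functions described above (imutils.resize and imutils.rotate are part of the package; the file name test.jpg is illustrative):

import cv2
import imutils

image = cv2.imread("test.jpg")             # load a test image with OpenCV
small = imutils.resize(image, width=320)   # resize while preserving the aspect ratio
turned = imutils.rotate(small, angle=45)   # rotate 45 degrees about the centre

cv2.imshow("resized", small)
cv2.imshow("rotated", turned)
cv2.waitKey(0)
cv2.destroyAllWindows()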

 

If we fit the small car with a 'laser pointer':

Laser diode

A laser diode (LD), also known as an injection laser diode (ILD), is an electrically pumped semiconductor laser in which the active laser medium is formed by a p-n junction of a semiconductor diode, similar to that found in a light-emitting diode.

The laser diode is the most common type of laser produced with a wide range of uses that include fiber optic communications, barcode readers, laser pointers, CD/DVD/Blu-ray Disc reading and recording, laser printing, laser scanning and increasingly directional lighting sources.

 

could we follow the same recipe?!

Not to mention the

Wii

The Wii is a home video game console released by Nintendo. Its development codename was “Revolution”, signifying “a revolution in video gaming”, and the model numbers of the Wii console and its peripherals all begin with “RVL”. Nintendo announced the official name on its website on April 28, 2006,[4] and the console went on sale on November 19, 2006. The Wii's main features include a previously unseen way of using its controller, purchasable downloadable games, lifestyle and information content, and networking functions and services.

“Wii” sounds like “we” and is pronounced similarly, emphasising that the console is suitable for all ages and lets the whole family enjoy it together. The “ii” in the name not only symbolises its uniquely designed controllers but also evokes the image of people gathering to play together.

Worldwide, the Wii has sold a cumulative total of 101.63 million consoles, with software sales of roughly 916 million copies.[5]

Its successor console, the Wii U, went on sale in the United States on November 18, 2012; Nintendo announced and exhibited it at the E3 games show held in Los Angeles on June 7, 2011.[6]

 

Wii controllers

 The nunchuk-style analog controller on the left connects to the Wii Remote
  • The Wii's standard controller is a remote-control-like pointing device.
  • An expansion port on the bottom allows the Nunchuk controller and the Wii Classic Controller to be attached by cable.
    • The original GameCube controllers and the Wii-specific DDR dance mat can be connected to the external ports on top of the console.

 

Why use that invisible infrared light anyway ☆

  • Sensor Bar
    • Cable length: 3.5 m
This provides the Wii's pointing function: infrared LEDs are mounted at both ends of the bar, and the CMOS image sensor built into the Wii Remote captures their light to obtain movement information such as the distance to the television and the remote's orientation. The maximum recognition distance is five metres; the recommended operating distance is 1 to 3 metres.
The bar connects to the Wii console by cable and is placed above or below the display.

 

 

 

 

 

 

 

 

 

GoPiGo Small Car: Advancing Toward the Target (Part 2)

Although we do not know why

OpenCV

OpenCV, whose full name is Open Source Computer Vision Library, is a cross-platform computer vision library. OpenCV was initiated and partly developed by Intel and is released under the BSD licence, so it can be used free of charge in both commercial and research fields. OpenCV can be used to develop real-time image processing, computer vision and pattern recognition programs. The library can also be accelerated with Intel's IPP.

History

The OpenCV project was launched by Intel in 1999, aimed at CPU-intensive tasks, as part of a programme that included work such as ray tracing and 3D display. The main early goals of OpenCV were:

  • To advance machine vision research by providing an open and optimised base library, rather than reinventing the wheel.
  • To provide a common infrastructure so that developers' code would be easier to read and to share, spreading knowledge.
  • To promote the development of commercial applications by providing a library under a licence that does not require the applications themselves to be open source or free.
  • OpenCV now also integrates support for CUDA.

The first preview version of OpenCV was shown publicly at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five beta versions followed. Version 1.0 was released in 2006.

The second major release, OpenCV 2.0, appeared in October 2009. Its main updates included a C++ interface, easier and more type-safe patterns, new functions, and optimisations of existing implementations (especially for multi-core). There is now an official release every six months,[1] and development is carried out by an independent team sponsored by a commercial company.

In August 2012, support for OpenCV was taken over by a non-profit organisation (OpenCV.org), which maintains a developer site[2] and a user site.[3]

 

it has now returned to its old home at 'Intel'?

OpenCV 3.2

Dear OpenCV users!

1 year after 3.1 release and after the OpenCV core team has moved back to Intel we are pleased to announce OpenCV 3.2 release, with tons of improvements and bug fixes. 969 patches have been merged and 478 issues (bugs & feature requests) have been closed.

Big thanks to everyone who participated! If you contributed something but your name is missing, please, let us know.

Merry Christmas and Happy New Year!

 

We are glad to see the new release brings many changes!

Changes

The detailed list of changes since 3.1 can be found at https://github.com/opencv/opencv/wiki/ChangeLog. Here is the short summary:

Results from 11 GSoC 2016 projects have been submitted to the library:

  • Ambroise Moreau (Delia Passalacqua) – sinusoidal patterns for structured light and phase unwrapping module
  • Alexander Bokov (Maksim Shabunin) – DIS optical flow (excellent dense optical flow algorithm that is both significantly better and significantly faster than Farneback’s algorithm – our baseline), and learning-based color constancy algorithms implementation
  • Tyan Vladimir (Antonella Cascitelli) – CNN based tracking algorithm (GOTURN)
  • Vladislav Samsonov (Ethan Rublee) – PCAFlow and Global Patch Collider algorithms implementation
  • João Cartucho (Vincent Rabaud) – Multi-language OpenCV Tutorials in Python, C++ and Java
  • Jiri Horner (Bo Li) – New camera model and parallel processing for stitching pipeline
  • Vitaliy Lyudvichenko (Anatoly Baksheev) – Optimizations and improvements of dnn module
  • Iric Wu (Vadim Pisarevsky) – Base64 and JSON support for file storage. Use names like “myfilestorage.xml?base64” when writing file storage to store big chunks of numerical data in base64-encoded form.
  • Edgar Riba (Manuele Tamburrano, Stefano Fabri) – tiny_dnn improvements and integration
  • Yida Wang (Manuele Tamburrano, Stefano Fabri) – Quantization and semantic saliency detection with tiny_dnn
  • Anguelos Nicolaou (Lluis Gomez) – Word-spotting CNN based algorithm

Big thanks to all the participants!

There have been many contributions besides GSoC:

  • Greatly improved and accelerated dnn module in opencv_contrib:
    • Many new layers, including deconvolution, LSTM etc.
    • Support for semantic segmentation and SSD networks with samples.
    • TensorFlow importer + sample that runs Inception net by Google.
  • More image formats and camera backends supported
  • Interactive camera calibration app
  • Multiple algorithms implemented in opencv_contrib
  • Supported latest OSes, including Ubuntu 16.04 LTS and OSX 10.12
  • Lots of optimizations for IA and ARM architectures using parallelism, vector instructions and new OpenCL kernels.
  • OpenCV now can use vendor-provided OpenVX and LAPACK/BLAS (including Intel MKL, Apple’s Accelerate, OpenBLAS and Atlas) for acceleration

 

How could we not give it a try?? So we attempt to compile it once again!!

# Python 2 and Python 3: two birds with one stone
sudo apt-get install build-essential git cmake pkg-config
sudo apt-get install libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python-dev
sudo pip install numpy
sudo pip-3.2 install numpy
git clone https://github.com/Itseez/opencv.git
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv
mkdir build
cd build/
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_EXAMPLES=ON ..
make -j4
sudo make install
sudo ldconfig

 

pi@raspberrypi:~ python
Python 2.7.9 (default, Sep 17 2016, 20:26:04) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print (cv2.__version__)
3.2.0-dev
>>>

pi@raspberrypi:~ python3
Python 3.4.2 (default, Oct 19 2014, 13:31:11) 
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print (cv2.__version__)
3.2.0-dev
>>>

 

You can see for yourself ☆☆

 

 

 

 

 

 

 

 

GoPiGo Small Car: Advancing Toward the Target (Part 1)

These days, 'computer vision' rules:

Computer vision

Computer vision is an interdisciplinary field that deals with how computers can be made for gaining high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.[1][2][3]

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.[4][5][6][7] Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.[8]

As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.

Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, object pose estimation, learning, indexing, motion estimation, and image restoration.

Definition

Computer vision is an interdisciplinary field that deals with how computers can be made for gaining high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.[1][2][3] “Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding.” [9] As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.

History

In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior.[10] In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it “describe what it saw”.[11][12]

What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.[10]

The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields.[13] By the 1990s, some of the previous research topics became more active than the others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering.[10]

Recent work has seen the resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks.[14][15]

Computer vision used for people-counting purposes in public places, malls, shopping centres

 

As a result, OpenCV has become required learning.

OpenCV is released under a BSD license and hence it’s free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. Written in optimized C/C++, the library can take advantage of multi-core processing. Enabled with OpenCL, it can take advantage of the hardware acceleration of the underlying heterogeneous compute platform.

Adopted all around the world, OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 14 million. Usage ranges from interactive art, to mines inspection, stitching maps on the web or through advanced robotics.

 

Not that it lacks 'documentation' and 'examples':

Quick Links

Online documentation

Tutorials

User Q&A forum

Report a bug

Build farm

Developer site

Wiki

 

It is just hard to master the great jumble of 'terms' and 'concepts' involved. Word has it that a Google search for

'OpenCV free book'

will turn up the old edition, published in 2008, of

'Learning OpenCV'

. Since its copyright status cannot be determined, readers should weigh for themselves whether to download it.

That book's

CHAPTER 10
Tracking and Motion

 

Motion analysis

Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are:

  • Egomotion – determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera.
  • Tracking – following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles or humans) in the image sequence.
  • Optical flow – to determine, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene.
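A rough Python sketch of the optical-flow idea in the last bullet, using dense Farneback flow between consecutive frames (assuming OpenCV; the file name video.avi is illustrative):

import cv2

cap = cv2.VideoCapture("video.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): the apparent motion of each pixel between the two frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean motion magnitude:", mag.mean())
    prev_gray = gray

cap.release()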

 

might just 'open the eyes' of the small car ☆

 

 

 

 

 

 

 

 

GoPiGo Small Car: Advancing Toward the Target (Part 0)

We start at 'zero' because this stands in for a preface. Anyone who has played with the 'GoPiGo' will know how hard it is to make it walk a straight 'catwalk' by the 'rules', never mind imitating the motion of the turtle of turtle graphics! Because many 'physics concepts' are involved, it is hard to explain!! So it all goes under 'zero', and we pretend the reader already 'knows'??

Rolling resistance

Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc. is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Another cause of rolling resistance lies in the slippage between the wheel and the surface, which dissipates energy. Note that only the last of these effects involves friction, therefore the name “rolling friction” is to an extent a misnomer.

In analogy with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction.[1]
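In symbols, a standard way of writing that statement, with C_{rr} the coefficient of rolling resistance and N the normal force, is:

F_{rr} = C_{rr} \, N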

Any coasting wheeled vehicle will gradually slow down due to rolling resistance including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, speed,[2] load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete.

Figure 1  Hard wheel rolling on and deforming a soft surface, resulting in the reaction force R from the surface having a component that opposes the motion. (W is some vertical load on the axle, F is some towing force applied to the axle, r is the wheel radius, and both friction with the ground and friction at the axle are assumed to be negligible and so are not shown. The wheel is rolling to the left at constant speed.) Note that R is the resultant force from non-uniform pressure at the wheel-roadbed contact surface. This pressure is greater towards the front of the wheel due to hysteresis.

Primary cause

Asymmetrical pressure distribution between rolling cylinders due to viscoelastic material behavior (rolling to the right).[3]

The primary cause of pneumatic tire rolling resistance is hysteresis:[4]

A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber.

— National Academy of Sciences[5]

This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate.

The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion.

Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction.[6] Note that railroads also have hysteresis in the roadbed structure.[7]

 

It is not that the author is unwilling to explain; crossing fields and departments, one can hardly claim expert knowledge ★

Straight line mechanism

In the late seventeenth century, before the development of the planer and the milling machine, it was extremely difficult to machine straight, flat surfaces. For this reason, good prismatic pairs without backlash were not easy to make. During that era, much thought was given to the problem of attaining a straight-line motion as a part of the coupler curve of a linkage having only revolute connections. Probably the best-known result of this search is the straight-line mechanism developed by Watt for guiding the piston of early steam engines. Although it does not generate an exact straight line, a good approximation is achieved over a considerable distance of travel.

Peaucellier–Lipkin linkage:
bars of identical colour are of equal length

 

A Sarrus linkage

 

Roberts linkage

 

Stepping back, we will speak only of how to help the 'small car' with its 'linear motion':

Linear motion

Linear motion (also called rectilinear motion[1]) is motion along a straight line, and can therefore be described mathematically using only one spatial dimension. Linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration), and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position x, which varies with t (time). An example of linear motion is an athlete running 100 m along a straight track.[2]

Linear motion is the most basic of all motion. According to Newton’s first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.[3]

One may compare linear motion to general motion. In general motion, a particle’s position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.[2]

Neglecting the rotation and other motions of the Earth, an example of linear motion is the ball thrown straight up and falling back straight down.
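For reference, the two cases named in the excerpt can be written as follows, taking x_0 as the initial position, v as a constant velocity, v_0 as the initial velocity and a as a constant acceleration (uniform acceleration being the simplest non-uniform case):

x(t) = x_0 + v t                          (uniform linear motion)
x(t) = x_0 + v_0 t + \frac{1}{2} a t^2    (uniformly accelerated linear motion)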

 

, while avoiding the complications of 'dynamics':

Analytical dynamics

In classical mechanics, analytical dynamics, or more briefly dynamics, is concerned with the relationship between motion of bodies and its causes, namely the forces acting on the bodies and the properties of the bodies (particularly mass and moment of inertia). The foundation of modern-day dynamics is Newtonian mechanics and its reformulation as Lagrangian mechanics and Hamiltonian mechanics.[1][2]

History

The field has a long and important history, as remarked by Hamilton: “The theoretical development of the laws of motion of bodies is a problem of such interest and importance that it has engaged the attention of all the eminent mathematicians since the invention of the dynamics as a mathematical science by Galileo, and especially since the wonderful extension which was given to that science by Newton.” William Rowan Hamilton, 1834 (Transcribed in Classical Mechanics by J.R. Taylor, p. 237[3])

Some authors (for example, Taylor (2005)[3] and Greenwood (1997)[4]) include special relativity within classical dynamics.

Relationship to statics, kinetics, and kinematics

Historically, there were three branches of classical mechanics:

  • “statics” (the study of equilibrium and its relation to forces)
  • “kinetics” (the study of motion and its relation to forces).[5]
  • “kinematics” (dealing with the implications of observed motions without regard for circumstances causing them).[6]

These three subjects have been connected to dynamics in several ways. One approach combined statics and kinetics under the name dynamics, which became the branch dealing with determination of the motion of bodies resulting from the action of specified forces;[7] another approach separated statics, and combined kinetics and kinematics under the rubric dynamics.[8][9] This approach is common in engineering books on mechanics, and is still in widespread use among mechanicians.

Fundamental importance in engineering, diminishing emphasis in physics

Today, dynamics and kinematics continue to be considered the two pillars of classical mechanics. Dynamics is still included in mechanical, aerospace, and other engineering curricula because of its importance in machine design, the design of land, sea, air and space vehicles and other applications. However, few modern physicists concern themselves with an independent treatment of “dynamics” or “kinematics,” never mind “statics” or “kinetics.” Instead, the entire undifferentiated subject is referred to as classical mechanics. In fact, many undergraduate and graduate text books since mid-20th century on “classical mechanics” lack chapters titled “dynamics” or “kinematics.”[3][10][11][12][13][14][15][16][17] In these books, although the word “dynamics” is used when acceleration is ascribed to a force, the word “kinetics” is never mentioned. However, clear exceptions exist. Prominent examples include The Feynman Lectures on Physics.[18]

List of Fundamental Dynamics Principles

 

, and let these serve as a 'stepping stone' for those advancing toward the target ☆

 

 

 

 

 

 

 

 

 

GoPiGo Small Car: Checking the Lens

The 'lens' is the small car's 'eye':

Classic of Poetry · Airs of the States · Odes of Qi · “Yi Jie”

Ah, how splendid he is, so stately and tall!
His brow so noble, his fine eyes so bright!
Nimble and graceful his step, and true his shot!

Ah, how renowned he is, his fine eyes so clear!
His bearing complete, he shoots at the target all day
and never misses the mark: truly our sister's son!

Ah, how handsome he is, clear-eyed and gentle!
Choice in the dance, piercing in his archery,
his four arrows all find their mark, to guard against disorder!

Duke Zhuang of Lu's clear and lovely eyes surely came of his tolerance and generosity! That is why he could let Guan Zhong go, and only because of that do we have

Analects · “Xian Wen”

Zigong said: Surely Guan Zhong was not a man of ren (humaneness)? When Duke Huan had Prince Jiu killed, Guan Zhong could not bring himself to die with his master, and then even served Huan as minister.

The Master said: Guan Zhong served Duke Huan as minister, made him hegemon over the feudal lords, and set the whole realm in order; to this day the people still enjoy his benefactions. Were it not for Guan Zhong, we would be wearing our hair unbound and folding our robes to the left. Should he instead have shown the petty fidelity of a common man or woman, strangling himself in some ditch with nobody ever knowing of it?

Is the Analects praising or blaming him?

Earlier we expounded Michael Nielsen's great work Neural Networks and Deep Learning. Now let us talk about the optical principles of the eye that 'recognises things and people', so that the account is complete from end to end.

We have not yet spoken of the □○ of the 'eye':

The eye is the organ of vision; it senses light and converts it into electrochemical impulses in nerves. The more complex eyes are optical systems that gather light from the surroundings, regulate its intensity through the iris, focus it with an adjustable lens, and project it onto the light-sensitive retina to form an image, which is converted into electrical signals and carried by the optic nerve to the brain's visual system and other parts of the brain. Eyes can be divided into ten different types according to their ability to discriminate colour, and 96% of animal species have eyes that are complex optical systems.[1] Among these, the eyes of molluscs, chordates and arthropods can form images.[2]

The 'eyes' of microorganisms have the simplest structure, detecting only whether the surroundings are dark or light, which matters for the entrainment of circadian rhythms.[3] In more complex eyes, photosensitive retinal ganglion cells send signals along the retinohypothalamic tract to the suprachiasmatic nucleus to influence physiological regulation, and also to the pretectal nucleus to control the pupillary light reflex.

Schematic_diagram_of_the_human_eye_zh-hans.svg

─── Excerpted from “The World of Light: [□○ Reading] On the Eye (Part 1)”

 

How could we fail to treat it well? So we fit the small car with a 'look-around servo' for it??

Attach the Raspberry Pi Camera With the Servo

Attach a Camera or Ultrasonic Sensor to the Raspberry Pi Robot GoPiGo (1)

Start with the Raspberry Pi Camera.

This tutorial assumes you have already built the servo package.  For instructions on building the servo package please see this.

 

#!/usr/bin/env python
########################################################################       
# This example demonstrates controlling the Servo on the GoPiGo robot.
# In this example, we control the servo with a keyboard.  When you run
# this example from the command line, you'll be prompted for input
# Press a key (a, d, or s) to move the servo.  The data is collected, 
# and sent to the GoPiGo.
#
# http://www.dexterindustries.com/GoPiGo/
# History
# ------------------------------------------------
# Author     Date               Comments
# Karan      21 Aug 14          Initial Authoring
#                                                                                
'''
## License
 GoPiGo for the Raspberry Pi: an open source robotics platform for the Raspberry Pi.
 Copyright (C) 2017  Dexter Industries

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/gpl-3.0.txt>.
'''
#
########################################################################
from gopigo import *

servo_pos=90
print "CONTROLS"
print "a: move servo left"
print "d: move servo right"
print "s: move servo home"
print "Press ENTER to send the commands"

while True:
        #Get the input from the user and change the servo angles
        inp=raw_input()                         # Get keyboard input.
        # Now decide what to do with that keyboard input.  
        if inp=='a':
                servo_pos=servo_pos+10  # If input is 'a' move the servo forward 10 degrees.
        elif inp=='d':
                servo_pos=servo_pos-10  # If the input is 'd' move the servo backward by 10 degrees.
        elif inp=='s':
                servo_pos=90

        #Get the servo angles back to the normal 0 to 180 degree range
        if servo_pos>180:
                servo_pos=180
        if servo_pos<0:
                servo_pos=0

        servo(servo_pos)                # This function updates the servo with the latest positon.  Move the servo.
        time.sleep(.1)                  # Take a break in between operations.  
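To check that the lens really does pan with the servo, a small sketch along the following lines could grab one frame per position. It assumes the same gopigo library's servo() function used above plus OpenCV; the angles and file names are illustrative:

# Sweep the servo and save one camera frame per position.
import time
import cv2
from gopigo import servo

cap = cv2.VideoCapture(0)            # Raspberry Pi camera exposed through the V4L2 driver

for angle in range(30, 151, 30):     # sweep from 30 to 150 degrees in steps of 30
    servo(angle)                     # point the camera
    time.sleep(0.5)                  # give the servo time to settle
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("view_%03d.png" % angle, frame)

servo(90)                            # return to the home position
cap.release()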

 

If it were only a matter of 'holding the camera firmly in place', this is, it must be said, rather expensive!!

 

 

 

 

 

 

 

 

 
