VIDEOS 1 TO 50
Stereo 3D Vision (How to avoid being dinner for Wolves) - Computerphile
Published: 2016/02/24
Channel: Computerphile
EGGN 512 - Lecture 21-1 Stereo Vision
Published: 2012/03/19
Channel: William Hoff
CVFX Lecture 15: Stereo correspondence
Published: 2014/03/20
Channel: Rich Radke
SP1 Real-Time Stereo Vision System
Published: 2015/10/28
Channel: Nerian Vision Technologies
Lecture 16: Stereo
Published: 2012/11/20
Channel: UCF CRCV
A Tutorial on Stereo Vision for 3D Depth Perception (Preview)
Published: 2015/02/27
Channel: Embedded Vision Alliance
EGGN 512 - Lecture 21-3 Stereo Vision
Published: 2012/03/19
Channel: William Hoff
Stereo vision with GoPro Hero3 and algorithms such as BM, SGBM, ADCensus (+ source code @ GitHub)
Published: 2014/05/17
Channel: Dennis Lünsch
Computer Vision System Design Deep Learning and 3D Vision
Published: 2017/06/30
Channel: MATLAB
EGGN 512 - Lecture 21-2 Stereo Vision
Published: 2012/03/19
Channel: William Hoff
Javascript Computer Stereo Vision Trips - JCSV
Published: 2017/07/20
Channel: Mario Abbruscato
3d reconstruction with stereo cameras
Published: 2014/05/02
Channel: Daniel Lee
Measure distance experiment- using OpenCV- Stereo Vision
Published: 2014/08/08
Channel: Huy Nguyen Dinh
Determining Distance with Stereo Vision and MATLAB (PowerPoint presentation)
Published: 2015/08/11
Channel: aquaimmy
Automatic Camera Re-Calibration for Robust Stereo Vision
Published: 2016/03/04
Channel: Nerian Vision Technologies
stereo vision using raspberry pi
Published: 2015/05/25
Channel: nalan karunanayake
Hardware Accelerated Stereo Vision
Published: 2014/06/12
Channel: Manuel Espinoza
Coherent Depth in Stereo Vision
Published: 2016/08/17
Channel: Microsoft Research
Demo Stereo Vision using Matlab example
Published: 2012/07/09
Channel: PEET ROBO
Stereo vision Depth extraction - DUTh
Published: 2009/04/06
Channel: GryphonLab
Stereo Vision - Depth Map
Published: 2016/09/23
Channel: Wojciech Mormul
Real-time Hybrid Stereo Vision System for HD resolution disparity map(BMVC2014)
Published: 2014/10/20
Channel: Shumer216
Automotive computer vision: 3D reconstruction & distance measurement with stereo camera #2
Published: 2015/03/29
Channel: Jan Kučera
Object detection and distance calculation based on stereo vision technique
Published: 2011/05/20
Channel: Nguyen Van Duc
Low Resolution Stereo Vision Obstacle Detection (final demo)
Published: 2015/10/19
Channel: Maor Berenfeld
CVFX Lecture 18: Stereo rig calibration and projective reconstruction
Published: 2014/03/31
Channel: Rich Radke
stereo vision system and object recognition project
Published: 2017/07/22
Channel: Yahya Ewida
Stereo Vision Overview
Published: 2010/08/09
Channel: FLIR Integrated Imaging Solutions
C++ OpenCV Stereo Vision Trips #001
Published: 2017/05/21
Channel: Mario Abbruscato
Real-time Dense Passive Stereo Vision: Optimizing Computer Vision Applications Using OpenCL on ARM
Published: 2015/08/28
Channel: Arm
Mobile robot-Moving and picking object-Using Computer Vision-Stereo Vision
Published: 2015/07/22
Channel: Huy Nguyen Dinh
Stereo Vision - Coordinate System Measurement
Published: 2016/09/15
Channel: Wojciech Mormul
NVIDIA Jetson Partner Stories: Stereolabs Brings Advanced Computer Vision Capabilities to 3D Mapping
Published: 2015/11/13
Channel: NVIDIADeveloper
Implementation of a PMF Algorithm - Stereo vision - Computer Vision
Published: 2015/12/11
Channel: matteo mori
Stereo Vision - Point Cloud
Published: 2016/09/27
Channel: Wojciech Mormul
Stereo Vision using SimpleCV (Calibration and Disparity Map)
Published: 2012/08/18
Channel: Vijay Mahantesh SM
Stereo vision experiment using low cost webcams
Published: 2010/01/21
Channel: Paul
Point Cloud Demo with Tara - USB 3.0 Stereo vision camera
Published: 2016/06/30
Channel: e-con Systems
C++ OpenCV Stereo Vision Trips #002 | Two in One
Published: 2017/06/15
Channel: Mario Abbruscato
Thermal Stereo Vision - SGBM Full DP
Published: 2017/04/26
Channel: Rasoul Mojtahedzadeh
Stereolabs Gives Computers 3D Vision
Published: 2016/01/09
Channel: TechCrunch
AI robot with robotic arm and stereovision cameras - demo 1,2,3
Published: 2015/06/07
Channel: Marek Tučáni
Stereo Vision and Projection
Published: 2010/01/11
Channel: capineirocapaz
Multi View Stereo Vision
Published: 2010/11/12
Channel: deraufschneider
3D tracking with an embedded stereo camera with FPGA onboard processing
Published: 2014/11/30
Channel: Computer Vision and Embedded Systems
Humannoid robot Stereo vision
Published: 2017/04/28
Channel: GoomGum
Stereo Vision Demonstration
Published: 2008/07/31
Channel: DeSinc
Ultra Compact Stereo Vision IP
Published: 2014/03/13
Channel: Intel FPGA
Kalman Filter application in Computer Vision (1- Stereo vision non-linearity)
Published: 2015/07/10
Channel: Hamid Bazargani
Stereo vision calibration
Published: 2012/06/08
Channel: Nam Cao

WIKIPEDIA ARTICLE

From Wikipedia, the free encyclopedia

Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two panels. This is similar to the biological process of stereopsis. Stereoscopic images are often stored as MPO (Multi Picture Object) files. Recently, researchers have pushed to develop methods that reduce the storage these files require while maintaining the quality of the stereo image.[1][2]

Outline

In traditional stereo vision, two cameras, displaced horizontally from one another, are used to obtain two differing views of a scene, in a manner similar to human binocular vision. By comparing these two images, the relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.

For a human to compare the two images, they must be superimposed in a stereoscopic device, with the image from the right camera shown to the observer's right eye and the image from the left camera to the left eye.

In a computer vision system, several pre-processing steps are required (a code sketch of the full pipeline follows the list below).[3]

  1. The image must first be undistorted, such that barrel distortion and tangential distortion are removed. This ensures that the observed image matches the projection of an ideal pinhole camera.
  2. The image must be projected back to a common plane to allow comparison of the image pairs, known as image rectification.
  3. An information measure which compares the two images is minimized. This gives the best estimate of the position of features in the two images, and creates a disparity map.
  4. Optionally, the received disparity map is projected into a 3D point cloud. By utilising the cameras' projective parameters, the point cloud can be computed such that it provides measurements at a known scale.
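As a concrete illustration of these four steps, the sketch below uses OpenCV's Python bindings; the file names, calibration values, and matcher settings are placeholder assumptions rather than values from the source.

```python
# Sketch of the stereo pipeline above using OpenCV (Python). Calibration
# matrices, distortion coefficients and file names are placeholder assumptions.
import cv2
import numpy as np

# Load a left/right image pair (hypothetical file names).
img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
h, w = img_l.shape

# Intrinsics and distortion from a prior calibration (placeholder values).
K_l = K_r = np.array([[700.0, 0, w / 2], [0, 700.0, h / 2], [0, 0, 1]])
d_l = d_r = np.zeros(5)                      # assume negligible lens distortion
R = np.eye(3)                                # rotation between the two cameras
T = np.array([[-0.1], [0.0], [0.0]])         # 10 cm horizontal baseline

# Steps 1-2: undistortion and rectification onto a common image plane.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r, (w, h), R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, (w, h), cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, d_r, R2, P2, (w, h), cv2.CV_32FC1)
rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)

# Step 3: minimise a matching cost to obtain a disparity map (semi-global matching here).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Step 4 (optional): reproject the disparity map to a metric 3D point cloud.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```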

Active stereo vision

Active stereo vision is a form of stereo vision which actively employs a light source, such as a laser or structured light, to simplify the stereo matching problem. The opposite term is passive stereo vision.

Conventional structured-light vision (SLV)

Conventional structured-light vision (SLV) employs structured light or a laser and finds projector-camera correspondences.[4][5]

Conventional active stereo vision (ASV)

Conventional active stereo vision (ASV) also employs structured light or a laser, but the stereo matching is performed only for camera-camera correspondences, in the same way as in passive stereo vision.

Structured-light stereo (SLS)[6]

There is a hybrid technique, which utilizes both camera-camera and projector-camera correspondences.[6]

Applications

3D stereo displays find many applications in entertainment, information transfer and automated systems. Stereo vision is highly important in fields such as robotics to extract information about the relative position of 3D objects in the vicinity of autonomous systems. Other applications for robotics include object recognition, where depth information allows the system to separate occluding image components, such as one chair in front of another, which the robot might otherwise not be able to distinguish as separate objects by any other criteria.

Scientific applications for digital stereo vision include the extraction of information from aerial surveys, the calculation of contour maps, geometry extraction for 3D building mapping, and the calculation of 3D heliographic information such as that obtained by the NASA STEREO project.

Detailed definition

Figure: diagram describing the relationship of image displacement to depth with stereoscopic images, assuming flat, co-planar image planes.

A pixel records color at a position. The position is identified by its coordinates in the grid of pixels (x, y) and the depth to the pixel, z.

Stereoscopic vision gives two images of the same scene from different positions. In the diagram, light from the point A passes through the entry points of pinhole cameras at B and D, onto image screens at E and H.

In the diagram the distance between the centers of the two camera lenses is BD = BC + CD. The following triangles are similar:

  • ACB and BFE
  • ACD and DGH

  • k = BD · BF
  • z = AC is the distance from the camera plane to the object.

So, assuming the cameras are level and their image planes lie in the same plane, the displacement along the baseline direction between the same pixel in the two images is

d = k / z

where k is the distance between the two cameras multiplied by the distance from the lens to the image.
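As a purely illustrative worked example (the numbers are assumptions, not taken from the source): with a baseline BD = 0.1 m and a lens-to-image distance BF = 0.01 m,

k = BD · BF = 0.1 m × 0.01 m = 0.001 m²
d(z = 2 m) = k / z = 0.001 / 2 = 0.0005 m = 0.5 mm
d(z = 4 m) = k / z = 0.001 / 4 = 0.00025 m = 0.25 mm

so doubling the depth halves the displacement, which is the inverse proportionality noted in the outline above.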

The depth components in the two images are z1 and z2, given by

z1(x, y) = min { v : v = z2(x + k/v, y) }
z2(x, y) = min { v : v = z1(x - k/v, y) }

These formulas allow for the occlusion of voxels, seen in one image on the surface of the object, by closer voxels seen in the other image on the surface of the object.

Image rectification

Where the image planes are not co-planar, image rectification is required to adjust the images as if they were co-planar. This may be achieved by a linear transformation.

The images may also need rectification to make each image equivalent to the image taken from a pinhole camera projecting to a flat plane.
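When calibration data are unavailable, one common way to obtain such a linear transformation is to estimate a pair of homographies from point correspondences. The sketch below is an illustration under that assumption, using OpenCV; the matched point arrays are assumed to already exist and their computation by a feature matcher is omitted.

```python
# Sketch: rectifying an uncalibrated image pair with planar homographies (OpenCV).
# pts_l and pts_r are assumed to be matched point coordinates (N x 2 float arrays).
import cv2
import numpy as np

def rectify_uncalibrated(img_l, img_r, pts_l, pts_r):
    h, w = img_l.shape[:2]
    # Fundamental matrix from the correspondences (RANSAC rejects outliers).
    F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC)
    pts_l = pts_l[inlier_mask.ravel() == 1]
    pts_r = pts_r[inlier_mask.ravel() == 1]
    # Homographies H1, H2 that map epipolar lines to corresponding horizontal rows.
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts_l, pts_r, F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")
    # Apply the linear (projective) transformations to both images.
    rect_l = cv2.warpPerspective(img_l, H1, (w, h))
    rect_r = cv2.warpPerspective(img_r, H2, (w, h))
    return rect_l, rect_r
```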

Least squares information measure

The normal distribution is

P(x, μ, σ) = 1 / (σ sqrt(2π)) · exp( -(x - μ)^2 / (2σ^2) )

Probability is related to information content, described by a message length L, by

P(x) = 2^(-L(x))

so

L(x) = -log2 P(x)

and, for the normal distribution,

L(x, μ, σ) = log2(σ sqrt(2π)) + (x - μ)^2 / (2σ^2 ln 2)

For the purposes of comparing stereoscopic images, only the relative message length matters. Based on this, the information measure I, called the Sum of Squares of Differences (SSD), is

I(x, μ, σ) = (x - μ)^2 / σ^2

where

L(x, μ, σ) = I(x, μ, σ) / (2 ln 2) + log2(σ sqrt(2π))
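For illustration only, a naive block-matching sketch that picks, for every pixel, the disparity whose window minimizes the SSD. The window size and disparity range are arbitrary assumptions, the inputs are assumed to be rectified grayscale arrays, and real implementations vectorize or approximate this heavily.

```python
# Naive SSD block matching over rectified grayscale images (illustrative only).
import numpy as np

def ssd_disparity(left, right, max_disp=64, half_win=3):
    """For each pixel, pick the disparity whose window minimizes the SSD."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            patch_l = left[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
            best_d, best_ssd = 0, np.inf
            for d in range(max_disp):
                patch_r = right[y - half_win:y + half_win + 1,
                                x - d - half_win:x - d + half_win + 1]
                ssd = np.sum((patch_l - patch_r) ** 2)  # sum of squared differences
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```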

Other measures of information content

Because of the cost in processing time of squaring numbers in SSD, many implementations use the Sum of Absolute Differences (SAD) as the basis for computing the information measure. Other methods use normalized cross correlation (NCC).
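The three window costs mentioned here differ only in how pixel differences inside the window are aggregated; a compact sketch, assuming the inputs are equal-sized floating-point patches:

```python
# Window-matching costs: SSD, SAD (cheaper, no squaring) and NCC (illumination-robust).
import numpy as np

def ssd(a, b):
    return np.sum((a - b) ** 2)           # sum of squared differences

def sad(a, b):
    return np.sum(np.abs(a - b))          # sum of absolute differences

def ncc(a, b):
    # Normalized cross correlation: a similarity score, so it is maximized
    # rather than minimized when used for matching.
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.sqrt(np.sum(a0 ** 2) * np.sum(b0 ** 2)) + 1e-12)
```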

Information measure for stereoscopic images

The least squares measure may be used to measure the information content of the stereoscopic images,[7] given depths z1(x, y) and z2(x, y) at each point. First the information needed to express one image in terms of the other is derived. This is called Im.

A color difference function should be used to fairly measure the difference between colors. The color difference function is written cd in the following. The measure of the information needed to record the color matching between the two images is

Im = Σ_{x,y} I( cd( color1(x, y), color2(x + k/z1(x, y), y) ), 0, σm )

where color1 and color2 are the colors recorded by the two cameras and σm is the expected standard deviation of the color difference for a correct match.

An assumption is made about the smoothness of the image. Assume that two pixels are more likely to be the same color the closer the voxels they represent are. This measure is intended to favor similar colors being grouped at the same depth. For example, if an object in front occludes an area of sky behind, the measure of smoothness favors the blue pixels all being grouped together at the same depth.

The total measure of smoothness uses the distance between voxels as an estimate of the expected standard deviation of the color difference,

Is = Σ over neighboring pixels (x1, y1), (x2, y2) of I( cd( color(x1, y1), color(x2, y2) ), 0, distance between the voxels (x1, y1, z(x1, y1)) and (x2, y2, z(x2, y2)) )

The total information content is then the sum,

It(z1, z2) = Im + Is

The z component of each pixel must be chosen to give the minimum value for the information content. This will give the most likely depths at each pixel. The minimum total information measure is

min over z1 and z2 of It(z1, z2)

The depth functions for the left and right images are the pair

(z1, z2) = argmin It(z1, z2)

Smoothness

Smoothness is a measure of how similar nearby colors are. It rests on the assumption that objects tend to be colored with a small number of colors, so if two nearby pixels have the same color they most likely belong to the same object.

The method described above for evaluating smoothness is based on information theory and the assumption that the color of a voxel influences the color of nearby voxels according to a normal distribution in the distance between points. The model is based on approximate assumptions about the world.

Another method based on prior assumptions of smoothness is auto-correlation.

Smoothness is a property of the world. It is not inherently a property of an image. For example, an image constructed of random dots would have no smoothness, and inferences about neighboring points would be useless.

Theoretically, smoothness, along with other properties of the world, should be learned. This appears to be what the human vision system does.

Methods of implementation

The minimization problem is NP-complete, meaning that an exact solution cannot, in general, be found in a reasonable time. However, heuristic methods exist that approximate the result in a reasonable amount of time, such as dynamic programming along scanlines or semi-global matching, and methods based on neural networks also exist.[8] Efficient implementation of stereoscopic vision is an area of active research.
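As one example of such a heuristic, disparities can be computed independently on each scanline with dynamic programming, trading global optimality for speed. The sketch below is illustrative only: the absolute-difference matching cost and the occlusion penalty value are assumptions, not parameters given by the source.

```python
# Sketch: dynamic-programming stereo on a single scanline (a classic heuristic).
# Pixels of the left and right rows are aligned like sequences, with a fixed
# penalty for occluded (unmatched) pixels. Parameter values are arbitrary.
import numpy as np

def scanline_dp(left_row, right_row, occ_cost=20.0):
    n, m = len(left_row), len(right_row)
    cost = np.zeros((n + 1, m + 1), dtype=np.float64)
    cost[:, 0] = np.arange(n + 1) * occ_cost      # leading occlusions in the left row
    cost[0, :] = np.arange(m + 1) * occ_cost      # leading occlusions in the right row
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = abs(float(left_row[i - 1]) - float(right_row[j - 1]))
            cost[i, j] = min(cost[i - 1, j - 1] + match,   # pixels i-1 and j-1 correspond
                             cost[i - 1, j] + occ_cost,    # left pixel occluded
                             cost[i, j - 1] + occ_cost)    # right pixel occluded
    # Backtrack to recover the disparity (i - j) of each matched left pixel.
    # Note: this generic alignment allows negative disparities; a real left/right
    # pair would additionally constrain the match to j <= i.
    disparity = np.full(n, -1, dtype=np.int32)             # -1 marks occluded pixels
    i, j = n, m
    while i > 0 and j > 0:
        match = abs(float(left_row[i - 1]) - float(right_row[j - 1]))
        if cost[i, j] == cost[i - 1, j - 1] + match:
            disparity[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif cost[i, j] == cost[i - 1, j] + occ_cost:
            i -= 1
        else:
            j -= 1
    return disparity
```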

References

  1. ^ Ortis, A., Rundo, F., Di Giore, G., & Battiato, S. (2013, September). Adaptive compression of stereoscopic images. In International Conference on Image Analysis and Processing (pp. 391–399). Springer, Berlin, Heidelberg.
  2. ^ Ortis, Alessandro, and Sebastiano Battiato. "A new fast matching method for adaptive compression of stereoscopic images." Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2015. Vol. 9393. International Society for Optics and Photonics, 2015.
  3. ^ Bradski, Gary; Kaehler, Adrian. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly. 
  4. ^ C. Je, S. W. Lee, and R.-H. Park. High-Contrast Color-Stripe Pattern for Rapid Structured-Light Range Imaging. Computer Vision – ECCV 2004, LNCS 3021, pp. 95–107, Springer-Verlag Berlin Heidelberg, May 10, 2004.
  5. ^ C. Je, S. W. Lee, and R.-H. Park. Colour-Stripe Permutation Pattern for Rapid Structured-Light Range Imaging. Optics Communications, Volume 285, Issue 9, pp. 2320-2331, May 1, 2012.
  6. ^ a b W. Jang, C. Je, Y. Seo, and S. W. Lee. Structured-Light Stereo: Comparative Analysis and Integration of Structured-Light and Active Stereo for Measuring Dynamic Shape. Optics and Lasers in Engineering, Volume 51, Issue 11, pp. 1255-1264, November, 2013.
  7. ^ Lazaros, Nalpantidis; Sirakoulis, Georgios Christou; Gasteratos, Antonios (2008). "Review of stereo vision algorithms: from software to hardware". International Journal of Optomechatronics. 2: 435–462. doi:10.1080/15599610802438680.
  8. ^ Wang, Jung-Hua; Hsiao, Chih-Ping (1999). "On disparity matching in stereo vision via a neural network framework". Proc. Natl. Sci. Counc. ROC(A). 23 (5): 665–678.
