Channel: OpenCV Q&A Forum - RSS feed

Matching/Rectify Images with different Sizes

Hi, I want to match the pixels of two different cameras. I already have the camera matrices of both cameras, along with the rvecs and tvecs, and I have undistorted the images. Now that I have the matrices R and T, I would call cv::stereoRectify. The problem is that this function needs an image size as a parameter, but my cameras have different image sizes and, worse, different aspect ratios. Is there a way to rectify/match images with different sizes and aspect ratios? I cannot simply scale one image up to the size of the other, because that would deform the image (due to the aspect ratio). ~Tankard
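One workaround, as a minimal sketch rather than an official recipe: pad the smaller image at the bottom/right edges with copyMakeBorder so both inputs share one size. Padding on those edges leaves every pixel coordinate unchanged, so the existing camera matrices (including cx, cy) stay valid. The file names and sizes below are placeholders.

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    int main()
    {
        cv::Mat img1 = cv::imread("cam1.png");   // e.g. 1280x720
        cv::Mat img2 = cv::imread("cam2.png");   // e.g. 1024x768

        // One size that covers both images
        cv::Size common(std::max(img1.cols, img2.cols),
                        std::max(img1.rows, img2.rows));

        // Pad only at the bottom/right so pixel coordinates are untouched
        cv::Mat pad1, pad2;
        cv::copyMakeBorder(img1, pad1, 0, common.height - img1.rows,
                           0, common.width - img1.cols,
                           cv::BORDER_CONSTANT, cv::Scalar::all(0));
        cv::copyMakeBorder(img2, pad2, 0, common.height - img2.rows,
                           0, common.width - img2.cols,
                           cv::BORDER_CONSTANT, cv::Scalar::all(0));

        // stereoRectify can now be called once with the common size;
        // K1, D1, K2, D2, R, T stand for the values you already have:
        // cv::stereoRectify(K1, D1, K2, D2, common, R, T, R1, R2, P1, P2, Q);
        return 0;
    }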

Output of triangulation is flat (z value near 0)

Hello, I am trying to triangulate points with stereo cameras. The output always has a z value near 0.

![image description](/upfiles/14279667042760975.jpg)

triangulatePoints input (together with projectionMatrix1 and projectionMatrix2):

    Mat projPoints1
    [380, 181, 365;
     130, 229, 238]
    Mat projPoints2
    [305, 104, 291;
     165, 265, 273]

triangulatePoints output:

    Mat points4D
    [0.8609779430113023, 0.3817725390523637, 0.7116126550869795;
     0.5086202250537253, 0.9242461720198268, 0.7025628565436543;
     0.002441490946153247, 0.002977891704646576, 0.00220898865581931;
     0.004060472421340527, 0.003142306987421266, 0.002825260058571214]

stereoCalibrate output:

    Rotation Matrix
    [0.9999959644741038, 0.002840952105795514, 5.161359762791157e-006;
     -0.002840952056622284, 0.9999959644423463, -9.509666388605118e-006;
     -5.188355440578255e-006, 9.494964836467141e-006, 0.9999999999414633]
    Translation Vector
    [-89.9571208702335;
     21.25152405495268;
     0.001574274342767844]

stereoRectify output:

    projectionMatrix1
    [1, 0, 65.52658225987989, 0;
     0, 1, -65.26453360733174, 0;
     0, 0, 1, 0]
    projectionMatrix2
    [1, 0, 65.52658225987989, -92.43327794900694;
     0, 1, -65.26453360733174, 0;
     0, 0, 1, 0]

Calibrate function:

    public void calibrateSingle(final Size patternSize, final Mat[] images) {
        List<Mat> objectPoints = new ArrayList<>();
        objectPoints.add(getCorner3f(patternSize));

        List<Mat> imagePoints1 = new ArrayList<>();
        final MatOfPoint2f corners1 = new MatOfPoint2f();
        if (!getCorners(images[0], corners1, patternSize)) { return; }
        imagePoints1.add(corners1);

        List<Mat> imagePoints2 = new ArrayList<>();
        final MatOfPoint2f corners2 = new MatOfPoint2f();
        if (!getCorners(images[1], corners2, patternSize)) { return; }
        imagePoints2.add(corners2);

        final Mat cameraMatrix1 = new Mat();
        final Mat cameraMatrix2 = new Mat();
        final Mat distCoeff1 = new Mat();
        final Mat distCoeff2 = new Mat();

        Calib3d.stereoCalibrate(
            objectPoints, imagePoints1, imagePoints2,
            cameraMatrix1, distCoeff1, cameraMatrix2, distCoeff2,
            images[0].size(), rotation, translation, essential, fundament);

        Calib3d.stereoRectify(
            cameraMatrix1, distCoeff1, cameraMatrix2, distCoeff2,
            images[0].size(), rotation, translation,
            rectification1, rectification2,
            projectionMatrix1, projectionMatrix2, dtdMatrix);
    }

Triangulate function:

    public Mat triangulatePoints(final Mat projPoints1, final Mat projPoints2) {
        final Mat points4D = new Mat();
        Calib3d.triangulatePoints(projectionMatrix1, projectionMatrix2,
                                  projPoints1, projPoints2, points4D);
        return points4D;
    }

I have tried many things, such as using Calib3d.correctMatches, and calibrating the cameras individually before the stereo calibration and then using those distCoeffs and camera matrices, but I get the same result every time. If I move the cameras closer to the objects, the triangle gets smaller and moves closer to the point (0, 0, 0). I have tried many things over the last days and have no more ideas. Thanks for your help.
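One likely cause, sketched here in C++ (the equivalent calls exist in the Java bindings): the P1/P2 returned by stereoRectify describe the *rectified* frame, so raw pixel coordinates must first be mapped into that frame with undistortPoints, passing R1/P1 and R2/P2, before they go into triangulatePoints. The parameter names below stand for the stereoCalibrate/stereoRectify outputs.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Triangulate raw pixel correspondences against the rectified
    // projection matrices produced by stereoRectify.
    cv::Mat triangulateRaw(const std::vector<cv::Point2f>& raw1,
                           const std::vector<cv::Point2f>& raw2,
                           const cv::Mat& K1, const cv::Mat& D1,
                           const cv::Mat& K2, const cv::Mat& D2,
                           const cv::Mat& R1, const cv::Mat& R2,
                           const cv::Mat& P1, const cv::Mat& P2)
    {
        std::vector<cv::Point2f> rect1, rect2;
        // Map raw pixels into the rectified frame that P1/P2 describe
        cv::undistortPoints(raw1, rect1, K1, D1, R1, P1);
        cv::undistortPoints(raw2, rect2, K2, D2, R2, P2);

        cv::Mat points4D;
        cv::triangulatePoints(P1, P2, rect1, rect2, points4D);
        return points4D; // homogeneous: divide each column by its 4th row
    }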

Disparity map issues

I've created a small stereo camera rig with an Arduino and some [cheap serial cameras](http://www.adafruit.com/product/1386). My goal is to (ideally) reconstruct a scene in 3D from a pair of stereo pictures. However, the disparity maps I am getting are of very poor quality, much worse than some of the examples I have seen online. I'm worried it's just because the cameras are really cheap, but I wanted to ask here first in case there is something I am doing wrong. Stereo calibration seems to work fine; the error value returned was around 0.3, and the rectified images seem to line up correctly. I've been messing around with the input parameters, and the image below is basically the best I've been able to get. As you can see, it's far from great. ![Disparity map](http://i.stack.imgur.com/VMKD0.jpg) I've tested the input parameters pretty extensively, so I don't think that's the problem, which narrows it down to either the calibration or the cameras themselves. Does anyone with more experience have any ideas?
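Before blaming the cameras, one cheap sanity check (a sketch, assuming both rectified images share the same size after remapping): stack the pair side by side and draw horizontal lines. Corresponding features must sit on the same image row; if they don't, recalibrate before tuning the block matcher any further.

    #include <opencv2/opencv.hpp>

    void drawEpipolarCheck(const cv::Mat& rectLeft, const cv::Mat& rectRight)
    {
        cv::Mat canvas;
        cv::hconcat(rectLeft, rectRight, canvas);
        // Epipolar lines are horizontal after a correct rectification
        for (int y = 0; y < canvas.rows; y += 25)
            cv::line(canvas, {0, y}, {canvas.cols - 1, y},
                     cv::Scalar(0, 255, 0), 1);
        cv::imshow("rectification check", canvas);
        cv::waitKey(0);
    }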

How to calibrate cameras for StereoBM depth map

Is there any documentation or an example of how to use/calibrate OpenCV's StereoBM with two arbitrary cameras, i.e. with an arbitrary baseline B and focal length f? There's a [simple example](http://docs.opencv.org/trunk/dd/d53/tutorial_py_depthmap.html) in the tutorials that shows some minimal code, but it doesn't explain how to configure it for your own B and f; it assumes a pre-calibrated setup which likely doesn't match your own. I tried running the example code on two images taken from some webcams placed 2 inches apart, and it just returns a completely gray screen, which I take to mean the default StereoBM assumes a completely different calibration.
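For reference, a condensed C++ sketch (OpenCV 3 API, not the tutorial's code) of the pipeline that has to run before StereoBM gives sensible output on your own rig. B and f are not parameters of StereoBM itself; they get baked into the rectification produced by stereoCalibrate/stereoRectify. The chessboard correspondences are assumed to be gathered beforehand, and the input images to be 8-bit grayscale.

    #include <opencv2/opencv.hpp>
    #include <vector>

    void depthFromOwnRig(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                         const std::vector<std::vector<cv::Point2f> >& imagePoints1,
                         const std::vector<std::vector<cv::Point2f> >& imagePoints2,
                         const cv::Mat& left, const cv::Mat& right)
    {
        cv::Size sz = left.size();

        // Per-camera intrinsics first...
        cv::Mat K1, D1, K2, D2;
        std::vector<cv::Mat> rv, tv;
        cv::calibrateCamera(objectPoints, imagePoints1, sz, K1, D1, rv, tv);
        cv::calibrateCamera(objectPoints, imagePoints2, sz, K2, D2, rv, tv);

        // ...then the extrinsics between the cameras (the baseline B lives in T)
        cv::Mat R, T, E, F;
        cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                            K1, D1, K2, D2, sz, R, T, E, F,
                            cv::CALIB_FIX_INTRINSIC);

        // Rectification (the f of the rectified pair lives in P1/P2)
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, sz, R, T, R1, R2, P1, P2, Q);

        cv::Mat m1x, m1y, m2x, m2y, rl, rr, disp;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, sz, CV_32FC1, m1x, m1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, sz, CV_32FC1, m2x, m2y);
        cv::remap(left,  rl, m1x, m1y, cv::INTER_LINEAR);
        cv::remap(right, rr, m2x, m2y, cv::INTER_LINEAR);

        // Only now does block matching make sense
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
        bm->compute(rl, rr, disp);   // CV_16S disparity, scaled by 16
    }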

OpenCV Stereo Calibration of IR-Cameras

I have two webcams with the IR blocking filters removed and visible light blocking filters applied, so both cameras can only see IR light. Therefore I cannot calibrate the stereo cameras by observing points on a chessboard (because I cannot see the chessboard). Instead, I had the idea to use a number of IR LEDs as a tracking pattern; I could attach the LEDs to a chessboard, for instance. AFAIK, the OpenCV stereoCalibrate function expects the objectPoints as well as imagePoints1 and imagePoints2, and will return both camera matrices and distortion coefficients as well as the fundamental matrix. How many points do I need to detect in my images to get the function working properly? For the fundamental matrix I know the eight-point algorithm; are 8 points enough? The problem is, I don't want to use a huge number of IR LEDs as a tracking pattern. Is there a better way to do this?
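One way to keep the LED count manageable (a hedged sketch, OpenCV 3 style): arrange the LEDs as an asymmetric circle grid and detect them with findCirclesGrid plus a blob detector tuned for bright spots. In practice you want far more than the 8-point minimum, spread over many poses, for a stable calibration.

    #include <opencv2/opencv.hpp>
    #include <vector>

    bool findLedGrid(const cv::Mat& irImage, cv::Size patternSize,
                     std::vector<cv::Point2f>& centers)
    {
        cv::SimpleBlobDetector::Params p;
        p.filterByColor = true;
        p.blobColor = 255;   // LEDs appear as bright blobs on a dark background
        cv::Ptr<cv::FeatureDetector> detector = cv::SimpleBlobDetector::create(p);

        // An asymmetric grid (e.g. patternSize = 4x11) resolves orientation
        return cv::findCirclesGrid(irImage, patternSize, centers,
                                   cv::CALIB_CB_ASYMMETRIC_GRID, detector);
    }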

stereo calibrate using pre-defined intrinsics

I have a stereo camera rig. Each individual camera has a pre-calculated matrix:

    fx  0  cx
    0  fy  cy
    0   0   1

and the camera output is already rectified, so the checkerboard images will be as well. I now need the relative transformation between the two cameras. Is it possible to give stereoCalibrate the intrinsic matrix info that I already have, and just get the R and T matrices back? How do I go about this? I have tried manually adding the matrix info:

    Matx33f M1(716.917, 0, 328.413, 0, 716.917, 240.039, 0, 0, 1);
    Matx33f M2(775.737, 0, 328.381, 0, 775.737, 240.516, 0, 0, 1);

but it gives me grey image output and a reprojection error of 1.#IND (NaN). I am using the stereo_calib example, and the relevant code I have is:

    Mat cameraMatrix[2], distCoeffs[2];
    cameraMatrix[0] = Mat::eye(3, 3, CV_64F);
    cameraMatrix[1] = Mat::eye(3, 3, CV_64F);
    Mat R, T, E, F;

    double rms = stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
        cameraMatrix[0], distCoeffs[0],
        cameraMatrix[1], distCoeffs[1],
        imageSize, R, T, E, F,
        TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5),
        CV_CALIB_FIX_ASPECT_RATIO + CV_CALIB_ZERO_TANGENT_DIST +
        CV_CALIB_SAME_FOCAL_LENGTH + CV_CALIB_RATIONAL_MODEL +
        CV_CALIB_FIX_K3 + CV_CALIB_FIX_K4 + CV_CALIB_FIX_K5);
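A sketch of what would need to change, assuming the 2.4-era stereo_calib sample's variable names: fill cameraMatrix[0]/[1] with the known intrinsics as CV_64F Mats (instead of leaving them as identity matrices next to unused Matx33f objects), keep the distortion coefficients at zero since the feed is pre-rectified, and pass CV_CALIB_FIX_INTRINSIC so stereoCalibrate only solves for R and T.

    // Known intrinsics, copied into the Mats the sample actually passes in
    cameraMatrix[0] = (cv::Mat_<double>(3, 3) <<
        716.917, 0, 328.413,
        0, 716.917, 240.039,
        0, 0, 1);
    cameraMatrix[1] = (cv::Mat_<double>(3, 3) <<
        775.737, 0, 328.381,
        0, 775.737, 240.516,
        0, 0, 1);
    // Output is already rectified, so assume zero distortion
    distCoeffs[0] = cv::Mat::zeros(1, 5, CV_64F);
    distCoeffs[1] = cv::Mat::zeros(1, 5, CV_64F);

    double rms = stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
        cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
        imageSize, R, T, E, F,
        TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5),
        CV_CALIB_FIX_INTRINSIC);   // only R and T are estimated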

Position and rotation of two cameras: which functions do I need, and in what order?

I would like to compute the relative position and rotation of two cameras with OpenCV. I use two images with dots. Here are the two images. ![image description](http://www.bilder-upload.eu/thumb/f8192d-1434634935.jpg) http://www.bilder-upload.eu/show.php?file=f8192d-1434634935.jpg ![image description](http://www.bilder-upload.eu/thumb/3bca07-1434634992.jpg) http://www.bilder-upload.eu/show.php?file=3bca07-1434634992.jpg I know the horizontal value (X) and vertical value (Y) of each point in the two images, and from these I would like to compute the relative position and rotation of the two cameras. I tested with Blender whether this is possible: with motion tracking, Blender was able to compute the relative position and rotation from 8 points or more, and the result is very accurate. Here is the 3D view of my Blender test. ![image description](http://www.bilder-upload.eu/thumb/3a4c70-1434635023.jpg) http://www.bilder-upload.eu/show.php?file=3a4c70-1434635023.jpg I found many OpenCV functions, but I do not know which ones I need. stereoCalibrate? findFundamentalMat? Which functions do I need, and in what order?
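One plausible order of calls (a sketch against the OpenCV 3.x API, assuming the intrinsics, here focal length and principal point, are at least roughly known): findEssentialMat on the point correspondences, then recoverPose. Note that two views only determine the translation direction, not its magnitude.

    #include <opencv2/opencv.hpp>
    #include <vector>

    void relativePose(const std::vector<cv::Point2f>& pts1,
                      const std::vector<cv::Point2f>& pts2,
                      double focal, cv::Point2d pp)
    {
        cv::Mat mask;
        // Essential matrix from 8+ correspondences, with RANSAC outlier rejection
        cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp,
                                         cv::RANSAC, 0.999, 1.0, mask);
        cv::Mat R, t;
        // Decompose E and pick the physically valid solution via cheirality
        cv::recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
        // R, t: pose of camera 2 relative to camera 1; |t| = 1 (scale unknown)
    }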

stereoRectify produces strange image distortion

Hello. I am trying to compute a point cloud from two cameras:

1. Find chessboard corners using **findChessboardCorners**
2. Calibrate the cameras using **stereoCalibrate**
3. Calculate the rectification transforms using **stereoRectify**
4. Calculate the remap maps using **initUndistortRectifyMap**
5. Remap the images

And I got very bad results: ![image description](http://i.imgur.com/hO0QgmD.jpg) Sometimes it is not quite as bad, like this: ![image description](http://i.imgur.com/mfRUZqL.jpg) Is there any way to fix it? This is my code (also available on [GitHub](https://github.com/Garrus007/Vision3D)):

    // left - vector of grayscale images from the left camera
    // right - vector of grayscale images from the right camera
    // patternSize - number of chessboard corners
    StereoCalibData StereoVision::Calibrate(const std::vector<cv::Mat>& left,
                                            const std::vector<cv::Mat>& right,
                                            cv::Size patternSize)
    {
        cv::Size imSize;

        // Image points
        std::vector<cv::Point2f> imagePointsLeftSingle, imagePointsRightSingle;
        std::vector<std::vector<cv::Point2f>> imagePointsLeft, imagePointsRight;

        // Object points
        std::vector<cv::Point3f> objectPointsSingle;
        std::vector<std::vector<cv::Point3f>> objectPoints;

        // stereoCalibrate output
        cv::Mat CM1(3, 3, CV_64FC1), CM2(3, 3, CV_64FC1); // Camera matrices
        cv::Mat D1, D2; // Distortion
        cv::Mat R,      // Rotation matrix
                T,      // Translation vector
                E,      // Essential matrix
                F;      // Fundamental matrix

        // stereoRectify output
        cv::Mat R1, R2, P1, P2;
        cv::Mat Q;

        // 1. --------------------------- Find chessboard corners ---------------------------
        for (int i = 0; i < left.size(); i++)
        {
            // Finding chessboard corners
            bool isFoundLeft = cv::findChessboardCorners(left[i], patternSize, imagePointsLeftSingle,
                CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
            bool isFoundRight = cv::findChessboardCorners(right[i], patternSize, imagePointsRightSingle,
                CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);

            imSize = left[i].size();

            cornerSubPix(left[i], imagePointsLeftSingle, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
            cornerSubPix(right[i], imagePointsRightSingle, cv::Size(11, 11), cv::Size(-1, -1),
                cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

            // Sometimes chessboard corners are found in the wrong direction
            // (as vertical lines), so I "rotate" them
            _fixChessboardCorners(imagePointsLeftSingle, patternSize);
            _fixChessboardCorners(imagePointsRightSingle, patternSize);

            // Add to the corner vectors
            imagePointsLeft.push_back(imagePointsLeftSingle);
            imagePointsRight.push_back(imagePointsRightSingle);
        }

        // 2. --------------------------- Compute object points ---------------------------
        for (int j = 0; j

Get [R|t] of two Kinect cameras

My aim is to determine the correct transformation between the coordinate systems of two Kinect cameras, based on chessboard patterns (the base unit would be metres). I am basically using the stereo_calib.cpp sample (with the chessboard unit set correctly). ![image description](http://i.imgbox.com/TOoJ6Lqg.png) ![image description](http://i.imgbox.com/UVmQjo5c.png) Using 5 pairs, the reprojection error is 3.32.

    R: !!opencv-matrix
       rows: 3
       cols: 3
       dt: d
       data: [ -1.2316533437904904e-002, 5.2568656248225909e-001,
               -8.5058917288527658e-001, -5.7551867406562152e-001,
               6.9190195114274877e-001, 4.3594718235883412e-001,
               8.1769588405826155e-001, 4.9489931100219109e-001,
               2.9402059989690299e-001 ]
    T: !!opencv-matrix
       rows: 3
       cols: 1
       dt: d
       data: [ 1.2315362401819163e+000, -9.6293867563953039e-001,
               1.5791089847312716e+000 ]

Having a valid point pair, P_k1 = {0.01334677, -0.3134326, 2.604} and P_k2 = {-0.9516979, -0.3950531, 2.483}, the given R and T do not seem to be right. I am assuming the formula P_k2 = R*P_k1 + T. Any idea where the error comes from, or how to improve my results?
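A quick numerical check of the convention, plugging in the values from the question: apply P_k2 ≈ R·P_k1 + T and look at the residual. With a reprojection error of 3.32 over only 5 pairs, the extrinsics are unlikely to be usable; errors well below 1 are what a good chessboard set normally gives.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::Matx33d R(-1.2316533437904904e-002,  5.2568656248225909e-001, -8.5058917288527658e-001,
                      -5.7551867406562152e-001,  6.9190195114274877e-001,  4.3594718235883412e-001,
                       8.1769588405826155e-001,  4.9489931100219109e-001,  2.9402059989690299e-001);
        cv::Vec3d T(1.2315362401819163, -0.96293867563953039, 1.5791089847312716);
        cv::Vec3d P_k1(0.01334677, -0.3134326, 2.604);
        cv::Vec3d P_k2(-0.9516979, -0.3950531, 2.483);

        cv::Vec3d predicted = R * P_k1 + T;   // where the transform puts the point
        std::cout << "predicted: " << predicted << "\n"
                  << "residual:  " << cv::norm(predicted - P_k2) << std::endl;
        return 0;
    }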

OpenCV Error: Assertion failed (abs_max < threshold) in stereoCalibrate

I am trying to run fisheye::stereoCalibrate, but I run into an exception for

    CV_Assert(abs_max < threshold); // bad stereo pair

This error occurs when running on multiple images; if I remove one particular image from the list, the exception is not thrown. However, when running fisheye::stereoCalibrate on just that image alone, I don't get any exception either. Any suggestions as to why this might be happening? I have computed the intrinsics for each camera individually with a reprojection error on the order of 0.1, so I don't think that is the issue.

What precision do you get with stereoCalibrate?

Hi guys, I have a stereo rig that I am trying to calibrate, but I am having trouble with my precision, so I would like to ask what level of precision is normal when working with stereo calibration. I feel that my results are consistently 10 degrees off direction-wise and a couple of centimetres off translation-wise. I'm using a 6x8 checkerboard with squares of around 22 mm, and the dataset I am working from is around 50 pictures. What results have you achieved using stereo calibration? I am using an external model for the lens distortion (Ocam_calib), so I am passing in idealised coordinates and fixing the intrinsics matrix. This might be the source of my problem, but I am looking into other sources as well. Any ideas? Kind regards, Jesper

What is the origin of the world coordinates after stereo rectification?

How can I set where the origin point is after stereo rectification and disparity map computation, and what are the X, Y and Z axes? Thanks a lot!
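For orientation, a short sketch: reprojectImageTo3D with the Q matrix from stereoRectify returns points in the rectified *left* camera's frame, with the origin at its optical centre, X to the right, Y downward, and Z forward along the optical axis (the usual OpenCV camera convention).

    #include <opencv2/opencv.hpp>

    void disparityTo3D(const cv::Mat& disparity16, const cv::Mat& Q)
    {
        cv::Mat disp32, xyz;
        // StereoBM/StereoSGBM output is fixed-point, scaled by 16
        disparity16.convertTo(disp32, CV_32F, 1.0 / 16.0);
        cv::reprojectImageTo3D(disp32, xyz, Q, true);
        // xyz.at<cv::Vec3f>(y, x) is the 3D point for pixel (x, y),
        // expressed in the rectified left camera's coordinate frame
    }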

Stereo calibration

I have been trying stereo calibration. I have taken chessboard images from two different cameras, which are kept on a level table. I think the individual calibration of the two cameras is correct, because they are the same mobile phone model and the resulting focal lengths are similar. But I have doubts about the stereo calibration results, because my translation vector is non-zero in all three directions, with large values. The cameras are just kept on a level table, side by side, so the z component of the translation vector should be zero, right?

fisheye model not working with ROI?

I have an ultra-wide-angle stereo camera setup and am trying stereo calibration with the fisheye model in 2.4.11. However, [fisheye stereo calibration](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#fisheye-stereorectify) doesn't provide a roi parameter. I use input images like: ![image description](/upfiles/14452272073360014.jpg) I calibrate each camera with

    cv::fisheye::calibrate(objectPoints, imagePoints[0], imageSize,
        cameraMatrix[0], distCoeffs[0], rvecs, tvecs,
        fisheye::CALIB_RECOMPUTE_EXTRINSIC + fisheye::CALIB_CHECK_COND + fisheye::CALIB_FIX_SKEW,
        TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));

which produces: ![image description](/upfiles/14452272839561073.jpg) Then I stereo calibrate with

    double rms = cv::fisheye::stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
        cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1], imageSize, R, T,
        fisheye::CALIB_FIX_INTRINSIC + fisheye::CALIB_USE_INTRINSIC_GUESS +
        fisheye::CALIB_CHECK_COND + fisheye::CALIB_FIX_SKEW,
        TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 200, 1e-5));

and the rectified images look like: ![image description](/upfiles/14452274962531647.jpg) Obviously there is a blank region in the rectified images, and the FOV is bigger than in the individually calibrated results. So, how should I get the roi of the fisheye stereo calibration?
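fisheye::stereoRectify indeed has no ROI outputs. What it does offer (a sketch reusing the question's variables; the numeric values here are guesses to tune, not recommendations) are the newImageSize, balance, and fov_scale arguments, which control how much of the original field of view survives. A fov_scale below 1 enlarges the new focal length, which zooms in and crops away the blank border instead of reporting an explicit ROI.

    cv::Mat R1, R2, P1, P2, Q;
    cv::fisheye::stereoRectify(cameraMatrix[0], distCoeffs[0],
                               cameraMatrix[1], distCoeffs[1],
                               imageSize, R, T, R1, R2, P1, P2, Q,
                               cv::CALIB_ZERO_DISPARITY,
                               imageSize,
                               0.0,    // balance: 0 favours well-defined pixels
                               0.6);   // fov_scale < 1 zooms in, trimming blanks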

How can I use triangulatePoints() if I have only CameraParams?

How can I get the 3x4 projection matrices of the first and second cameras to use [`triangulatePoints()`](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=calibratecamera#triangulatepoints) if I only have a [`struct detail::CameraParams`](http://docs.opencv.org/2.4/modules/stitching/doc/camera.html?highlight=cameraparams#detail::CameraParams) for each of the two cameras, which I got from [`detail::Estimator`](http://docs.opencv.org/2.4/modules/stitching/doc/motion_estimation.html?highlight=homographybasedestimator#)?
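A sketch of assembling P = K·[R|t] from the CameraParams fields. One caveat: the stitching estimators assume a rotation-only camera model, so the stored t is typically zero, and triangulation from such parameters is ill-posed unless a real translation is supplied from elsewhere.

    #include <opencv2/opencv.hpp>
    #include <opencv2/stitching/detail/camera.hpp>

    cv::Mat projectionFromCameraParams(const cv::detail::CameraParams& cam)
    {
        cv::Mat K, R, t;
        cam.K().convertTo(K, CV_64F);   // 3x3 intrinsics from focal/ppx/ppy
        cam.R.convertTo(R, CV_64F);     // 3x3 rotation (often stored as CV_32F)
        cam.t.convertTo(t, CV_64F);     // 3x1 translation (often zero, see caveat)

        cv::Mat Rt;
        cv::hconcat(R, t, Rt);          // 3x4 [R | t]
        return K * Rt;                  // 3x4 projection matrix for triangulatePoints
    }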

Stereo calibration sample producing zoomed-out rectification

I have the stereo calibration sample working perfectly using the supplied chessboard frames, but when using my own images, which have the same VGA resolution and similar chessboard poses, the resultant rectified images are reduced to the centre area of the VGA frame, i.e. as if the camera has zoomed out; both are also rotated about 20 degrees counterclockwise. I suppose the R and T values from stereoCalibrate are causing this, but why? My own camera-grabbed images have been made to closely match the sample images and look almost identical. The only obvious difference is that my chessboard is 10 x 7 and the sample's is 8 x 6 - but I did change the board parameter to 9 x 6 (inner corners)! I'm getting an RMS error of 0.632 and an average error of 1.208. What could be causing the correctly rectified results to be scaled down, as if the camera has been pulled back?
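One knob worth checking (a hedged sketch using the sample's variable names): the free-scaling parameter alpha of stereoRectify. If I recall the sample correctly, it passes alpha = 1, which scales the result so that *all* source pixels stay visible; combined with stronger distortion or larger extrinsics, this looks exactly like the camera being pulled back. Setting alpha = 0 crops to valid pixels only.

    cv::Mat R1, R2, P1, P2, Q;
    cv::Rect validRoi1, validRoi2;
    cv::stereoRectify(cameraMatrix[0], distCoeffs[0],
                      cameraMatrix[1], distCoeffs[1],
                      imageSize, R, T, R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY,
                      0,               // alpha = 0: keep only valid pixels (zooms in)
                      imageSize, &validRoi1, &validRoi2);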

Stereo calibration using known locations

How do you calibrate a camera using known 3D points and their corresponding 2D image locations? Using a chessboard is easy; is it the same method for real-world points? How do you then get the cameras to output data in that coordinate system, so that a point on the images gives, via triangulation, the same 3D point as was used in the calibration?
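A sketch under the assumption that both cameras are already intrinsically calibrated: solvePnP against the same known 3D points anchors each camera in the world frame, the two poses give the relative transform, and triangulated points then come out directly in the coordinate system used for calibration.

    #include <opencv2/opencv.hpp>
    #include <vector>

    void posesFromKnownPoints(const std::vector<cv::Point3f>& worldPts,
                              const std::vector<cv::Point2f>& imgPts1,
                              const std::vector<cv::Point2f>& imgPts2,
                              const cv::Mat& K1, const cv::Mat& D1,
                              const cv::Mat& K2, const cv::Mat& D2)
    {
        // Pose of each camera in the world frame defined by worldPts
        cv::Mat rvec1, tvec1, rvec2, tvec2;
        cv::solvePnP(worldPts, imgPts1, K1, D1, rvec1, tvec1);
        cv::solvePnP(worldPts, imgPts2, K2, D2, rvec2, tvec2);

        cv::Mat R1, R2;
        cv::Rodrigues(rvec1, R1);
        cv::Rodrigues(rvec2, R2);

        // Relative pose of camera 2 w.r.t. camera 1: x2 = R*x1 + T
        cv::Mat R = R2 * R1.t();
        cv::Mat T = tvec2 - R * tvec1;
        // Projection matrices K1*[I|0] and K2*[R|T] then triangulate in
        // camera 1's frame; apply (R1, tvec1) inverse to get world coordinates.
    }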

Python - OpenCV stereocalibration error

I'm trying to use the stereo part of OpenCV (3.0.0-dev version). I can do the full calibration for each camera, so I get the two camera matrices and distortion vectors. But when I apply the stereoCalibrate function, the calculation completes but the RMS is very high: 40! So the obtained results seem wrong, but I don't know what the problem is...

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt
    import os
    import sys, getopt
    from glob import glob
    import Image

    a = int(9)  # chessboard pattern size
    b = int(5)

    # CAM1 calibration
    if __name__ == '__main__':
        args, img_mask = getopt.getopt(sys.argv[1:], '', ['save=', 'debug=', 'square_size='])
        args = dict(args)
        img_mask = '/home/essais-2015-3/Bureau/test_22_01/calib/gauche/*.tif'
        img_names = sorted(glob(img_mask))
        debug_dir = args.get('--debug')
        square_size = float(args.get('--square_size', 1))

        pattern_size = (a, b)
        pattern_points = np.zeros((np.prod(pattern_size), 3), np.float32)
        pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2)
        pattern_points *= square_size

        obj_points = []
        img_points1 = []
        h, w = 0, 0
        for fn in img_names:
            print 'processing %s...' % fn,
            img = cv2.imread(fn, 0)
            h, w = img.shape[:2]
            found, corners = cv2.findChessboardCorners(img, pattern_size)
            if found:
                term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1)
                cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)
                if debug_dir:
                    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
                    cv2.drawChessboardCorners(vis, pattern_size, corners, found)
                    path, name, ext = splitfn(fn)
                    cv2.imwrite('%s/%s_chess.bmp' % (debug_dir, name), vis)
            if not found:
                print 'chessboard not found'
                continue
            img_points1.append(corners.reshape(-1, 2))
            obj_points.append(pattern_points)
            print 'ok'

        rms, C1, dist1, rvecs1, tvecs1 = cv2.calibrateCamera(obj_points, img_points1, (w, h), None, None)
        print "RMS:", rms
        print "camera matrix:\n", C1
        print "distortion coefficients: ", dist1.ravel()
        cv2.destroyAllWindows()

    # CAM2 calibration
    if __name__ == '__main__':
        args, img_mask = getopt.getopt(sys.argv[1:], '', ['save=', 'debug=', 'square_size='])
        args = dict(args)
        img_mask = '/home/essais-2015-3/Bureau/test_22_01/calib/droite/*.tif'
        img_names = sorted(glob(img_mask))
        debug_dir = args.get('--debug')
        square_size = float(args.get('--square_size', 1))

        pattern_size = (a, b)
        pattern_points = np.zeros((np.prod(pattern_size), 3), np.float32)
        pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2)
        pattern_points *= square_size

        obj_points = []
        img_points2 = []
        h, w = 0, 0
        for fn in img_names:
            print 'processing %s...' % fn,
            img = cv2.imread(fn, 0)
            h, w = img.shape[:2]
            found, corners = cv2.findChessboardCorners(img, pattern_size)
            if found:
                term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1)
                cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)
                if debug_dir:
                    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
                    cv2.drawChessboardCorners(vis, pattern_size, corners, found)
                    path, name, ext = splitfn(fn)
                    cv2.imwrite('%s/%s_chess.bmp' % (debug_dir, name), vis)
            if not found:
                print 'chessboard not found'
                continue
            img_points2.append(corners.reshape(-1, 2))
            obj_points.append(pattern_points)
            print 'ok'

        rms, C2, dist2, rvecs2, tvecs2 = cv2.calibrateCamera(obj_points, img_points2, (w, h), None, None)
        print "RMS:", rms
        print "camera matrix:\n", C2
        print "distortion coefficients: ", dist2.ravel()
        cv2.destroyAllWindows()

    # stereo calibration
    rms, C1, dist1, C2, dist2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points1, img_points2, C1, dist1, C2, dist2, (w, h))

Stereo calibrate, T Matrix.

I am running stereo calibration on a dual-camera setup. I get the T and R mats back, and I need to pass the baseline distance between the cameras into my stereo vision application. This should come from the T values, is that correct? The issue is, my baseline is around 23 cm, but the T mat is:

    2.2127166167184534e+01
    7.9245638499041940e-02
    1.6681677038420595e+00

Am I looking at the wrong values? Or do I need to convert this somehow? Thanks.
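A small sketch of the unit question: stereoCalibrate reports T in whatever units the object points were given in. If the chessboard corners were generated with square size 1 (a common default) and the real squares were about 10.4 mm (an assumed value, chosen so the numbers line up), then 22.13 "squares" is roughly 23 cm, consistent with the physical baseline.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        // Assumed: real edge length of one chessboard square, in metres
        const double squareSize = 0.0104;
        cv::Vec3d T(22.127166167184534, 0.079245638499041940, 1.6681677038420595);

        // stereoCalibrate reported T in "squares"; convert to metres
        double baseline = cv::norm(T) * squareSize;
        std::cout << "baseline: " << baseline << " m" << std::endl;  // ~0.231
        return 0;
    }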

Improve depth map quality

Hello all, I have been getting into computer vision and OpenCV over the last few months. What I'm doing is building a stereo camera in order to obtain information about an object (a cylinder, more or less, 4 cm in diameter and 8 cm in height) which is to be manipulated by a robot. Through the depth map I would like to obtain the 3D position of the object. But before that, I would like to obtain a decent real-time depth map. I followed the example contained in the examples folder of Emgu CV (I'm doing it in C#). There are a few questions I would like to put to you:

- How can I be sure whether the calibration process was good or not?
- For computing the depth map I use the semi-global block matching approach (the StereoSGBM method); should I apply some pre- and/or post-filtering to improve the quality of my depth map?

Below, a picture of my current depth map: ![image description](/upfiles/1456927232873121.png) Thanks in advance
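On the post-filtering question, one option worth trying (a C++ sketch using opencv_contrib's ximgproc module, available from OpenCV 3.1; Emgu CV wraps the contrib modules as well): the weighted-least-squares disparity filter, which fills holes and smooths the map while respecting image edges. The SGBM parameters below are placeholders to tune for your rig.

    #include <opencv2/opencv.hpp>
    #include <opencv2/ximgproc.hpp>

    cv::Mat filteredDisparity(const cv::Mat& left, const cv::Mat& right)
    {
        // Left and right matchers (the right one is derived from the left)
        cv::Ptr<cv::StereoSGBM> leftMatcher = cv::StereoSGBM::create(0, 64, 9);
        cv::Ptr<cv::StereoMatcher> rightMatcher =
            cv::ximgproc::createRightMatcher(leftMatcher);

        cv::Mat dispL, dispR;
        leftMatcher->compute(left, right, dispL);
        rightMatcher->compute(right, left, dispR);

        // Edge-aware WLS filtering using the left view as guidance
        cv::Ptr<cv::ximgproc::DisparityWLSFilter> wls =
            cv::ximgproc::createDisparityWLSFilter(leftMatcher);
        wls->setLambda(8000.0);
        wls->setSigmaColor(1.5);

        cv::Mat filtered;
        wls->filter(dispL, left, filtered, dispR);
        return filtered;
    }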