matlab,computer-vision,coordinate-systems,camera-calibration,matlab-cvst

RotationOfCamera2 and TranslationOfCamera2 describe the transformation from camera 1's coordinates into camera 2's coordinates. A camera's coordinate system has its origin at the camera's optical center. Its X- and Y-axes are in the image plane, and its Z-axis points out along the optical axis. Equivalently, the extrinsics of camera 1 are the identity...

c++,matlab,image-processing,matlab-cvst

I came up with the following program to segment the regions and hopefully locate the pattern of interest using template matching. I've added some comments and figure titles to explain the flow and some resulting images. Hope it helps. im = imread('sample.png'); gr = rgb2gray(im); bw = im2bw(gr, graythresh(gr)); bwsm...

matlab,computer-vision,matlab-cvst,video-tracking

In your code, the tracking_box output from blob_analyzer actually contains the areas of the blobs. When creating vision.BlobAnalysis you have enabled the area output port in addition to the centroid output. You seem to want only the centroid output to mark your objects, so you should set only CentroidOutputPort to true and...

matlab,computer-vision,matlab-cvst

Unfortunately, you cannot use handle graphics commands on a vision.VideoPlayer. However, there is a function insertShape, which lets you draw directly into the image, before you display it.
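As a minimal sketch of that idea (the file name and rectangle coordinates below are placeholders, not from the original question):

```matlab
% Sketch: burn a shape into each frame with insertShape before displaying
% it with vision.VideoPlayer. 'input.avi' and the box are placeholders.
reader = vision.VideoFileReader('input.avi');
player = vision.VideoPlayer;
while ~isDone(reader)
    frame = step(reader);
    % insertShape draws directly into the pixel data; box is [x y width height].
    frame = insertShape(frame, 'Rectangle', [50 50 100 80], ...
        'Color', 'yellow', 'LineWidth', 3);
    step(player, frame);
end
release(reader);
release(player);
```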

matlab,opencv,computer-vision,camera-calibration,matlab-cvst

The size of the board you should use depends on the distance to the camera. Ideally, you want to place the checkerboard at the same distance from the camera at which you want to do your measurements. At the same time you want to have enough data points to cover...

matlab,image-processing,computer-vision,video-processing,matlab-cvst

This problem is an active research area, and there are many possible approaches. One possibility is to train a classifier to distinguish a car from a truck. You can use this example showing how to classify digits using HOG features and an SVM classifier to get started.
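A condensed sketch of the HOG + SVM idea from that example (carFiles, truckFiles, and the cell size are assumptions; all images are assumed the same size, and fitcsvm requires the Statistics and Machine Learning Toolbox):

```matlab
% Sketch: train a binary car-vs-truck SVM on HOG features.
% carFiles and truckFiles are placeholder cell arrays of image file names.
files  = [carFiles(:); truckFiles(:)];
labels = [repmat({'car'},   numel(carFiles),  1); ...
          repmat({'truck'}, numel(truckFiles), 1)];
features = [];
for i = 1:numel(files)
    img = rgb2gray(imread(files{i}));                        % assuming RGB inputs
    features(i,:) = extractHOGFeatures(img, 'CellSize', [8 8]); %#ok<AGROW>
end
classifier = fitcsvm(features, labels);           % binary SVM: car vs truck
predicted  = predict(classifier, features(1,:));  % classify one example
```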

matrix,3d,computer-vision,augmented-reality,matlab-cvst

You can find some example code for the EPnP algorithm on this webpage. This code consists of one header file and one source file, plus one file with a usage example, so it shouldn't be too hard to include in your code. Note that this code is released for research/evaluation...

matlab,computer-vision,matlab-cvst,feature-tracking

In the if statement you detect new points: if FrameCount==30 %If 30 frame have stepped though, find new feature points disp('help') points = detectMinEigenFeatures(rgb2gray(videoFrame),'MinQuality',0.04,'FilterSize',3); points = points.Location; FrameCount=0; end Now, inside that same if you have to tell the point tracker about those new points: setPoints(tracker, points); Otherwise, your variable...

matlab,user-interface,image-processing,matlab-guide,matlab-cvst

Unfortunately there is no way to use vision.VideoPlayer in a custom GUI directly. However, here is an example of how to play a video inside a custom GUI without using vision.VideoPlayer.

matlab,tracking,face-detection,face-recognition,matlab-cvst

Try this example, which uses the Viola-Jones face detection algorithm, and the KLT (Kanade-Lucas-Tomasi) algorithm for tracking.
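A condensed sketch of the detect-then-track pattern from that example (the file name 'face.avi' is a placeholder, and the sketch assumes at least one face is detected in the first frame):

```matlab
% Sketch: Viola-Jones detection once, then KLT point tracking.
detector = vision.CascadeObjectDetector();   % Viola-Jones, frontal face by default
reader   = vision.VideoFileReader('face.avi');
frame    = step(reader);
bbox     = step(detector, frame);            % detect the face once
points   = detectMinEigenFeatures(rgb2gray(frame), 'ROI', bbox(1,:));
tracker  = vision.PointTracker('MaxBidirectionalError', 2);  % KLT tracker
initialize(tracker, points.Location, frame);
while ~isDone(reader)
    frame = step(reader);
    [pts, valid] = step(tracker, frame);     % track the KLT points frame to frame
    frame = insertMarker(frame, pts(valid,:), '+');
end
release(reader);
```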

matlab,image-processing,computer-vision,video-processing,matlab-cvst

I noticed an error you made indexing the images. BB has a variable size, so you cannot use it to linearize the indices. Instead of num2str(i+k*(size(BB,1))), I would use a counter that is incremented on each iteration.

matlab,image-processing,ocr,matlab-cvst

You can dilate the image with a vertical line structuring element in order to vertically elongate the symbol and make it look somewhat more like an N. E.g.: clear clc I=imread('N.jpg'); %// Line oriented at 90 degrees. SE = strel('line',4,90); I = imdilate(I,SE); imshow(I) r = ocr(I,'TextLayout','Word') Image: ahh now...

matlab,image-processing,video,matlab-cvst

The VideoReader object does not have a snapshot method; it has a readFrame method. Alternatively, you can use the vision.VideoFileReader object and its step() method to read video frames. See this example....
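Both approaches can be sketched as follows (the file name 'myvideo.avi' is a placeholder):

```matlab
% Sketch of the two ways to read frames from a video file.
% 1) VideoReader with readFrame (R2014b and later):
v = VideoReader('myvideo.avi');
while hasFrame(v)
    frame = readFrame(v);   % one frame per call, H-by-W-by-3 array
end
% 2) vision.VideoFileReader with step:
r = vision.VideoFileReader('myvideo.avi');
while ~isDone(r)
    frame = step(r);        % frames come back as class 'single' by default
end
release(r);
```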

matlab,image-processing,matlab-cvst,point-clouds,stereo-3d

For the 1600 x 3 sized reshaped A, you can use this - A(~any(isinf(A) | isnan(A),2),:) If the number of rows to be removed is small, you can directly remove them instead for better performance - A(any(isinf(A) | isnan(A),2),:) = []; ...

matlab,computer-vision,bounding-box,matlab-cvst

You can use the insertMarker function in the Computer Vision System Toolbox to mark the centers of the bounding boxes.
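As a sketch (assuming bboxes is an M-by-4 matrix of [x y width height] rows, the format returned by the toolbox detectors and blob analysis, and img is the image to annotate):

```matlab
% Sketch: mark the center of each bounding box with insertMarker.
% Centers are the box corners offset by half the width and height.
centers = [bboxes(:,1) + bboxes(:,3)/2, bboxes(:,2) + bboxes(:,4)/2];
marked  = insertMarker(img, centers, '+', 'Color', 'red', 'Size', 10);
imshow(marked);
```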

matlab,image-processing,kinect,matlab-cvst

According to this thread depth map boundaries can be found based on the direction of estimated surface normals. To estimate the direction of the surface normals, you can [dzx dzy] = gradient( depth_map ); %// horizontal and vertical derivatives of depth map n = cat( 3, dzx, dzy, ones(size(dzx)) );...

image,matlab,image-processing,computer-vision,matlab-cvst

The error is a bit hard to understand, but I can explain what exactly it means. When you use the CVST Connected Components Labeller, it assumes that all of your images that you're going to use with the function are all the same size. That error happens because it looks...

matlab,opencv,matlab-cvst,haar-classifier,cascade-classifier

Yes, you can. If you look at the resulting xml file, you should see a comment at the top telling you which version of OpenCV it is compatible with.

matlab,image-processing,matlab-cvst

I guess your problem is that: file_name(1).name = '.' (stands for the current directory), file_name(2).name = '..' (stands for the parent directory), file_name(3).name = your_file_name.jpg. Now, do: images = dir('C:\Users\adminp\Desktop\dinosaurs\*.jpg'); for i=1:numel(images) im=imread(strcat('C:\Users\adminp\Desktop\dinosaurs\',images(i).name)); %processing of read image end ...

matlab,machine-learning,computer-vision,svm,matlab-cvst

As lejlot correctly mentioned, an SVM cannot be trained with variable-length vectors. You can just normalize the image size to one fixed size, e.g. 256x256. There are 3 possibilities to do this: Crop the 256x256 patch around the center. Resize the image to 256x256, discarding the original aspect ratio. Resize the image to 256xM where M <...
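The three options can be sketched as follows (img is a placeholder, assumed larger than 256 pixels in both dimensions):

```matlab
% Sketch of the three normalization options for a 256x256 target size.
[h, w, ~] = size(img);
% 1) Crop the 256x256 patch around the center:
r = floor((h-256)/2); c = floor((w-256)/2);
patch = img(r+1:r+256, c+1:c+256, :);
% 2) Resize to 256x256, discarding the original aspect ratio:
sq = imresize(img, [256 256]);
% 3) Resize so the larger side becomes 256, keeping the aspect ratio:
keepAR = imresize(img, 256/max(h, w));
```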

image,matlab,camera-calibration,matlab-cvst,distortion

If you are using one of the calibration images, then all the information you need is in the cameraParams object. Let's say you are using calibration image 1, and let's call it I. First, undistort the image: I = undistortImage(I, cameraParams); Get the extrinsics (rotation and translation) for your image:...

matlab,computer-vision,tracking,matlab-cvst,opticalflow

If your camera is moving, you would have to separate the camera motion (ego motion) from the motion of the objects. There are different ways of doing that. Here is a recent paper describing an approach using the orientations of optical flow vectors.

matlab,image-processing,matching,surf,matlab-cvst

To understand it further I tried the following code in this link. % Extract SURF features I = imread('cameraman.tif'); points = detectSURFFeatures(I); [features, valid_points] = extractFeatures(I, points); % Visualize 10 strongest SURF features, including their % scales and orientation which were determined during the % descriptor extraction process. imshow(I); hold...

matlab,image-processing,matlab-cvst,connected-components

The error output you are getting can be read from the bottom up: the bottom line points to a line of your own code, and as you read the lines upwards it goes deeper into the call stack. So the top line gives the function that actually complained and the reason why. On this line...

matlab,image-processing,computer-vision,matlab-cvst

That's pretty easy to do. Once you detect the first shape, use the bounding box detected for the first object E, then insert a filled rectangle in that spot using insertShape. Make sure you set the Opacity to 1.0 so that it doesn't mix any pixels from the background into...

matlab,image-processing,computer-vision,detection,matlab-cvst

Just to clarify, are you working with a video? Is your camera stationary? In that case, you should be able to use vision.ForegroundDetector to detect anything that moves, and then use regionprops to select the blobs of the right size. If regionprops does not work for you, you may want...

matlab,image-processing,3d,matlab-cvst,stereo-3d

Try doing the following: [J1, J2] = rectifyStereoImages(I1, I2, stereoParams, 'OutputView', 'Full'); This way you will see the entire images. By default, rectifyStereoImages crops the output images to only contain the overlap between the two frames. In this case the overlap is very small compared to the disparity. What is...

matlab,computer-vision,matlab-cvst

I do not recommend using a KLT tracker for close CCTV cameras, for the following reasons:
1. CCTV frame rate is typically low, so people change their appearance significantly between frames.
2. Since the camera is close to the people, they also change their appearance over time due to...

1) You have two overlapping bounding boxes. You compute the intersection of the boxes, which is the area of the overlap. You compute the union of the overlapping boxes, which is the sum of the areas of the entire boxes minus the area of the overlap. Then you divide the...
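A sketch of that computation for two [x y width height] boxes (the coordinates are made up; rectint from base MATLAB returns the area of the intersection):

```matlab
% Sketch: intersection-over-union of two axis-aligned boxes.
a = [10 10 40 40];                     % placeholder box [x y width height]
b = [30 30 40 40];                     % placeholder box, overlapping a
inter = rectint(a, b);                 % area of the overlap
uni   = a(3)*a(4) + b(3)*b(4) - inter; % union = sum of areas minus overlap
iou   = inter / uni;
% The toolbox also provides this directly in newer releases:
% iou = bboxOverlapRatio(a, b);
```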

matlab,video,matlab-cvst,avi,split-screen

If it is just for playing the videos side-by-side, this simpler code will work: close all clc clear vid1 = vision.VideoFileReader('video1.avi'); vid2 = vision.VideoFileReader('video2.avi'); vidP = vision.VideoPlayer; while ~isDone(vid1) frame1 = step(vid1); frame2 = step(vid2); frame = horzcat(frame1, frame2); step(vidP,frame); end release(vid1); release(vid2); release(vidP); UPDATE: I'm assuming both input videos...

matlab,image-processing,computer-vision,ocr,matlab-cvst

There is an ocr function in the Computer Vision System Toolbox.

matlab,computer-vision,matlab-cvst

The problem is that after thresholding, frame is a logical array. To make the text show up, use im2uint8 to convert it to uint8. A few other pointers: since you are working with a single image rather than with a video, you can use imread instead of vision.VideoFileReader to read...

matlab,computer-vision,feature-detection,matlab-cvst

visionSupportPackages is not available in R2012; see the release notes here: http://nl.mathworks.com/help/vision/release-notes.html

matlab,image-processing,computer-vision,matlab-cvst

I think the easiest and fastest way would be to find your target and binarize the image. Afterwards, use regionprops() and read the "Orientation" property to get the orientation. If you can't use that toolbox, the functionality is easily implemented by calculating the covariance matrix of your region. Let me...

matlab,matlab-cvst,bounding-box

You simply add the coordinates of the top-left corner of the cropped region to the top-left corners of the detected bounding boxes. Also, in the latest version of MATLAB vision.CascadeObjectDetector supports passing in the region of interest where you want to detect objects, so that you do not need to...
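A sketch of the coordinate shift (x0, y0, and bboxes are assumptions: the top-left corner of the cropped region in the full image, and the M-by-4 detections in the cropped image's coordinates; the minus one accounts for MATLAB's 1-based pixel coordinates):

```matlab
% Sketch: map detections from cropped-image coordinates back to the
% full image by adding the crop's top-left corner.
bboxesFull = bboxes;
bboxesFull(:,1) = bboxes(:,1) + x0 - 1;   % shift x back into the full image
bboxesFull(:,2) = bboxes(:,2) + y0 - 1;   % shift y back into the full image
```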

image-processing,computer-vision,simulink,matlab-cvst

Well, Peter Corke provides a handy pair of MATLAB/Simulink toolboxes for Robotic Control and Machine Vision. They might not be exactly what you need, but I have found them extremely useful.

matlab,opencv,computer-vision,camera-calibration,matlab-cvst

Your adviser is correct in that both MATLAB and OpenCV use essentially the same calibration algorithm. However, MATLAB uses the Levenberg-Marquardt non-linear least squares algorithm for the optimization (see documentation), whereas OpenCV uses gradient descent. I would guess that this accounts for most of the difference in the reprojection errors....

matlab,computer-vision,matlab-cvst

You are getting this warning because you have 'Fill' set to true for greenCircle. You can use 'FillColor' and 'CustomFillColor' to set the color of a filled circle. Also, if you have MATLAB version R2014a or later you can use the insertShape function instead of vision.ShapeInserter. The function is easier...

matlab,image-processing,matrix,computer-vision,matlab-cvst

By looking at the documentation, calling extractHOGFeatures computes a 1 x N vector given an input image. Because it can be a bit cumbersome to calculate what the output size of this may be, which also depends on what parameters you set up for the HOG detector, it's best to...

matlab,computer-vision,signal-processing,matlab-cvst,pose-estimation

Try transposing K. The K that you get from estimateCameraParameters assumes row vectors post-multiplied by the matrix, while the K in most textbooks assumes column vectors pre-multiplied by the matrix.
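The convention difference can be sketched as follows (assuming cameraParams was returned by estimateCameraParameters):

```matlab
% Sketch: MATLAB's intrinsic matrix assumes row vectors post-multiplied
% by the matrix (x_row = X_row * K), so transposing it gives the textbook
% form for column vectors (x_col = K' * X_col).
K = cameraParams.IntrinsicMatrix;  % MATLAB convention: [fx 0 0; s fy 0; cx cy 1]
K_textbook = K';                   % textbook convention: [fx s cx; 0 fy cy; 0 0 1]
```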

matlab,machine-learning,computer-vision,classification,matlab-cvst

Look at Database Toolbox in Matlab. You could just save the classifier variable in a file: save('classifier.mat','classifier') And then load it before executing predict: load('classifier.mat') predictedLabels = predict(classifier, testFeatures); ...

matlab,computer-vision,camera-calibration,matlab-cvst

The Mathworks documentation on the Stereo Camera Calibration app does give specific advice on image formats: Use uncompressed images or lossless compression formats such as PNG. There's also a great deal more information on the details of what sort of images you need, under the "Image, Camera, and Pattern Preparation"...

matlab,image-processing,ocr,matlab-cvst

Because the image only contains a single character and the text is not formatted in a typical page format (dual column, single column, etc), you'll have to set the 'TextLayout' parameter to 'Word', and provide an input ROI: >> r = ocr(img,[91 89 22 37],'TextLayout','Word') r = ocrText with properties:...

matlab,opencv,computer-vision,camera-calibration,matlab-cvst

There are many possible sources of error. First of all, while all three of the calibration implementations you have tried use essentially the same algorithm, there are enough differences that explain the discrepancies in the results. The main difference is in the checkerboard corner detection. The Caltech Calibration Toolbox does...

matlab,matching,feature-detection,feature-extraction,matlab-cvst

The reason I got the error message mentioned above is that the page is for R2014a, but my MATLAB is R2012b, so it is a version problem. We just need to change the code like this: [fMatrix, epipolarInliers, status] = estimateFundamentalMatrix(matchedPoints1.Location, ... matchedPoints2.Location, 'Method', 'RANSAC', 'NumTrials', 10000, 'DistanceThreshold', ... 0.1, 'Confidence', 99.99);...

matlab,computer-vision,video-processing,matlab-cvst

clear all; close all; clc;
VidObj=VideoReader('E:\workspace\mat2012b\video compression\original.mp4');
n=VidObj.NumberOfFrames;
videoFReader = vision.VideoFileReader('original.mp4');
videoFWriter = vision.VideoFileWriter('vid_new_compressed_ffd5.avi',...
    'AudioInputPort',1,'AudioDataType','int16','VideoCompressor','ffdshow video encoder','FileFormat','avi',...
    'FrameRate',videoFReader.info.VideoFrameRate);
[audio,fs]=audioread('original.mp4');
op=floor(fs/videoFReader.info.VideoFrameRate);
for i=1:n
    videoFrame=...

matlab,computer-vision,camera-calibration,matlab-cvst

Here are the steps you need to do: Estimate the intrinsic parameters of the camera using a calibration target. You can use the MATLAB camera calibration toolbox, or http://www.vision.caltech.edu/bouguetj/calib_doc/ Take your time performing this step and make sure the calibration is correct. Calibration toolboxes will give you statistics on how good...

matlab,ocr,text-extraction,matlab-cvst,confidence-interval

The easiest way would be to create a logical index based on your threshold value: bestWordsIdx = ocrtxt.WordConfidence > 0.8; bestWords = ocrtxt.Words(bestWordsIdx) And same for Text: bestTextIdx = ocrtxt.CharacterConfidence > 0.8 bestText = ocrtxt.Text(bestTextIdx) ...

matlab,image-processing,computer-vision,matlab-cvst,bounding-box

Your problem actually isn't drawing the bounding box - it's locating the person inside the image, which you haven't quite done properly. If you don't do this correctly, then you won't be able place the correct bounding box around the person. This is what I have done to locate the...

matlab,computer-vision,matlab-cvst

If you google "gaussian blur Matlab" you'll get to the following page: http://uk.mathworks.com/help/images/ref/fspecial.html where you can see how to blur: H = fspecial('gaussian',[5 5],0.5); blurred = imfilter(Image,H,'replicate'); If you just want to blur a part of the image, extract that part, blur it, and then patch it back in!...
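The extract-blur-patch step can be sketched as follows (the region indices below are placeholders, and Image is assumed to already be loaded):

```matlab
% Sketch: blur only a rectangular region of the image, then patch it back.
H = fspecial('gaussian', [5 5], 0.5);    % same kernel as above
rows = 100:200; cols = 150:300;          % placeholder region to blur
part = Image(rows, cols, :);             % extract the region
Image(rows, cols, :) = imfilter(part, H, 'replicate');  % patch it back in
```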

image,matlab,computer-vision,face-detection,matlab-cvst

There are a few things you can try: Definitely move FaceDetect = vision.CascadeObjectDetector; outside of the loop. You only need to create the face detector object once; re-creating it for every frame is definitely your performance bottleneck. vision.VideoFileReader returns a frame of class 'single' by default. If you change...

matlab,matlab-cvst,3d-reconstruction,disparity-mapping

First question: which version of MATLAB are you using? Older releases have been using the simple block matching algorithm, which is not very robust. The latest release (R2014a) is using the Semi-Global Block Matching algorithm by default, which is much better. 'DisparityRange' depends on the distance from the camera to...

image,matlab,image-processing,matlab-cvst

Look specifically at the first part of your error: Error using images.internal.imageDisplayValidateParams>validateCData (line 119): If input is logical (binary), it must be two-dimensional. shapeInserter is expecting the input to be a 2D binary image. However, because of your repmat call, your image is a 3D binary image instead. If the image...

matlab,geometry,computer-vision,matlab-cvst,projection-matrix

If the intrinsics are not known, the result is ambiguous up to a projective transformation. In other words, if you use estimateUncalibratedRectification to rectify a pair of images, and then compute disparity and do the 3D reconstruction, then you will reconstruct the 3D scene up to a projective transformation. Straight...

matlab,center,area,bounding-box,matlab-cvst

If you set AreaOutputPort and CentroidOutputPort to true, you will get three outputs instead of one. Instead of bbox = step(blobAnalysis, filteredForeground); use [areas, centroids, bbox] = step(blobAnalysis, filteredForeground); The way you currently have it bbox ends up being a 1-D array containing the areas, which is why insertShape throws...

image-processing,classification,bayesian,prediction,matlab-cvst

There are many possible approaches to a problem like this. One common method is the bag-of-features model. Take a look at this example using the Computer Vision System Toolbox in MATLAB.