I have already used the calibration code provided by OpenCV to calibrate my camera, and everything went OK. I can also undistort any image I want by applying the parameters with a short Python script.
import numpy as np
import cv2

# copy the calibration parameters into arrays
K = np.array([[385.58130632872212, 0, 371.50000000000000], [0, 385.58130632872212, 236.50000000000000], [0, 0, 1]])
d = np.array([-0.27057628187805571, 0.10522881965331317, 0, 0, 0])  # only the first two (radial) terms are used; no tangential distortion

# read one of your images (raw string so the backslashes are not treated as escapes)
img = cv2.imread(r"C:\Users\ROS\Documents\Python\VCSBC2.jpg")
h, w = img.shape[:2]

# compute a camera matrix for the undistorted image, then undistort
newcamera, roi = cv2.getOptimalNewCameraMatrix(K, d, (w, h), 0)
newimg = cv2.undistort(img, K, d, None, newcamera)
But the problem is that I do not really know how to undistort the images during a real-time transmission. I am getting the images from my camera over TCP/IP, and I can run code on the receiving side, but I have no idea how to plug the matrix and the distortion parameters in there so that I get undistorted images in real time. Can anyone shed some light on this?
Best How To:
Hard to give advice without seeing the code you use to pull images from the camera. Generally speaking, if your frame-rate requirements are low enough, you can just grab the pixel buffers from the camera, copy each one into a cv image, and apply undistort.
At higher frame rates cv undistort may prove too slow, since it computes a nonlinear transform at every pixel before the bilinear (or bicubic) interpolation step.
You then have two choices:
Precompute warp maps. These are matrices that cache the above nonlinear computation (in two channels, for the horizontal and vertical directions separately), so it can be reused for every frame. The OpenCV implementation of this approach is somewhat lame, since it requires transformation maps of the same size as the input image, which is wasteful when the distortion is smooth and moderate enough everywhere that one could get away with downsampling it. In those cases, for large enough images and frame rates, the lookups in a full-size map are wasteful and may become the bottleneck. If you roll your own warp-map implementation with downsampling, take care that the sample rate is high enough to guarantee correct distortion everywhere (especially at the image boundaries). This typically has the effect that the warp maps end up "denser" than they need to be, since distortion usually grows steeper at the image boundaries than at the center. Still, it is a simple and often "good enough" approach, and many professional applications use it extensively (e.g. Shake).
Use a non-uniform piecewise-linear approximation. Here the idea is to subdivide the image canvas with a quadtree until the error of approximating the nonlinear warp in each quad, using the homography induced by warping just the quad's own vertices, falls below a threshold (e.g. 1/10 of a pixel). The advantage is that linearly warping a quad is fast and can be implemented inside the interpolation loop. With moderate distortion this technique needs only a few levels of quadtree, and with a straightforward implementation on top of a graphics library (e.g. OpenGL) it is quite easy to achieve high frame rates at large resolutions. I personally used this technique starting about ten years ago, and could easily dewarp images at 60 FPS at HD video resolution.