Imaging Components, Systems, and Processing

Correction of image radial distortion based on division model

Author Affiliations
Fanlu Wu, Xiangjun Wang

Tianjin University, State Key Laboratory of Precision Measuring Technology and Instruments, 92 Weijin Road, Nankai District, Tianjin 300072, China

Tianjin University, Key Laboratory of MOEMS of the Ministry of Education, 92 Weijin Road, Nankai District, Tianjin 300072, China

Hong Wei

University of Reading, Department of Computer Science, Whiteknights, Reading RG6 6AY, United Kingdom

Opt. Eng. 56(1), 013108 (Jan 24, 2017). doi:10.1117/1.OE.56.1.013108
History: Received October 14, 2016; Accepted January 5, 2017


Abstract. This paper presents an approach for estimating and then removing image radial distortion. It works on a single image and does not require a special calibration pattern. The approach is extremely useful in many applications, particularly those where human-made environments contain abundant lines. A division model is applied, in which a straight line in the distorted image is treated as a circular arc. The Levenberg–Marquardt (LM) iterative nonlinear least squares method is adopted to calculate the arc’s parameters, and “Taubin fit” is applied to obtain the initial guess of the arc’s parameters, which serves as the initial input to the LM iteration. This dramatically improves the convergence rate of the LM process in obtaining the parameters required for correcting image radial distortion. Hough entropy, based on the probability distribution in the one-dimensional θ Hough space, is used as a measure for quantitative evaluation of the estimated distortion. The experimental results on both synthetic and real images demonstrate that the proposed method can robustly estimate and then remove image radial distortion with high accuracy.


Lens distortion is usually classified into three types: radial distortion, decentering distortion, and thin prism distortion.1–4 In practice, for most lenses, the radial distortion component is predominant.5,6 It may appear as barrel distortion or pincushion distortion. Radial distortion bends straight lines into circular arcs,6 violating the main invariance preserved in the pinhole camera model, in which straight lines in the world map to straight lines in the image plane.7 Radial distortion is the most significant type of distortion in today’s cameras.5,8

Methods used for obtaining the parameters of the radial distortion function for correcting distorted images can be divided roughly into two major categories: multiple views methods9–14 and single view methods.6,7,15,16 For the multiple views approach, the most widely used offline calibration software is the toolbox provided by Jean-Yves Bouguet.17 It performs calibration after the images are imported, with a lens distortion model that includes seven parameters, which is sufficient for most kinds of cameras. Although the multiple views approach does not require special conditions in the scene (for example, straight lines), which gives it a wide range of applications, its disadvantage is that it requires multiple images, which are not available in many cases.6 In the past decades, many methods for radial distortion estimation have been proposed.18–26 Bukhari and Dailey21 proposed a method for automatic radial distortion estimation based on the plumb-line approach. They compared statistical analyses of how different circle fitting methods contribute to accurate distortion parameter estimation and provided qualitative results on a wide variety of challenging real images. Alvarez et al.22,23 proposed an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image; the distortion parameters are obtained by minimizing a polynomial of total degree four in several variables. Lens distortion can also be estimated with the one-parameter division model, which allows the problem to be stated within the Hough transform scheme by adding a distortion parameter so that straight lines are better extracted from the image.24–26 RGB-D cameras, such as the Microsoft Kinect, have become widely used in perceptual computing applications. To utilize the full potential of RGB-D devices, calibration must be performed to determine the intrinsic and extrinsic parameters of the color and depth sensors and to reduce lens and depth distortion. Early work on calibration of RGB-D devices includes Herrera’s method27 and Smisek’s method.28

In this study, our method, which is based on the use of distorted straight lines, falls into the second category. The method works on a single image in which at least three distorted straight lines exist and does not require a calibration pattern. The Brown model29–33 is most commonly used to describe lens distortion, and it works best for lenses with small distortions. However, when the distortion becomes large, it may not be satisfactory. We use Fitzgibbon’s division model9 of radial distortion with a single parameter. The division model is capable of expressing large distortion at a much lower order. Hartley and Kang argued that the usual assumption that the distortion center is at the image center is not safe.13 Our method also computes the center of radial distortion, which is important in obtaining optimal results.

The rest of this paper is structured as follows. Section 2 describes the distortion model and the estimation of distortion parameters. In Sec. 3, a detailed quantitative study of the performance evaluation on both synthetic and real images is presented. Finally, conclusions are drawn in Sec. 4.

Distortion Models

The Brown model that is most commonly used to describe lens distortion can be written as

$$
\begin{cases}
x_u = (x_d - x_0)\left(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots\right) + \left(1 + p_3 r_d^2 + \cdots\right)\left\{p_1\left[r_d^2 + 2(x_d - x_0)^2\right] + 2p_2 (x_d - x_0)(y_d - y_0)\right\},\\[4pt]
y_u = (y_d - y_0)\left(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots\right) + \left(1 + p_3 r_d^2 + \cdots\right)\left\{p_2\left[r_d^2 + 2(y_d - y_0)^2\right] + 2p_1 (x_d - x_0)(y_d - y_0)\right\},
\end{cases} \tag{1}
$$

where $(x_u, y_u)$ and $(x_d, y_d)$ are the corresponding coordinates of an undistorted point and a distorted point in an image, respectively, and $r_d = \sqrt{(x_d - x_0)^2 + (y_d - y_0)^2}$ is the Euclidean distance of the distorted point to the distortion center $(x_0, y_0)$.

According to Zhang,5 the radial distortion is predominant. The most commonly used radial distortion model can be written as

$$
\begin{cases}
x_u = x_d\left(1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots\right),\\
y_u = y_d\left(1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots\right),
\end{cases} \tag{2}
$$

supposing that the distortion center $(x_0, y_0)$ is the center of the image. This model works best for lenses with small distortions. However, when the distortion becomes large, it may not be satisfactory, and many other factors have to be taken into account in practice.6

Fitzgibbon9 proposed the division model as

$$
\begin{cases}
x_u = \dfrac{x_d}{1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots},\\[6pt]
y_u = \dfrac{y_d}{1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots}.
\end{cases} \tag{3}
$$

The most remarkable advantage of the division model over the Brown model is that it is able to express a large distortion at a much lower order. In particular, for many cameras, a single parameter would suffice.9,10 In our study, we use the single-parameter division model

$$
\begin{cases}
x_u = \dfrac{x_d}{1 + \lambda r_d^2},\\[6pt]
y_u = \dfrac{y_d}{1 + \lambda r_d^2}.
\end{cases} \tag{4}
$$
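As an illustration of Eq. (4), the following minimal Python sketch (our own example, not part of the original paper) maps distorted pixel coordinates to undistorted ones for an assumed distortion center and parameter λ:

```python
import numpy as np

def undistort_points(points_d, center, lam):
    """Apply the single-parameter division model of Eq. (4):
    x_u = x_d / (1 + lam * r_d^2),  y_u = y_d / (1 + lam * r_d^2),
    with coordinates taken relative to the distortion center."""
    pts = np.asarray(points_d, dtype=float) - center   # shift to the distortion center
    r2 = np.sum(pts**2, axis=1, keepdims=True)         # squared radius r_d^2 of each point
    return pts / (1.0 + lam * r2) + center             # correct and shift back

# Hypothetical example: a barrel-distorted 640x480 image (negative lambda).
center = np.array([320.0, 240.0])
print(undistort_points([[600.0, 50.0], [320.0, 240.0]], center, lam=-1.0e-6))
```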

Distortion of a Straight Line

Under the single-parameter division model, the distorted image of a straight line can be treated as a circular arc.6 The equation of a straight line is expressed as

$$
A x_u + B y_u + C = 0. \tag{5}
$$

From Eq. (4), we have

$$
A\,\frac{x_d}{1 + \lambda r_d^2} + B\,\frac{y_d}{1 + \lambda r_d^2} + C = 0, \tag{6}
$$

then we obtain a circle equation

$$
x_d^2 + y_d^2 + \frac{A}{C\lambda} x_d + \frac{B}{C\lambda} y_d + \frac{1}{\lambda} = 0. \tag{7}
$$

If $(x_0, y_0)$ is the center of radial distortion, we have

$$
(x_d - x_0)^2 + (y_d - y_0)^2 + \frac{A}{C\lambda}(x_d - x_0) + \frac{B}{C\lambda}(y_d - y_0) + \frac{1}{\lambda} = 0. \tag{8}
$$

Let $D = \frac{A}{C\lambda} - 2x_0$, $E = \frac{B}{C\lambda} - 2y_0$, and $F = x_0^2 + y_0^2 - \frac{A}{C\lambda}x_0 - \frac{B}{C\lambda}y_0 + \frac{1}{\lambda}$; then we have

$$
x_d^2 + y_d^2 + D x_d + E y_d + F = 0, \tag{9}
$$

and

$$
x_0^2 + y_0^2 + D x_0 + E y_0 + F - \frac{1}{\lambda} = 0. \tag{10}
$$

Equation (9) indicates that a group of parameters (D, E, F) can be determined by fitting a circle to an arc extracted from the image. The circular arc in the image is projected from a straight line in the world. By extracting at least three arcs and determining three groups of parameters (D, E, F), the distortion center can be estimated by solving the linear equations

$$
\begin{cases}
(D_1 - D_2)x_0 + (E_1 - E_2)y_0 + (F_1 - F_2) = 0,\\
(D_1 - D_3)x_0 + (E_1 - E_3)y_0 + (F_1 - F_3) = 0,
\end{cases} \tag{11}
$$

and an estimate of $\lambda$ can be obtained from

$$
\lambda = \frac{1}{x_0^2 + y_0^2 + D x_0 + E y_0 + F}. \tag{12}
$$
When more than three arcs are extracted from an image and their parameters (D, E, F) are determined, the parameters (x0, y0, λ) can be obtained based on the Levenberg–Marquardt (LM) scheme. Although the method requires at least three distorted lines residing in an image, it can cope with a situation in which fewer lines are found by adding more images taken by the same camera with different capturing angles. As long as there is a line involved in the scene, the method is applicable.
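As an illustration of Eqs. (11) and (12), the following Python sketch (our own example; the paper's LM refinement over additional arcs is not reproduced here) computes the closed-form initialization of (x0, y0, λ) from three or more fitted circles:

```python
import numpy as np

def initial_distortion_from_circles(circles):
    """Closed-form initialization of (x0, y0, lambda) from the (D, E, F)
    parameters of at least three fitted circles, Eqs. (11) and (12)."""
    (D1, E1, F1), (D2, E2, F2), (D3, E3, F3) = circles[:3]
    # Eq. (11): linear system for the distortion center (x0, y0).
    A = np.array([[D1 - D2, E1 - E2],
                  [D1 - D3, E1 - E3]])
    b = -np.array([F1 - F2, F1 - F3])
    x0, y0 = np.linalg.solve(A, b)
    # Eq. (12): lambda from each circle; averaging over the circles is our
    # own choice here (the paper refines with LM when more arcs exist).
    lams = [1.0 / (x0**2 + y0**2 + D * x0 + E * y0 + F) for D, E, F in circles]
    return float(x0), float(y0), float(np.mean(lams))
```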

Method of Circle Fitting

To find arcs, we first extract edges using the Canny operator. Then we track all the edge points associated with a starting point. From a given starting point, we track in one direction, storing the coordinates of the edge points in an array and labeling the pixels in the edge image. When no more connected points are found, we return to the start point and track in the opposite direction. Finally, the overall number of edge points found is checked, and the edge is ignored if it is too short.
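A simplified stand-in for this step is sketched below. It uses OpenCV's Canny detector and contour following in place of the bidirectional tracking described above, so it is only an approximation of the procedure, and the thresholds are assumptions:

```python
import cv2  # OpenCV >= 4 assumed for the findContours return signature

def extract_edge_chains(image_gray, min_length=100, canny_low=50, canny_high=150):
    """Detect edges and group them into connected point chains, discarding
    chains that are too short to be useful arcs. Contour following replaces
    the bidirectional edge tracking described in the text."""
    edges = cv2.Canny(image_gray, canny_low, canny_high)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    chains = [c.reshape(-1, 2).astype(float) for c in contours]  # (N, 2) pixel coordinates
    return [c for c in chains if len(c) >= min_length]
```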

After the initial arc identification process, an initial guess of parameters is assigned to each resulting arc, followed by the LM iterative nonlinear least squares method to produce the optimized parameters. Taubin fit is used for the initial guess.34 It uses four parameters to specify a circle: $a(x^2 + y^2) + bx + cy + d = 0$, with $a \neq 0$. The center of the circle is $\left(-\frac{b}{2a}, -\frac{c}{2a}\right)$ and the radius is given by $r = \sqrt{\left(\frac{b}{2a}\right)^2 + \left(\frac{c}{2a}\right)^2 - \frac{d}{a}}$. It minimizes the objective function $\Omega(a, b, c, d) = \sum_{i=1}^{N}\left(a x_i^2 + a y_i^2 + b x_i + c y_i + d\right)^2$, subject to the constraint $4a^2\bar{z} + 4ab\bar{x} + 4ac\bar{y} + b^2 + c^2 = 1$, where $\bar{x}$ is the mean of the points' $x$ coordinates, $\bar{y}$ is the mean of the points' $y$ coordinates, and $\bar{z} = \frac{1}{N}\sum_{i=1}^{N}(x_i^2 + y_i^2)$. The objective function for the LM fit35 is $\Omega(x_c, y_c, r) = \sum_{i=1}^{N}(r_i - r)^2$, where $r_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$.
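A compact Python sketch of these two fitting steps follows (our own illustration, not the authors' code; SciPy is assumed for the generalized eigenvalue problem and the LM solver):

```python
import numpy as np
from scipy.linalg import eig
from scipy.optimize import least_squares

def taubin_fit(pts):
    """Taubin circle fit: minimize sum_i (a(x_i^2+y_i^2) + b x_i + c y_i + d)^2
    subject to 4 a^2 zbar + 4 a b xbar + 4 a c ybar + b^2 + c^2 = 1."""
    x, y = pts[:, 0], pts[:, 1]
    z = x**2 + y**2
    Z = np.column_stack([z, x, y, np.ones_like(x)])
    M = Z.T @ Z / len(x)                      # data matrix of the algebraic residuals
    N = np.zeros((4, 4))                      # constraint matrix (A^T N A = 1)
    N[0, 0] = 4.0 * z.mean()
    N[0, 1] = N[1, 0] = 2.0 * x.mean()
    N[0, 2] = N[2, 0] = 2.0 * y.mean()
    N[1, 1] = N[2, 2] = 1.0
    vals, vecs = eig(M, N)                    # generalized eigenproblem M A = eta N A
    vals, vecs = np.real(vals), np.real(vecs)
    vals[~np.isfinite(vals) | (vals < -1e-12)] = np.inf   # keep finite, non-negative eta
    a, b, c, d = vecs[:, np.argmin(vals)]
    xc, yc = -b / (2.0 * a), -c / (2.0 * a)
    r = np.sqrt(xc**2 + yc**2 - d / a)
    return xc, yc, r

def lm_circle_fit(pts, init):
    """Geometric LM refinement: minimize sum_i (r_i - r)^2 with
    r_i = sqrt((x_i - xc)^2 + (y_i - yc)^2), starting from the Taubin fit."""
    def residuals(p):
        xc, yc, r = p
        return np.hypot(pts[:, 0] - xc, pts[:, 1] - yc) - r
    return least_squares(residuals, x0=init, method="lm").x   # refined (xc, yc, r)

# Hypothetical usage on one extracted arc (an (N, 2) array of pixel coordinates):
# xc, yc, r = lm_circle_fit(arc, taubin_fit(arc))
```

The fitted circle translates directly into the parameters of Eq. (9): D = -2x_c, E = -2y_c, and F = x_c^2 + y_c^2 - r^2.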

Experiments were carried out on both synthetic and real image data to evaluate the performance of the proposed approach. We use Hough entropy to evaluate the quality of recovering distorted synthetic images. The Hough transform is a technique that can find lines in images;36 its basis is the transformation of a line into a point in Hough space, so a line is represented by a single point in the two-dimensional (2-D) Hough space of ρ×θ, in which the accumulator values of these points vary. In our case, a threshold is set empirically at 0.3 times the maximum of the Hough space to obtain all peaks that represent lines. A straight line in the distorted image is treated as a circular arc, and we only use the values of θ to measure straightness. We therefore transform the 2-D Hough space of ρ×θ into a one-dimensional (1-D) θ space by summing over the ρ values for each θ; the Hough entropy is then defined as

$$
H = -\sum_{b=1}^{\mathrm{Bins}} p(H_b)\,\log_2\!\left[p(H_b)\right], \tag{13}
$$

where Bins is the number of discrete θ bins (we set Bins = 180) and $p(H_b)$ is the probability of bin $b$.
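A minimal Python sketch of this measure (our own illustration, with scikit-image assumed for the Canny and Hough transforms) is:

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line

def hough_entropy(image_gray, bins=180, rel_threshold=0.3):
    """Hough entropy of an image, as in Eq. (13): threshold the (rho x theta)
    accumulator at rel_threshold * max, collapse it onto theta, and compute
    the entropy of the resulting 1-D probability distribution."""
    edges = canny(image_gray)
    theta = np.linspace(-np.pi / 2, np.pi / 2, bins, endpoint=False)
    hspace, _, _ = hough_line(edges, theta=theta)      # shape: (num_rho, bins)
    hspace = np.where(hspace >= rel_threshold * hspace.max(), hspace, 0)
    p = hspace.sum(axis=0).astype(float)               # collapse onto the theta axis
    p /= p.sum()                                        # probability over theta bins
    p = p[p > 0]                                        # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

For the source image used below, which contains only horizontal and vertical lines, the two occupied θ bins are equally likely, giving H = 1.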

Tests on Synthetic Images

An image of size 640×480 pixels, as shown in Fig. 1, was used as the source image (H = 1). Synthetic images are generated from the source image with given distortion parameters (x0, y0, λ). We performed three series of experiments with synthetic images.

Fig. 1: Source image and the corresponding 1-D Hough transforms.

Varying λ

In the first series, synthetic images are obtained with distortion parameters (320, 240, λ), varying λ at different levels (from extreme pincushion to barrel distortion). For a positive λ (pincushion distortion), the size of the synthetic images is larger than 640×480 pixels, and the distortion center differs from the given (320, 240). For a negative λ (barrel distortion), the size of the synthetic images is 640×480 pixels, and the distortion center is fixed at (320, 240). The synthetic images, corrected images, and the corresponding 1-D Hough transforms are shown in Fig. 2. For the extreme case of λ = 5.0×10⁻⁶, we only map the points for which r_d² ≤ 1/(4λ), resulting in a circular valid region around the image center.

Fig. 2: Correction of synthetic images (with different λ). (a) For positive λ. First column: distorted images at different levels of λ. Second column: corresponding 1-D Hough transforms of the first column. Third column: corrected images of the first column. Fourth column: corresponding 1-D Hough transforms of the third column. (b) For negative λ, with columns arranged as in (a).

As shown in Fig. 2, the proposed method works very well for all distortion parameters in the test interval and removes image radial distortion with high accuracy. Estimated results of the distortion parameters and the Hough entropy of the corrected images are given in Table 1. The initial estimation extracts only the three arcs with maximum distortion, whereas the LM-based estimation extracts the sixteen arcs with maximum distortion. Dis is the Euclidean distance between (x0, y0)_true and (x0, y0)_estimate. Rel is the relative error for λ, i.e., |(λ_estimate − λ_true)/λ_true|. Dis, λ_estimate, and Rel can be found in Table 1.

Table 1: Estimated results of the synthetic images from Fig. 2.

From Table 1, we can see that the proposed approach produces convincing distortion parameters that are very close to the true parameters used for generating the synthetic images. The method is very robust even in extreme cases. Table 1 also shows that the LM method provides better estimates than the three-arc method: although the LM method may slightly increase Dis, it dramatically reduces Rel. The relative estimation errors for λ in Table 1 show quite clearly that our method is extremely accurate at estimating λ, with Rel based on the LM method in the range of 10⁻³ to 10⁻⁵. Rel increases when λ is close to zero, which reflects the following factor: since the true value is extremely small, small deviations between the estimated and true parameter values give relatively large errors. Table 1 also shows that Dis based on the LM method is less than 8 pixels and, for |λ| ≤ 6.0×10⁻⁷, less than 2 pixels. The Hough entropy is always equal to 1 except for the extreme case of λ = 5.0×10⁻⁶; for this case, we only map the points for which r_d² ≤ 1/(4λ) [see Fig. 2(a)]. θ is always equal to 0 deg (or 90 deg) in the Hough space even for the extreme case of λ = 5.0×10⁻⁶, which shows that our method is robust. Real images may not contain many distorted straight lines. Fortunately, the results in Table 1 show quite clearly that the initial estimation, which extracts only three arcs, already achieves satisfactory accuracy.

Varying distortion center

In the second series, synthetic images are obtained with the distortion fixed at a moderate level of barrel distortion (λ = −1.0×10⁻⁶) while varying the distortion center. The synthetic images, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 3. Dis, λ_estimate, and Rel can be found in Table 2.

Fig. 3: Correction of synthetic images (with different distortion centers). First column: distorted images. Second column: corresponding 1-D Hough transforms of the first column. Third column: corrected images of the first column. Fourth column: corresponding 1-D Hough transforms of the third column.

Table 2: Estimated results of the synthetic images from Fig. 3.

As shown in Fig. 3, the proposed method works very well for all distortion parameters in the test interval and removes image radial distortion with high accuracy. From Table 2, we can see that our method gives good results for the distortion parameters (x0, y0, λ). The relative estimation errors for λ in Table 2 show quite clearly that our method is extremely accurate at estimating λ, with Rel based on the LM method in the range of 10⁻⁴ to 10⁻⁶. Dis based on the LM method is less than 3 pixels. The Hough entropy is always equal to 1, which shows that our method is robust.

Comparison to another technique

To gauge the accuracy of our method, we have compared it to the method developed by Alvarez et al.22,23 Alvarez et al. have deployed a demo web site37 for their method that allows users to submit an image for distortion removal after manually selecting distorted lines in it. For a fair comparison, the same three lines were used in both Alvarez’s method and our method. A synthetic image was generated from the source image with distortion parameters (320, 240, λ = −1.0×10⁻⁶). The synthetic image, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 4. Dis, (x0, y0)_estimate, and H can be found in Table 3. Compared to the source image (Fig. 1), the content of the corrected image generated by Alvarez’s method [Fig. 4(b)] is zoomed out, whereas the content of the corrected image generated by the proposed method [Fig. 4(c)] is unchanged. As shown in Fig. 4, the proposed method outperforms Alvarez’s method in terms of visual quality. From Table 3, we can see quantitatively that the proposed method dramatically reduces Dis and that its Hough entropy is equal to that of the source image. Moreover, compared to Alvarez’s method, which requires manual intervention to select distorted straight lines, the proposed method requires much less processing time.

Fig. 4: (a) Synthetic image and corresponding 1-D Hough transforms. (b) Corrected image of (a) generated by Alvarez’s method and corresponding 1-D Hough transforms. (c) Corrected image of (a) generated by the proposed method and corresponding 1-D Hough transforms.

Table 3: Results comparison of the synthetic image from Fig. 4.

Tests on Real Images

The original tested real images and corrected images are shown in Fig. 5; the original images (a)–(g) were obtained from the Image Processing On Line website.38 From Fig. 5, we can clearly observe the distortion (left in a pair) as well as the correction (right in a pair). These results demonstrate that the radial distortion has been successfully removed in the recovered images, showing the robustness and accuracy of the proposed approach in radial distortion correction.

Fig. 5: Correction of real images. Some lines that are straight in the world have been annotated with red straight lines in the corrected image, showing strong vanishing points. (a)–(d) and (g) are buildings, (e) a bedroom, (f) a solar power plant, and (h) the ceiling of a corridor.

For quantitative evaluation, we have compared the proposed method with Zhang’s method5 and Alvarez’s method. Zhang’s method has to use a calibration pattern (a checkerboard with black-and-white squares) to estimate the camera’s intrinsic parameters; therefore, its process takes much longer. For the comparison between Alvarez’s method and the proposed method, the same three distorted lines were used. The real image, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 6. As expected, the proposed method outperforms Zhang’s method and Alvarez’s method in terms of visual quality. Compared with Zhang’s method, which involves camera calibration, and Alvarez’s method, which requires manual intervention to select distorted straight lines, the proposed method is much faster in terms of processing time. The probability distribution in the 1-D θ Hough space in Fig. 6(d) is much more uniform at 0 deg and 180 deg than those in Figs. 6(b) and 6(c), which means that the proposed method gives a more satisfactory result in removing image radial distortion.

Fig. 6: (a) Real image and corresponding 1-D Hough transforms. (b) Corrected image of (a) generated by Zhang’s method and corresponding 1-D Hough transforms. (c) Corrected image of (a) generated by Alvarez’s method and corresponding 1-D Hough transforms. (d) Corrected image of (a) generated by the proposed method and corresponding 1-D Hough transforms.

Conclusions

In this paper, we proposed an approach for correcting image radial distortion caused by the lens. The method works on a single image and does not require a special calibration pattern. Experimental results have shown a significant achievement in correcting image radial distortion in both synthetic and real images. The key contributions of the study can be summarized in three aspects. (1) The proposed method is accurate and robust in estimating radial distortion. It is extremely useful in many applications, particularly those where human-made environments contain abundant lines. Although the proposed method requires at least three distorted lines residing in an image, it can cope with a situation in which fewer lines are found by adding more images taken by the same camera with different capturing angles. As long as there is a line involved in the scene, the proposed method is applicable. (2) The quantitative evaluation of the estimated radial distortion parameters has been achieved by the defined measure of Hough entropy, based on the probability distribution in the 1-D θ Hough space. (3) The “Taubin fit” technique has shown its positive effect in the initial guess of an arc’s parameters. It has significantly improved the convergence rate of the LM iterative nonlinear least squares method used to calculate an arc’s parameters.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 51575388).

References

1. Brito J. H. et al., “Radial distortion self-calibration,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR ’13), pp. 1368–1375 (2013).
2. Bax M. R. and Shahidi R., “Real-time lens distortion correction: speed, accuracy and efficiency,” Opt. Eng. 53(11), 113103 (2014).
3. Wang J. et al., “A new calibration model of camera lens distortion,” Pattern Recognit. 41(2), 607–615 (2008).
4. Wu F. et al., “Deep space exploration panoramic camera calibration technique based on circular markers,” Acta Opt. Sin. 33(11), 1115002 (2013).
5. Zhang Z., “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
6. Wang A., Qiu T., and Shao L., “A simple method of radial distortion correction with centre of distortion estimation,” J. Math. Imaging Vision 35(3), 165–172 (2009).
7. Devernay F. and Faugeras O., “Straight lines have to be straight,” Mach. Vision Appl. 13(1), 14–24 (2001).
8. Kukelova Z. and Pajdla T., “A minimal solution to radial distortion autocalibration,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2410–2422 (2011).
9. Fitzgibbon A. W., “Simultaneous linear estimation of multiple view geometry and lens distortion,” in Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’01), pp. 125–132 (2001).
10. Claus D. and Fitzgibbon A. W., “A rational function lens distortion model for general cameras,” in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’05), pp. 213–219 (2005).
11. Stein G. P., “Lens distortion calibration using point correspondences,” in Proc. 1997 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 602–608 (1997).
12. Micusik B. and Pajdla T., “Structure from motion with wide circular field of view cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 28(7), 1135–1149 (2006).
13. Hartley R. and Kang S. B., “Parameter-free radial distortion correction with center of distortion estimation,” IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1309–1321 (2007).
14. Ramalingam S., Sturm P., and Lodha S. K., “Generic self-calibration of central cameras,” Comput. Vision Image Understanding 114(2), 210–219 (2010).
15. Precott B. and McLean G. F., “Line-based correction of radial lens distortion,” Graphical Models Image Process. 59(1), 39–47 (1997).
16. Ahmed M. and Farag A., “Nonmetric calibration of camera lens distortion: differential methods and robust estimation,” IEEE Trans. Image Process. 14(8), 1215–1230 (2005).
17. Jean-Yves Bouguet, “Camera calibration toolbox for MATLAB,” 14 October 2015, http://www.vision.caltech.edu/bouguetj/calib_doc/ (24 March 2016).
18. Li D., Wen G., and Qiu S., “Cross-ratio-based line scan camera calibration using a planar pattern,” Opt. Eng. 55(1), 014104 (2016).
19. Alanis F. C. M. and Rodriguez J. A. M., “Self-calibration of vision parameters via genetic algorithms with simulated binary crossover and laser line projection,” Opt. Eng. 54(5), 053115 (2015).
20. Barreto J. P., “A unifying geometric representation for central projection systems,” Comput. Vision Image Understanding 103(3), 208–217 (2006).
21. Bukhari F. and Dailey M. N., “Automatic radial distortion estimation from a single image,” J. Math. Imaging Vision 45(1), 31–45 (2013).
22. Alvarez L. et al., “An algebraic approach to lens distortion by line rectification,” J. Math. Imaging Vision 35(1), 36–50 (2009).
23. Alvarez L., Gomez L., and Sendra J. R., “Algebraic lens distortion model estimation,” Image Process. On Line 1, 1–10 (2010).
24. Aleman-Flores M. et al., “Automatic lens distortion correction using one-parameter division models,” Image Process. On Line 4, 327–343 (2014).
25. Aleman-Flores M. et al., “Line detection in images showing significant lens distortion and application to distortion correction,” Pattern Recognit. Lett. 36, 261–271 (2014).
26. Santana-Cedres D. et al., “Invertibility and estimation of two-parameter polynomial and division lens distortion models,” SIAM J. Imaging Sci. 8(3), 1574–1606 (2015).
27. Herrera C., Kannala J., and Heikkila J., “Joint depth and color camera calibration with distortion correction,” IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 2058–2064 (2012).
28. Smisek J., Jancosek M., and Pajdla T., “3D with Kinect,” in IEEE Int. Conf. on Computer Vision Workshops (ICCV Workshops ’11), pp. 1154–1160 (2011).
29. Brown D. C., “Decentering distortion of lenses,” Photometric Eng. 32(3), 444–462 (1966).
30. Brown D. C., “Close-range camera calibration,” Photogramm. Eng. 37(8), 855–866 (1971).
31. Fryer J. G. and Brown D. C., “Lens distortion for close-range photogrammetry,” Photogramm. Eng. Remote Sens. 52(1), 51–58 (1986).
32. Tsai R. Y., “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Rob. Autom. 3(4), 323–344 (1987).
33. Clarke T. A. and Fryer J. G., “The development of camera calibration methods and models,” Photogrammetric Rec. 16(91), 51–66 (1998).
34. Taubin G., “Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 13(11), 1115–1138 (1991).
35. Chernov N., Circular and Linear Regression: Fitting Circles and Lines by Least Squares, Chapman & Hall/CRC, Boca Raton, Florida (2010).
36. Duda R. O. and Hart P. E., “Use of the Hough transformation to detect lines and curves in pictures,” Commun. ACM 15(1), 11–15 (1972).
37. Alvarez L., Gómez L., and Sendra J. R., “Algebraic lens distortion model estimation,” 29 September 2015, http://demo.ipol.im/demo/ags_algebraic_lens_distortion_estimation/ (25 March 2016).
38. Image Processing On Line, http://www.ipol.im/pub/art/ (15 April 2016).

Fanlu Wu received his BS degree from the School of Opto-Electronic Engineering, Changchun University of Science and Technology, China, in 2011 and his MS degree from the University of the Chinese Academy of Sciences, China, in 2014. He is currently pursuing his PhD in the School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, China. His research interests include camera calibration, image mosaics, and image super-resolution reconstruction.

Hong Wei received her PhD from Birmingham University in 1996. She worked as a postdoctoral research assistant on a Hewlett Packard sponsored project, high-resolution CMOS camera systems. She also worked as a research fellow on an EPSRC-funded Faraday project, model from movies. She joined the University of Reading in 2000. Her current research interest includes intelligent computer vision and its applications in remotely sensed images and face recognition (biometric).

Xiangjun Wang received his BS, MS, and PhD degrees in precision measurement technology and instruments from Tianjin University, China, in 1980, 1985, and 1990, respectively. Currently, he is a professor and director of the precision measurement system research group at Tianjin University. His research interests include photoelectric sensors and testing, computer vision, image analysis, MOEMS, and MEMS.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
