Correction of image radial distortion based on division model
Fanlu Wu, Hong Wei, Xiangjun Wang
Published 24 January 2017 (Open Access)
Abstract
This paper presents an approach for estimating and then removing image radial distortion. It works on a single image and does not require a special calibration pattern. The approach is extremely useful in many applications, particularly those where human-made environments contain abundant lines. A division model is applied, in which a straight line in the distorted image is treated as a circular arc. The Levenberg–Marquardt (LM) iterative nonlinear least squares method is adopted to calculate the arc's parameters, and a "Taubin fit" supplies the initial guess of those parameters as the input to the LM iteration. This dramatically improves the convergence rate of the LM process in obtaining the parameters required for correcting image radial distortion. Hough entropy, as a measure, provides a quantitative evaluation of the estimated distortion based on the probability distribution in one-dimensional θ Hough space. Experimental results on both synthetic and real images demonstrate that the proposed method can robustly estimate and then remove image radial distortion with high accuracy.

1. Introduction

Lens distortion is usually classified into three types: radial distortion, decentering distortion, and thin prism distortion.1–4 In practice, for most lenses, the radial distortion component is predominant.5,6 It may appear as barrel distortion or pincushion distortion. Radial distortion bends straight lines into circular arcs,6 violating the main invariance preserved in the pinhole camera model, namely that straight lines in the world map to straight lines in the image plane.7 Radial distortion is the most significant type of distortion in today's cameras.5,8

Methods for obtaining the parameters of the radial distortion function used to correct distorted images can be divided roughly into two major categories: multiple views methods9–14 and single view methods.6,7,15,16 For multiple views methods, the most widely used offline calibration software is the toolbox provided by Jean-Yves Bouguet.17 It can perform calibration once images are imported, with a lens distortion model that includes seven parameters, which is sufficient for most kinds of cameras. Although the multiple views approach does not require special conditions in the scene, for example, straight lines, and therefore has a wide range of applications, its disadvantage is that it requires multiple images, which are not available in many cases.6 In the past decades, many methods for radial distortion estimation have been proposed.18–26 Bukhari and Dailey21 proposed a method for automatic radial distortion estimation based on the plumb-line approach. They compared statistical analyses of how different circle fitting methods contribute to accurate distortion parameter estimation, and they provided qualitative results on a wide variety of challenging real images. Alvarez et al.22,23 proposed an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image; the lens distortion parameters are obtained by minimizing a polynomial of total degree four in several variables. Lens distortion can also be estimated with the one-parameter division model, which allows the problem to be cast in a Hough transform scheme by adding a distortion parameter, so as to better extract straight lines from the image.24–26 RGB-D cameras, such as the Microsoft Kinect, have become widely used in perceptual computing applications. To utilize the full potential of RGB-D devices, calibration must be performed to determine the intrinsic and extrinsic parameters of the color and depth sensors and to reduce lens and depth distortion. Early work on calibration of RGB-D devices includes Herrera's method27 and Smisek's method.28

In this study, our method, based on the use of distorted straight lines, falls in the second category. The method works on a single image in which at least three distorted straight lines exist and does not require a calibration pattern. The Brown model29–33 is most commonly used to describe lens distortion, and it works best for lenses with small distortions; when the distortion becomes large, it may not be satisfactory. We instead use Fitzgibbon's division model9 of radial distortion with a single parameter. The division model is capable of expressing large distortion at a much lower order. Hartley and Kang argued that the usual assumption that the distortion center is at the image center is not safe.13 Our method also computes the center of radial distortion, which is important in obtaining optimal results.

The rest of this paper is structured as follows. Section 2 describes the distortion model and the estimation of distortion parameters. In Sec. 3, a detailed quantitative study of the performance on both synthetic and real images is presented. Finally, conclusions are drawn in Sec. 4.

2. Methodology

2.1. Distortion Models

The Brown model that is most commonly used to describe lens distortion can be written as

Eq. (1)

$$\begin{cases} x_u = (x_d - x_0)\,(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots) + (1 + p_3 r_d^2 + \cdots)\,\{p_1[r_d^2 + 2(x_d - x_0)^2] + 2p_2(x_d - x_0)(y_d - y_0)\},\\ y_u = (y_d - y_0)\,(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots) + (1 + p_3 r_d^2 + \cdots)\,\{p_2[r_d^2 + 2(y_d - y_0)^2] + 2p_1(x_d - x_0)(y_d - y_0)\}, \end{cases}$$
where $(x_u, y_u)$ and $(x_d, y_d)$ are the corresponding coordinates of an undistorted point and a distorted point in an image, respectively, and $r_d = \sqrt{(x_d - x_0)^2 + (y_d - y_0)^2}$ is the Euclidean distance of the distorted point from the distortion center $(x_0, y_0)$.

According to Zhang,5 the radial distortion is predominant. The most commonly used radial distortion model can be written as

Eq. (2)

$$\begin{cases} x_u = x_d\,(1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots),\\ y_u = y_d\,(1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots), \end{cases}$$
supposing that the distortion center $(x_0, y_0)$ is the center of the image. This model works best for lenses with small distortions. However, when the distortion becomes large, it may not be satisfactory, and many other factors have to be taken into account in practice.6

Fitzgibbon9 proposed the division model as

Eq. (3)

$$\begin{cases} x_u = \dfrac{x_d}{1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots},\\[6pt] y_u = \dfrac{y_d}{1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots}. \end{cases}$$

The most remarkable advantage of the division model over the Brown model is that it is able to express a large distortion at a much lower order. In particular, for many cameras, a single parameter would suffice.9,10 In our study, we use the single parameter division model

Eq. (4)

$$\begin{cases} x_u = \dfrac{x_d}{1 + \lambda r_d^2},\\[6pt] y_u = \dfrac{y_d}{1 + \lambda r_d^2}. \end{cases}$$
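As an illustration of Eq. (4), the following sketch corrects a distorted image once $(x_0, y_0, \lambda)$ are known. It builds the corrected image by inverse mapping: for each undistorted pixel, the distorted source radius follows from inverting $r_u = r_d/(1 + \lambda r_d^2)$, giving $r_d = \left(1 - \sqrt{1 - 4\lambda r_u^2}\right)/(2\lambda r_u)$. This is a minimal sketch assuming NumPy/OpenCV and $\lambda \neq 0$; the function name and interpolation choice are our own, not part of the paper.

```python
import numpy as np
import cv2

def undistort_division(img, x0, y0, lam):
    """Correct radial distortion under the one-parameter division model.
    For each output (undistorted) pixel we locate its source in the
    distorted image by inverting r_u = r_d / (1 + lam * r_d**2)."""
    h, w = img.shape[:2]
    xu, yu = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    ru = np.hypot(xu - x0, yu - y0)
    ru = np.where(ru == 0, 1e-9, ru)                  # avoid 0/0 at the center
    disc = np.maximum(1.0 - 4.0 * lam * ru**2, 0.0)   # <0 only outside the valid region
    rd = (1.0 - np.sqrt(disc)) / (2.0 * lam * ru)     # root that tends to ru as lam -> 0
    scale = (rd / ru).astype(np.float32)
    map_x = x0 + (xu - x0) * scale
    map_y = y0 + (yu - y0) * scale
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```

For $\lambda > 0$, the discriminant caps the recoverable region at $r_u^2 \le 1/(4\lambda)$, which corresponds to the circular valid region discussed for the extreme pincushion cases in Sec. 3.1.1.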

2.2. Distortion of a Straight Line

Under the single parameter division model, the distorted image of a straight line can be treated as a circular arc.6 The equation of a straight line is expressed as

Eq. (5)

$$A x_u + B y_u + C = 0.$$

From Eq. (4), we have

Eq. (6)

$$\frac{A x_d}{1 + \lambda r_d^2} + \frac{B y_d}{1 + \lambda r_d^2} + C = 0,$$
then, we obtain a circle equation

Eq. (7)

$$x_d^2 + y_d^2 + \frac{A}{C\lambda} x_d + \frac{B}{C\lambda} y_d + \frac{1}{\lambda} = 0.$$

If (x0,y0) is the center of radial distortion, we have

Eq. (8)

$$(x_d - x_0)^2 + (y_d - y_0)^2 + \frac{A}{C\lambda}(x_d - x_0) + \frac{B}{C\lambda}(y_d - y_0) + \frac{1}{\lambda} = 0.$$

Let $D = \frac{A}{C\lambda} - 2x_0$, $E = \frac{B}{C\lambda} - 2y_0$, and $F = x_0^2 + y_0^2 - \frac{A}{C\lambda}x_0 - \frac{B}{C\lambda}y_0 + \frac{1}{\lambda}$; then, we have

Eq. (9)

$$x_d^2 + y_d^2 + D x_d + E y_d + F = 0,$$
and

Eq. (10)

$$x_0^2 + y_0^2 + D x_0 + E y_0 + F - \frac{1}{\lambda} = 0.$$
Equation (9) indicates that a group of parameters $(D, E, F)$ can be determined by fitting a circle to an arc extracted from the image. The circular arc in the image is the projection of a straight line in the world. By extracting at least three arcs and determining three groups of parameters $(D, E, F)$, the distortion center can be estimated by solving the linear equations

Eq. (11)

$$\begin{cases} (D_1 - D_2)x_0 + (E_1 - E_2)y_0 + (F_1 - F_2) = 0,\\ (D_1 - D_3)x_0 + (E_1 - E_3)y_0 + (F_1 - F_3) = 0, \end{cases}$$
and an estimate of λ can be obtained from

Eq. (12)

$$\lambda = \frac{1}{x_0^2 + y_0^2 + D x_0 + E y_0 + F}.$$
When more than three arcs are extracted from an image and their parameters $(D, E, F)$ determined, the parameters $(x_0, y_0, \lambda)$ can be obtained based on the Levenberg–Marquardt (LM) scheme. Although the method requires at least three distorted lines residing in an image, it can cope with situations in which fewer lines are found by adding more images taken by the same camera at different capturing angles. As long as a line is present in the scene, the method is applicable.
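The linear part of this estimation is compact enough to sketch directly. The snippet below solves Eq. (11) in least-squares form for the distortion center from n ≥ 3 fitted circles and averages the per-circle λ of Eq. (12); the joint LM refinement of $(x_0, y_0, \lambda)$ described above is omitted. Function and variable names are our own illustration, not the paper's code.

```python
import numpy as np

def estimate_distortion(circles):
    """Estimate (x0, y0, lambda) from n >= 3 circles fitted to distorted
    lines, following Eqs. (9)-(12). `circles` holds rows of (D, E, F)."""
    circles = np.asarray(circles, dtype=float)
    D, E, F = circles[:, 0], circles[:, 1], circles[:, 2]
    # Eq. (11): pairing circle 1 with each other circle i gives
    # (D1 - Di) x0 + (E1 - Ei) y0 = Fi - F1, linear in (x0, y0).
    A = np.column_stack([D[0] - D[1:], E[0] - E[1:]])
    b = F[1:] - F[0]
    (x0, y0), *_ = np.linalg.lstsq(A, b, rcond=None)
    # Eq. (12): each circle yields a lambda estimate; average them.
    lam = np.mean(1.0 / (x0**2 + y0**2 + D * x0 + E * y0 + F))
    return x0, y0, lam
```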

2.3. Method of Circle Fitting

To find arcs, we first extract edges using the Canny operator. Then we track all the edge points associated with a starting point. From a given starting point, we track in one direction, storing the coordinates of the edge points in an array and label the pixels in the edge image. When no more connected points are found, we return to the start point and track in the opposite direction. Finally, a check for the overall number of edge points found is made and the edge is ignored if it is too short.
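A minimal sketch of this tracking step is given below, assuming OpenCV's Canny detector and NumPy; the Canny thresholds and minimum chain length are illustrative assumptions rather than values from the paper.

```python
import numpy as np
import cv2

def track_edges(gray, min_length=50):
    """Extract edge chains: detect Canny edges, then from each unvisited
    seed follow 8-connected edge pixels in one direction, return to the
    seed, and follow the opposite direction; short chains are ignored."""
    edges = cv2.Canny(gray, 50, 150) > 0
    h, w = edges.shape
    visited = np.zeros_like(edges)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def follow(y, x):
        # Walk along edge pixels, always stepping to the first
        # unvisited 8-neighbor, until the chain dead-ends.
        chain = []
        while True:
            visited[y, x] = True
            chain.append((x, y))
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and edges[ny, nx] and not visited[ny, nx]):
                    y, x = ny, nx
                    break
            else:
                return chain

    chains = []
    for y, x in zip(*np.nonzero(edges)):
        if not visited[y, x]:
            fwd = follow(y, x)        # track one direction
            bwd = follow(y, x)        # re-seed and track the other direction
            chain = bwd[:0:-1] + fwd  # reverse back half, drop duplicate seed
            if len(chain) >= min_length:
                chains.append(np.array(chain, dtype=float))
    return chains
```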

After the initial arc identification process, an initial guess of the parameters is assigned to each resulting arc, followed by the LM iterative nonlinear least squares method to produce the optimized parameters. Taubin fit is used for the initial guess.34 It specifies a circle with four parameters, $a(x^2 + y^2) + bx + cy + d = 0$ with $a \neq 0$; the center of the circle is $\left(-\frac{b}{2a}, -\frac{c}{2a}\right)$ and the radius is $r = \sqrt{\left(\frac{b}{2a}\right)^2 + \left(\frac{c}{2a}\right)^2 - \frac{d}{a}}$. It minimizes the objective function $\Omega(a, b, c, d) = \sum_{i=1}^{N}\left(a x_i^2 + a y_i^2 + b x_i + c y_i + d\right)^2$, subject to the constraint $4a^2\bar{z} + 4ab\bar{x} + 4ac\bar{y} + b^2 + c^2 = 1$, where $\bar{x}$ is the mean of the points' x coordinates, $\bar{y}$ is the mean of the points' y coordinates, and $\bar{z} = \frac{1}{N}\sum_{i=1}^{N}(x_i^2 + y_i^2)$. The objective function for the LM fit35 is $\Omega(x_c, y_c, r) = \sum_{i=1}^{N}(r_i - r)^2$, where $r_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$.
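The two fits can be sketched as follows, using Chernov's SVD formulation of the Taubin fit35 and SciPy's LM solver for the geometric refinement; this is an illustrative implementation under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def taubin_fit(x, y):
    """Taubin algebraic circle fit; returns (xc, yc, r) as the LM seed."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    u, v = x - xm, y - ym                  # center data for conditioning
    z = u * u + v * v
    zm = z.mean()
    z0 = (z - zm) / (2.0 * np.sqrt(zm))
    # The right singular vector of the smallest singular value solves the
    # constrained minimization of Omega(a, b, c, d).
    _, _, vt = np.linalg.svd(np.column_stack([z0, u, v]))
    a, b, c = vt[-1]
    a /= 2.0 * np.sqrt(zm)
    d = -zm * a
    xc, yc = -b / (2.0 * a) + xm, -c / (2.0 * a) + ym
    r = np.sqrt(b * b + c * c - 4.0 * a * d) / (2.0 * abs(a))
    return xc, yc, r

def lm_circle_fit(x, y):
    """Geometric fit: minimize sum_i (r_i - r)^2 by Levenberg-Marquardt,
    seeded with the Taubin estimate."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    def residuals(p):
        xc, yc, r = p
        return np.hypot(x - xc, y - yc) - r
    return least_squares(residuals, taubin_fit(x, y), method='lm').x
```

The Taubin result is already close to the geometric minimum, which is what makes the subsequent LM iteration converge in few steps.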

3. Results and Discussion

Experiments were carried out on both synthetic and real image data to evaluate the performance of the proposed approach. We use Hough entropy to evaluate the quality of recovering distorted synthetic images. The Hough transform is a technique that can find lines in images.36 The basis of the technique is the transform of a line to a point in a Hough space: a line is represented by a single point in the two-dimensional (2-D) ρ×θ Hough space. In our case, a threshold set empirically as 0.3·max(Hough space) is used to obtain all peaks that represent lines. A straight line in the distorted image is treated as a circular arc, and we only use the values of θ to measure straightness. We therefore collapse the 2-D ρ×θ Hough space into a one-dimensional (1-D) θ space by summing over ρ for each θ; the Hough entropy is then defined as

Eq. (13)

$$H = -\sum_{b=1}^{\text{Bins}} p(H_b)\log_2\left[p(H_b)\right],$$
where Bins is the number of discrete θ bins (we set Bins = 180) and $p(H_b)$ is the probability of bin b in the 1-D θ space.
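A sketch of this measure, assuming OpenCV and NumPy, is given below. The accumulator is built explicitly so that the ρ sums are available; the Canny thresholds and the function name are our own assumptions.

```python
import numpy as np
import cv2

def hough_entropy(gray, bins=180, peak_frac=0.3):
    """Hough entropy of an image: build a rho x theta accumulator from
    Canny edges, keep peaks above peak_frac * max, collapse to a 1-D
    theta distribution, and return its Shannon entropy in bits."""
    edges = cv2.Canny(gray, 50, 150)
    ys, xs = np.nonzero(edges)
    diag = int(np.ceil(np.hypot(*gray.shape)))
    acc = np.zeros((2 * diag + 1, bins), dtype=np.int64)
    thetas = np.deg2rad(np.arange(bins))
    for t, (c, s) in enumerate(zip(np.cos(thetas), np.sin(thetas))):
        rho = np.round(xs * c + ys * s).astype(int) + diag  # shift rho >= 0
        np.add.at(acc[:, t], rho, 1)
    acc[acc < peak_frac * acc.max()] = 0      # keep only line peaks
    h1d = acc.sum(axis=0).astype(float)       # 1-D theta space
    if h1d.sum() == 0:
        return 0.0
    p = h1d / h1d.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

For an image whose line directions concentrate equally in two θ bins (e.g., purely horizontal and vertical lines), this returns H = 1 bit, matching the source image value reported below.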

3.1. Tests on Synthetic Images

An image of size 640×480 pixels, as shown in Fig. 1, was used as the source image (H = 1). Synthetic images are generated from the source image with given distortion parameters $(x_0, y_0, \lambda)$. We performed three series of experiments with synthetic images.
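Generating such a synthetic image amounts to applying Eq. (4) in reverse: each pixel of the distorted output samples the source at its undistorted location. A sketch under the same NumPy/OpenCV assumptions as above (for simplicity it keeps the source canvas size, whereas the paper enlarges the canvas for positive λ):

```python
import numpy as np
import cv2

def distort_division(src, x0, y0, lam):
    """Synthesize radial distortion: for each distorted pixel (xd, yd),
    sample the source at x_u = x0 + (xd - x0) / (1 + lam * r_d^2)."""
    h, w = src.shape[:2]
    xd, yd = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    rd2 = (xd - x0) ** 2 + (yd - y0) ** 2
    scale = 1.0 / (1.0 + lam * rd2)   # for extreme negative lam the
                                      # denominator can vanish near corners
    map_x = x0 + (xd - x0) * scale
    map_y = y0 + (yd - y0) * scale
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)
```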

Fig. 1. Source image and the corresponding 1-D Hough transforms.

3.1.1. Varying λ

In the first series, synthetic images are obtained with distortion parameters (320, 240, λ), varying λ over different levels (from extreme pincushion to extreme barrel distortion). For a positive λ (pincushion distortion), the size of the synthetic images is larger than 640×480 pixels, and the distortion center in the enlarged image therefore differs from the given $(x_0, y_0)$. For a negative λ (barrel distortion), the size of the synthetic images is 640×480 pixels, and the distortion center is fixed at (320, 240). The synthetic images, corrected images, and the corresponding 1-D Hough transforms are shown in Fig. 2. For the extreme cases of λ ≥ 5.0×10⁻⁶, we only map the points for which $r_d^2 \le 1/(4\lambda)$, resulting in a circular valid region around the image center.

Fig. 2. Correction of synthetic images (with different λ). (a) For positive λ. First column: distorted images at different levels of λ; second column: corresponding 1-D Hough transforms of the first column; third column: corrected images of the first column; fourth column: corresponding 1-D Hough transforms of the third column. (b) For negative λ, with columns organized as in (a).

As shown in Fig. 2, the proposed method works very well for all distortion parameters in the test interval and removes image radial distortion with high accuracy. The estimated distortion parameters and the Hough entropy of the corrected images are shown in Table 1. The initial estimation extracts only the three arcs with maximum distortion; the estimation based on the LM method extracts the sixteen arcs with maximum distortion. Dis is the Euclidean distance between $(x_0, y_0)_{\text{true}}$ and $(x_0, y_0)_{\text{estimate}}$; Rel is the relative error of λ, i.e., $|(\lambda_{\text{estimate}} - \lambda_{\text{true}})/\lambda_{\text{true}}|$. Dis, $\lambda_{\text{estimate}}$, and Rel are listed in Table 1.

Table 1. Estimated results of the synthetic images from Fig. 2.

| λ_true | λ_estimate (initial) | Rel (initial) | Dis (initial) | λ_estimate (LM) | Rel (LM) | Dis (LM) | H |
|---|---|---|---|---|---|---|---|
| 1.0×10⁻⁵ | 0.999777183×10⁻⁵ | 2.2817×10⁻⁴ | 0.6704 | 0.99993718×10⁻⁵ | 6.282×10⁻⁵ | 0.7208 | 0.9911 |
| 5.0×10⁻⁶ | 5.0006619×10⁻⁶ | 1.3237×10⁻⁴ | 0.7185 | 5.0002084×10⁻⁶ | 4.169×10⁻⁵ | 0.7043 | 0.9710 |
| 1.0×10⁻⁶ | 1.0007869×10⁻⁶ | 7.8688×10⁻⁴ | 0.6429 | 0.9996487×10⁻⁶ | 3.5131×10⁻⁴ | 1.1189 | 1 |
| 8.0×10⁻⁷ | 7.978815×10⁻⁷ | 2.64816×10⁻³ | 0.9011 | 8.002692×10⁻⁷ | 3.3644×10⁻⁴ | 0.9705 | 1 |
| 6.0×10⁻⁷ | 6.007668×10⁻⁷ | 1.27806×10⁻³ | 1.4614 | 6.002536×10⁻⁷ | 4.2261×10⁻⁴ | 0.6508 | 1 |
| 4.0×10⁻⁷ | 4.025900×10⁻⁷ | 6.47499×10⁻³ | 4.2259 | 3.999954×10⁻⁷ | 1.146×10⁻⁵ | 3.0141 | 1 |
| 2.0×10⁻⁷ | 2.068505×10⁻⁷ | 3.425237×10⁻² | 3.9462 | 1.995506×10⁻⁷ | 2.24694×10⁻³ | 6.4786 | 1 |
| −2.0×10⁻⁷ | −2.056167×10⁻⁷ | 2.808350×10⁻² | 11.7907 | −2.011946×10⁻⁷ | 5.97300×10⁻³ | 7.3833 | 1 |
| −4.0×10⁻⁷ | −3.951756×10⁻⁷ | 1.206111×10⁻² | 6.5535 | −3.966594×10⁻⁷ | 8.35147×10⁻³ | 1.9250 | 1 |
| −6.0×10⁻⁷ | −6.062686×10⁻⁷ | 1.044762×10⁻² | 3.8644 | −6.025610×10⁻⁷ | 4.26833×10⁻³ | 1.6750 | 1 |
| −8.0×10⁻⁷ | −7.919760×10⁻⁷ | 1.003003×10⁻² | 3.4495 | −8.008144×10⁻⁷ | 1.01804×10⁻³ | 1.1657 | 1 |
| −1.0×10⁻⁶ | −1.0014953×10⁻⁶ | 1.49534×10⁻³ | 0.8899 | −0.9995671×10⁻⁶ | 4.3291×10⁻⁴ | 0.7946 | 1 |
| −5.0×10⁻⁶ | −4.9972071×10⁻⁶ | 5.5859×10⁻⁴ | 1.0859 | −5.0008469×10⁻⁶ | 1.6937×10⁻⁴ | 0.9439 | 1 |
| −1.0×10⁻⁵ | −0.99546012×10⁻⁵ | 4.53988×10⁻³ | 0.3783 | −1.00022696×10⁻⁵ | 2.2696×10⁻⁴ | 0.5654 | 1 |

From Table 1, we can see that the proposed approach produces convincing distortion parameters that are very close to the true parameters used for generating the synthetic images, and the method is robust even in extreme cases. Table 1 also shows that the LM method provides better estimates than the three-arc method: although the LM method may slightly increase Dis, it dramatically reduces Rel. The relative estimation errors for λ in Table 1 show quite clearly that our method is extremely accurate at estimating λ, with Rel based on the LM method in the range of 10⁻³ to 10⁻⁵. Rel increases as λ approaches zero, which reflects the following factor: since the true value is extremely small, small deviations between the estimated and true parameter values give relatively large relative errors. Table 1 also shows that Dis based on the LM method is less than 8 pixels; furthermore, in the cases of |λ| ≥ 6.0×10⁻⁷, Dis is less than 2 pixels. The Hough entropy is always equal to 1 except for the extreme cases of λ ≥ 5.0×10⁻⁶, where we only map the points for which $r_d^2 \le 1/(4\lambda)$ [see Fig. 2(a)]. Even in these extreme cases, θ is always equal to 0 deg (or 90 deg) in the Hough space, which shows that our method is robust. Real images may not contain many distorted straight lines; fortunately, the results in Table 1 show quite clearly that the initial estimation, extracting only three arcs, already has satisfactory accuracy.

3.1.2. Varying distortion center

In the second series, synthetic images are obtained with the distortion fixed at a moderate level of barrel distortion (λ = −1.0×10⁻⁶) while varying the distortion center. The synthetic images, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 3. Dis, $\lambda_{\text{estimate}}$, and Rel can be found in Table 2.

Fig. 3. Correction of synthetic images (with different distortion centers). First column: distorted images; second column: corresponding 1-D Hough transforms of the first column; third column: corrected images of the first column; fourth column: corresponding 1-D Hough transforms of the third column.

Table 2. Estimated results of the synthetic images from Fig. 3.

| (x₀, y₀)_true | λ_estimate (initial) | Rel (initial) | (x₀, y₀)_estimate (initial) | Dis (initial) | λ_estimate (LM) | Rel (LM) | (x₀, y₀)_estimate (LM) | Dis (LM) | H |
|---|---|---|---|---|---|---|---|---|---|
| (300, 220) | −0.9993661×10⁻⁶ | 6.3387×10⁻⁴ | (302.0406, 220.5043) | 2.1019 | −0.9997905×10⁻⁶ | 2.0946×10⁻⁴ | (301.1228, 219.5050) | 1.2271 | 1 |
| (300, 260) | −0.9997047×10⁻⁶ | 2.9529×10⁻⁴ | (301.7350, 259.7820) | 1.7486 | −1.0000697×10⁻⁶ | 6.974×10⁻⁵ | (300.9076, 259.0131) | 1.3408 | 1 |
| (340, 220) | −1.0040027×10⁻⁶ | 4.00267×10⁻³ | (337.1097, 219.6016) | 2.9176 | −1.0002655×10⁻⁶ | 2.6555×10⁻⁴ | (338.4051, 219.1870) | 1.7902 | 1 |
| (340, 260) | −1.0036289×10⁻⁶ | 3.62891×10⁻³ | (337.0461, 258.7703) | 3.1997 | −0.9997154×10⁻⁶ | 2.8457×10⁻⁴ | (337.7498, 259.1804) | 2.3948 | 1 |
| (240, 160) | −1.0012700×10⁻⁶ | 1.27000×10⁻³ | (239.0277, 159.8959) | 0.9778 | −0.9996601×10⁻⁶ | 3.3993×10⁻⁴ | (237.9878, 158.7605) | 2.3633 | 1 |
| (240, 320) | −0.9969717×10⁻⁶ | 3.02829×10⁻³ | (239.9595, 319.4118) | 0.5896 | −1.0000098×10⁻⁶ | 9.84×10⁻⁶ | (238.2006, 320.1384) | 1.8048 | 1 |
| (400, 160) | −1.0028122×10⁻⁶ | 2.81218×10⁻³ | (399.3107, 159.8861) | 0.6987 | −0.9998814×10⁻⁶ | 1.1862×10⁻⁴ | (398.1673, 159.2641) | 1.9749 | 1 |
| (400, 320) | −1.0023331×10⁻⁶ | 2.33315×10⁻³ | (398.8879, 317.7069) | 2.5486 | −1.0000925×10⁻⁶ | 9.248×10⁻⁵ | (398.1597, 319.5542) | 1.8935 | 1 |

As shown in Fig. 3, the proposed method works very well for all distortion parameters in the test interval and removes image radial distortion with high accuracy. From Table 2, we can see that our method gives good estimates of the distortion parameters $(x_0, y_0, \lambda)$. The relative estimation errors for λ in Table 2 show quite clearly that our method is extremely accurate at estimating λ, with Rel based on the LM method in the range of 10⁻⁴ to 10⁻⁶. The estimated Dis based on the LM method is less than 3 pixels. The Hough entropy is always equal to 1, which shows that our method is robust.

3.1.3. Comparison to another technique

To gauge the accuracy of our method, we compared it to the method developed by Alvarez et al.22,23 Alvarez et al. have deployed a demo web site37 for their method that allows users to submit an image for distortion removal after manually selecting distorted lines in it. For a fair comparison, the same three lines were used in both Alvarez's method and our method. A synthetic image was generated from the source image with given distortion parameters (320, 240, λ = 1.0×10⁻⁶). The synthetic image, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 4. Dis, $(x_0, y_0)_{\text{estimate}}$, and H can be found in Table 3. Compared to the source image (Fig. 1), the content of the corrected image generated by Alvarez's method [Fig. 4(b)] is zoomed out, whereas the content of the corrected image generated by the proposed method [Fig. 4(c)] is unchanged. As shown in Fig. 4, the proposed method outperforms Alvarez's method in terms of visual quality. From Table 3, we can see quantitatively that the proposed method dramatically reduces Dis, and its Hough entropy is equal to that of the source image. Moreover, compared to Alvarez's method, which requires manual intervention to select distorted straight lines, the proposed method requires much less processing time.

Fig. 4. (a) Synthetic image and corresponding 1-D Hough transforms. (b) Corrected image of (a) generated by Alvarez's method and corresponding 1-D Hough transforms. (c) Corrected image of (a) generated by the proposed method and corresponding 1-D Hough transforms.

Table 3. Results comparison of the synthetic image from Fig. 4.

| Algorithm | (x₀, y₀)_estimate | Dis | H |
|---|---|---|---|
| Alvarez et al. | (330.0076, 235.0030) | 11.1858 | 1.7206 |
| Proposed | (319.6754, 239.1714) | 0.8899 | 1 |

3.2. Tests on Real Images

The original real test images and the corrected images are shown in Fig. 5; the original images in (a)–(g) were obtained from the Image Processing On Line website.38 In Fig. 5, the distortion (left in each pair) and the correction (right in each pair) can be clearly observed. These results demonstrate that the radial distortion has been successfully removed in the recovered images, showing the robustness and accuracy of the proposed approach in radial distortion correction.

Fig. 5. Correction of real images. Some lines that are straight in the world have been annotated with red straight lines in the corrected images, showing strong vanishing points. (a)–(d) and (g) buildings, (e) a bedroom, (f) a solar power plant, and (h) the ceiling of a corridor.

For the quantitative evaluation, we compared the proposed method to Zhang's method5 and Alvarez's method. Zhang's method has to use a calibration pattern (a checkerboard with black-and-white squares) to estimate the camera's intrinsic parameters; therefore, its process takes a much longer time. For the comparison of Alvarez's method and the proposed method, the same three distorted lines were used. The real image, corrected images, and the corresponding 1-D Hough transforms are presented in Fig. 6. As expected, the proposed method outperforms Zhang's method and Alvarez's method in terms of visual quality. Compared to Zhang's method, which involves camera calibration, and Alvarez's method, which requires manual intervention to select distorted straight lines, the proposed method is much faster in terms of processing time. The probability distribution in the 1-D θ Hough space in Fig. 6(d) is much more uniform at 0 deg and 180 deg than those in Figs. 6(b) and 6(c), which means that the proposed method gives a more satisfactory result in removing image radial distortion.

Fig. 6. (a) Real image and corresponding 1-D Hough transforms. (b) Corrected image of (a) generated by Zhang's method and corresponding 1-D Hough transforms. (c) Corrected image of (a) generated by Alvarez's method and corresponding 1-D Hough transforms. (d) Corrected image of (a) generated by the proposed method and corresponding 1-D Hough transforms.

4. Conclusions

In this paper, we proposed an approach for correcting image radial distortion caused by the lens. The method works on a single image and does not require a special calibration pattern. Experimental results have shown significant effectiveness in correcting radial distortion in both synthetic and real images. The key contributions of the study can be summarized in three aspects. (1) The proposed method is accurate and robust in estimating radial distortion. It is extremely useful in many applications, particularly those where human-made environments contain abundant lines. Although the proposed method requires at least three distorted lines residing in an image, it can cope with situations in which fewer lines are found by adding more images taken by the same camera at different capturing angles; as long as a line is present in the scene, the proposed method is applicable. (2) The quantitative evaluation of the estimated radial distortion parameters has been achieved by the defined measure of Hough entropy, based on the probability distribution in the 1-D θ Hough space. (3) The "Taubin fit" technique has shown its positive effect in providing the initial guess of an arc's parameters. It significantly improves the convergence rate of the LM iterative nonlinear least squares method used to calculate an arc's parameters.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 51575388).

References

1. J. H. Brito et al., "Radial distortion self-calibration," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR '13), 1368–1375 (2013). http://dx.doi.org/10.1109/CVPR.2013.180
2. M. R. Bax and R. Shahidi, "Real-time lens distortion correction: speed, accuracy and efficiency," Opt. Eng. 53(11), 113103 (2014). http://dx.doi.org/10.1117/1.OE.53.11.113103
3. J. Wang et al., "A new calibration model of camera lens distortion," Pattern Recognit. 41(2), 607–615 (2008). http://dx.doi.org/10.1016/j.patcog.2007.06.012
4. F. Wu et al., "Deep space exploration panoramic camera calibration technique based on circular markers," Acta Opt. Sin. 33(11), 1115002 (2013). http://dx.doi.org/10.3788/AOS
5. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
6. A. Wang, T. Qiu and L. Shao, "A simple method of radial distortion correction with centre of distortion estimation," J. Math. Imaging Vision 35(3), 165–172 (2009). http://dx.doi.org/10.1007/s10851-009-0162-1
7. F. Devernay and O. Faugeras, "Straight lines have to be straight," Mach. Vision Appl. 13(1), 14–24 (2001). http://dx.doi.org/10.1007/PL00013269
8. Z. Kukelova and T. Pajdla, "A minimal solution to radial distortion autocalibration," IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2410–2422 (2011). http://dx.doi.org/10.1109/TPAMI.2011.86
9. A. W. Fitzgibbon, "Simultaneous linear estimation of multiple view geometry and lens distortion," in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '01), 125–132 (2001). http://dx.doi.org/10.1109/CVPR.2001.990465
10. D. Claus and A. W. Fitzgibbon, "A rational function lens distortion model for general cameras," in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '05), 213–219 (2005). http://dx.doi.org/10.1109/CVPR.2005.43
11. G. P. Stein, "Lens distortion calibration using point correspondences," in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 602–608 (1997). http://dx.doi.org/10.1109/CVPR.1997.609387
12. B. Micusik and T. Pajdla, "Structure from motion with wide circular field of view cameras," IEEE Trans. Pattern Anal. Mach. Intell. 28(7), 1135–1149 (2006). http://dx.doi.org/10.1109/TPAMI.2006.151
13. R. Hartley and S. B. Kang, "Parameter-free radial distortion correction with center of distortion estimation," IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1309–1321 (2007). http://dx.doi.org/10.1109/TPAMI.2007.1147
14. S. Ramalingam, P. Sturm and S. K. Lodha, "Generic self-calibration of central cameras," Comput. Vision Image Understanding 114(2), 210–219 (2010). http://dx.doi.org/10.1016/j.cviu.2009.07.007
15. B. Prescott and G. F. McLean, "Line-based correction of radial lens distortion," Graphical Models Image Process. 59(1), 39–47 (1997). http://dx.doi.org/10.1006/gmip.1996.0407
16. M. Ahmed and A. Farag, "Nonmetric calibration of camera lens distortion: differential methods and robust estimation," IEEE Trans. Image Process. 14(8), 1215–1230 (2005). http://dx.doi.org/10.1109/TIP.2005.846025
17. J.-Y. Bouguet, "Camera calibration toolbox for MATLAB," (2015), http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed October 2016).
18. D. Li, G. Wen and S. Qiu, "Cross-ratio-based line scan camera calibration using a planar pattern," Opt. Eng. 55(1), 014104 (2016). http://dx.doi.org/10.1117/1.OE.55.1.014104
19. F. C. M. Alanis and J. A. M. Rodriguez, "Self-calibration of vision parameters via genetic algorithms with simulated binary crossover and laser line projection," Opt. Eng. 54(5), 053115 (2015). http://dx.doi.org/10.1117/1.OE.54.5.053115
20. J. P. Barreto, "A unifying geometric representation for central projection systems," Comput. Vision Image Understanding 103(3), 208–217 (2006). http://dx.doi.org/10.1016/j.cviu.2006.06.003
21. F. Bukhari and M. N. Dailey, "Automatic radial distortion estimation from a single image," J. Math. Imaging Vision 45(1), 31–45 (2013). http://dx.doi.org/10.1007/s10851-012-0342-2
22. L. Alvarez et al., "An algebraic approach to lens distortion by line rectification," J. Math. Imaging Vision 35(1), 36–50 (2009). http://dx.doi.org/10.1007/s10851-009-0153-2
23. L. Alvarez, L. Gomez and J. R. Sendra, "Algebraic lens distortion model estimation," Image Process. On Line 1, 1–10 (2010). http://dx.doi.org/10.5201/ipol.2010.ags-alde
24. M. Aleman-Flores et al., "Automatic lens distortion correction using one-parameter division models," Image Process. On Line 4, 327–343 (2014). http://dx.doi.org/10.5201/ipol.2014.106
25. M. Aleman-Flores et al., "Line detection in images showing significant lens distortion and application to distortion correction," Pattern Recognit. Lett. 36, 261–271 (2014). http://dx.doi.org/10.1016/j.patrec.2013.06.020
26. D. Santana-Cedres et al., "Invertibility and estimation of two-parameter polynomial and division lens distortion models," SIAM J. Imaging Sci. 8(3), 1574–1606 (2015). http://dx.doi.org/10.1137/151006044
27. C. Herrera, J. Kannala and J. Heikkila, "Joint depth and color camera calibration with distortion correction," IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 2058–2064 (2012). http://dx.doi.org/10.1109/TPAMI.2012.125
28. J. Smisek, M. Jancosek and T. Pajdla, "3D with Kinect," in IEEE Int. Conf. on Computer Vision Workshops (ICCV Workshops '11), 1154–1160 (2011). http://dx.doi.org/10.1109/ICCVW.2011.6130380
29. D. C. Brown, "Decentering distortion of lenses," Photometric Eng. 32(3), 444–462 (1966).
30. D. C. Brown, "Close-range camera calibration," Photogramm. Eng. 37(8), 855–866 (1971).
31. J. G. Fryer and D. C. Brown, "Lens distortion for close-range photogrammetry," Photogramm. Eng. Remote Sens. 52(1), 51–58 (1986).
32. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Rob. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109
33. T. A. Clarke and J. G. Fryer, "The development of camera calibration methods and models," Photogrammetric Rec. 16(91), 51–66 (1998). http://dx.doi.org/10.1111/phor.1998.16.issue-91
34. G. Taubin, "Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation," IEEE Trans. Pattern Anal. Mach. Intell. 13(11), 1115–1138 (1991). http://dx.doi.org/10.1109/34.103273
35. N. Chernov, Circular and Linear Regression: Fitting Circles and Lines by Least Squares, Chapman & Hall/CRC, Boca Raton, Florida (2010).
36. R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Commun. ACM 15(1), 11–15 (1972). http://dx.doi.org/10.1145/361237.361242
37. L. Alvarez, L. Gómez and J. R. Sendra, "Algebraic lens distortion model estimation," online demo (2015), http://demo.ipol.im/demo/ags_algebraic_lens_distortion_estimation/ (accessed September 2016).
38. Image Processing On Line, http://www.ipol.im/pub/art/ (accessed April 2016).

Biography

Fanlu Wu received his BS degree from the School of Opto-Electronic Engineering, Changchun University of Science and Technology, China, in 2011 and his MS degree from the University of the Chinese Academy of Sciences, China, in 2014. He is currently pursuing his PhD in the School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, China. His research interests include camera calibration, image mosaics, and image super-resolution reconstruction.

Hong Wei received her PhD from Birmingham University in 1996. She worked as a postdoctoral research assistant on a Hewlett Packard sponsored project, high-resolution CMOS camera systems. She also worked as a research fellow on an EPSRC-funded Faraday project, model from movies. She joined the University of Reading in 2000. Her current research interest includes intelligent computer vision and its applications in remotely sensed images and face recognition (biometric).

Xiangjun Wang received his BS, MS, and PhD degrees in precision measurement technology and instruments from Tianjin University, China, in 1980, 1985, and 1990, respectively. Currently, he is a professor and director of the precision measurement system research group at Tianjin University. His research interests include photoelectric sensors and testing, computer vision, image analysis, MOEMS, and MEMS.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Fanlu Wu, Hong Wei, and Xiangjun Wang "Correction of image radial distortion based on division model," Optical Engineering 56(1), 013108 (24 January 2017). https://doi.org/10.1117/1.OE.56.1.013108
Received: 14 October 2016; Accepted: 5 January 2017; Published: 24 January 2017