Open Access
27 July 2017 Operator-based homogeneous coordinates: application in camera document scanning
Abstract
An operator-based approach for the study of homogeneous coordinates and projective geometry is proposed. First, some basic geometrical concepts and properties of the operators are investigated in the one- and two-dimensional cases. Then, the pinhole camera model is derived, and a simple method for homography estimation and camera calibration is explained. The usefulness of the analyzed theoretical framework is exemplified by addressing the perspective correction problem for a camera document scanning application. Several experimental results are provided for illustrative purposes. The proposed approach is expected to provide practical insights for inexperienced students in camera calibration, computer vision, and optical metrology, among other areas.

1.

Introduction

Projective geometry is an important topic in computer vision because it provides a useful camera imaging model and its fundamental properties.1 Some applications of this topic are found in camera motion,2 camera calibration,3,4 pose estimation for augmented reality,5 perspective correction,6 and three-dimensional (3-D) surface imaging7 among others.

Theoretical concepts of projective geometry are analyzed simply and elegantly using homogeneous coordinates.8,9 However, projective geometry is commonly presented in abstract form, leaving a gap in how to apply it to computer vision problems.10 Moreover, homogeneous coordinates are often used with a notation that masks basic geometrical aspects and may confuse inexperienced readers.11

In this paper, a simple and intuitive approach to expose some useful concepts of projective geometry is presented. For this, an alternative notation for homogeneous coordinates based on operators is suggested. To highlight the relevance of this topic in computer vision, the presentation is motivated by a specific problem, namely, perspective correction for a “camera scanner” application.

First, the proposed operators for homogeneous coordinates are defined in Sec. 2. Next, some basic concepts of projective geometry in the one- (1-D) and two-dimensional (2-D) cases are presented in Secs. 3 and 4, respectively. Then, the pinhole camera model is derived in Sec. 5. A perspective correction method, useful for camera document scanning, is described in Sec. 6. Finally, the conclusions of this work are given in Sec. 7. The paper is complemented with two appendices. Appendix A presents the direct linear transformation method for homography matrix estimation. Finally, a simple method to obtain the camera parameters from homographies is explained in Appendix B.

2.

Definition of Operators

2.1.

Operators H and S

A point in an n-dimensional space will be represented by a vector of the form

Eq. (1)

x = [x_1, x_2, …, x_n]^T,
where [·]^T denotes the transpose. The homogeneous coordinates of the point are obtained by appending an extra entry to x with a value equal to unity. The result is the (n+1)-dimensional vector

Eq. (2)

H[x] = [x^T, 1]^T,
where H will be referred to as the homogeneous operator.

The last entry of a homogeneous vector is known as the scale and will be recovered by the scale operator S. This operator returns the last entry of any given vector. For instance, for the vectors in Eqs. (1) and (2), we have

Eq. (3)

x_n = S[x], and 1 = S[H[x]].

The operator H sets the scale to unity. Another operator that sets the scale to zero is also needed. For this, we define the operator

Eq. (4)

H_0[x] = [x^T, 0]^T.
Note that the operator H_0 affects neither the direction nor the norm of x. In projective geometry, the points represented by homogeneous coordinates of the form

Eq. (5)

H_0[x], x ≠ 0_n,
are known as ideal points, where 0_n = [0, …, 0]^T is the n-dimensional zero vector.

The operators H and H_0 can be considered as two particular cases of a more general operator defined as

Eq. (6)

H_s[x] = [x^T, s]^T,
where s is any scalar.

The procedure of appending an extra entry to vectors is reversed by returning the given vector without its last entry. For this, we define the inverse operator H_0^{-1} as follows. For any (n+1)-dimensional vector

Eq. (7)

y = [y_1, y_2, …, y_n, y_{n+1}]^T,
the operator H_0^{-1} is defined as

Eq. (8)

H_0^{-1}[y] = [y_1, y_2, …, y_n]^T.
Based on the operator H_0^{-1}, the inverse of the operator H_s for s ≠ 0 is defined as

Eq. (9)

H_s^{-1}[y] = (s/S[y]) H_0^{-1}[y].
In particular, the operator H_1^{-1} (written simply as H^{-1}) will be referred to as the inverse homogeneous operator.

The inverse H_0^{-1} is a linear operator. That is, for any two scalars λ_1 and λ_2, we have

Eq. (10)

H_0^{-1}[λ_1 y_1 + λ_2 y_2] = λ_1 H_0^{-1}[y_1] + λ_2 H_0^{-1}[y_2].
On the other hand, the operator H_s^{-1} is invariant to nonzero scalar multiplication of its argument. That is,

Eq. (11)

H_s^{-1}[λy] = H_s^{-1}[y], s, λ ≠ 0.
The operators H_s and H_s^{-1} can be expressed in terms of the homogeneous operator and its inverse, namely

Eq. (12)

H_s[x] = Ξ_s H[x], and H_s^{-1}[y] = H^{-1}[Ξ_s^{-1} y], s ≠ 0,
where

Eq. (13)

Ξ_s = [ I_n  0_n ; 0_n^T  s ],
with I_n being the n×n identity matrix.
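The operators defined so far are straightforward to implement. The following NumPy sketch (the function names are ours, not from the paper) encodes H_s, S, H_0^{-1}, and H_s^{-1}, and checks the scale-invariance property of Eq. (11):

```python
import numpy as np

def H(x, s=1.0):
    """Homogeneous operator H_s: append the scale s to x (H = H_1; H_0 sets scale 0)."""
    return np.append(np.asarray(x, dtype=float), s)

def S(y):
    """Scale operator S: return the last entry of y."""
    return np.asarray(y, dtype=float)[-1]

def H0_inv(y):
    """Inverse of H_0: drop the last entry of y."""
    return np.asarray(y, dtype=float)[:-1]

def H_inv(y, s=1.0):
    """Inverse operator H_s^{-1}[y] = (s/S[y]) H_0^{-1}[y], Eq. (9)."""
    return (s / S(y)) * H0_inv(y)

x = np.array([3.0, 4.0])
y = H(x)                               # [3, 4, 1]
assert np.allclose(H_inv(5.0 * y), x)  # invariance to scaling, Eq. (11)
```

The invariance check is the key fact exploited throughout the paper: any nonzero multiple of a homogeneous vector represents the same point.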

2.2.

Projection Operator P

In general terms, the homogeneous operator carries the representation of a point from n- to (n+1)-dimensional vectors while the inverse homogeneous operator returns the representation from (n+1)- to n-dimensional vectors. An important transformation emerges when, in the (n+1)-dimensional space, a linear mapping is applied. Mathematically, we describe this transformation by the projection operator defined as

Eq. (14)

P_M[x] = H^{-1}[M H[x]],
where M is an (n+1)×(n+1) matrix. A generalized version of the projection operator is obtained using H_s and its inverse as

Eq. (15)

P_{M,s}[x] = H_s^{-1}[M H_s[x]].
From Eq. (11), it follows that, for s ≠ 0, the operator P_{M,s} is invariant to nonzero scalar multiplication of the matrix M; that is

Eq. (16)

P_{λM,s}[x] = P_{M,s}[x], s, λ ≠ 0.
Let b = P_{M,s}[a] with M being a nonsingular matrix. From Eq. (15), we have that a = H_s^{-1}[M^{-1} H_s[b]]. Therefore, the inverse operator P_{M,s}^{-1} is given by

Eq. (17)

P_{M,s}^{-1}[x] = P_{M^{-1},s}[x], det M ≠ 0,
where det M denotes the determinant of M.

Using the equalities in Eq. (12), the operator P_{M,s} can be expressed in terms of the projection operator, namely

Eq. (18)

P_{M,s}[x] = P_{Ξ_s^{-1} M Ξ_s}[x], s ≠ 0.
Note that M and Ξ_s^{-1} M Ξ_s are similar matrices. Some useful equalities of the defined operators are summarized in Table 1. To follow the rest of this paper more easily, the reader is encouraged to prove all the equalities in Table 1.
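As a minimal numeric sketch (in NumPy, with names of our choosing), the generalized projection operator of Eq. (15) and properties (P16) and (P17) can be checked directly:

```python
import numpy as np

def P(M, x, s=1.0):
    """Projection operator P_{M,s}[x] = H_s^{-1}[M H_s[x]], Eq. (15)."""
    x = np.asarray(x, dtype=float)
    y = M @ np.append(x, s)
    return (s / y[-1]) * y[:-1]

M = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0, 1.0]])
x = np.array([1.0, 2.0])

# Invariance to scaling of M, property (P16):
assert np.allclose(P(3.0 * M, x), P(M, x))
# Inversion via M^{-1}, property (P17):
assert np.allclose(P(np.linalg.inv(M), P(M, x)), x)
```

This single function is reused conceptually throughout the paper: the 1-D and 2-D projections, the camera model, and the homography are all instances of P with a particular matrix M.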

Table 1

Some useful equalities of the operators S, H_s, and P_{M,s}. In all cases, we consider λ ≠ 0; γ_1 and γ_2 are any scalars; x is an n-dimensional vector as given in Eq. (1); Ξ_s is the matrix defined in Eq. (13); y = λH_s[x]; M is a matrix of size (n+1)×(n+1); and W is a matrix of size m×(n+1).

(P1) H_s^{-1}[H_s[x]] = x, s ≠ 0
(P2) H_s[H_s^{-1}[y]] = (s/S[y]) y, s ≠ 0
(P3) x = H_s^{-1}[y] ⇔ (S[y]/s) H_s[x] = y, s ≠ 0
(P4) H_s[λx] = λH_{s/λ}[x]
(P5) λH_s[x] = H_{λs}[λx]
(P6) H_s[W H_s[x]] = [W; H[0_n]^T] H_s[x]
(P7) H_s[x_1 ± x_2] = H_{s_1}[x_1] ± H_{s_2}[x_2], s_1 ± s_2 = s
(P8) H_s^{-1}[λy] = H_s^{-1}[y], s ≠ 0
(P9) λH_s^{-1}[y] = H_{λs}^{-1}[y], s ≠ 0
(P10) H_0^{-1}[γ_1 y_1 + γ_2 y_2] = γ_1 H_0^{-1}[y_1] + γ_2 H_0^{-1}[y_2]
(P11) S[γ_1 x_1 + γ_2 x_2] = γ_1 S[x_1] + γ_2 S[x_2]
(P12) H_s^{-1}[y_1 + y_2] = (1/S[y_1 + y_2]) (S[y_1] H_s^{-1}[y_1] + S[y_2] H_s^{-1}[y_2]), s ≠ 0
(P13) H_s^{-1}[y_1 + y_2] − H_s^{-1}[y_1] = (S[y_2]/S[y_1 + y_2]) (H_s^{-1}[y_2] − H_s^{-1}[y_1]), s ≠ 0
(P14) H_s[x] = Ξ_s H[x]
(P15) H_s^{-1}[y] = H^{-1}[Ξ_s^{-1} y], s ≠ 0
(P16) P_{λM,s}[x] = P_{M,s}[x], s ≠ 0
(P17) P_{M,s}^{-1}[x] = P_{M^{-1},s}[x], det M ≠ 0
(P18) x_2 = P_{M,s}[x_1] ⇔ P_{M^{-1},s}[x_2] = x_1, det M ≠ 0
(P19) P_{M,s}[λx] = λP_{M,s/λ}[x]
(P20) λP_{M,s}[x] = P_{M,λs}[λx]
(P21) P_{M_2,s}[P_{M_1,s}[x]] = P_{M_2 M_1,s}[x]
(P22) P_{I_{n+1},s}[x] = x
(P23) P_{M,s}[x] = P_{Ξ_s^{-1} M Ξ_s}[x], s ≠ 0
(P24) H_s[P_{M,s}[x]] = (s/S[M H_s[x]]) M H_s[x], s ≠ 0
(P25) P_{M,s}[H_s^{-1}[y]] = H_s^{-1}[M y], s ≠ 0

In the following sections, the defined operators are studied from an intuitive geometrical approach for the 1-D and 2-D cases. Then, the usefulness of this theoretical framework is illustrated by addressing the perspective correction problem for camera document scanning.

3.

One-Dimensional Space

The 1-D real space can be represented as a line as shown in Fig. 1(a). In this space, a point at a finite distance from the origin is represented by a real number x; otherwise, the point is represented by the symbol ∞.

Fig. 1

(a) The real line as the 1-D Euclidean space. (b) The 1-D space represented by the projective line (y=1) in a 2-D Euclidean space.


Alternatively, the 1-D space can be represented by the projective line y=1 in the xy-plane as shown in Fig. 1(b). Thus, the coordinate x of a point in the line becomes the vector

Eq. (19)

y = H[x] = [x, 1]^T.
The coordinate x can be recovered from its homogeneous version y as the intersection between the line y=1 and the line with direction H[x] passing through the origin, as shown in Fig. 1(b). This is described mathematically as

Eq. (20)

x = H^{-1}[y].
Note that the result is invariant to multiplication of y by a nonzero scalar [e.g., λ and γ as shown in Fig. 1(b)] because the intersection between the lines is unaltered. In other words, H^{-1}[λy] = H^{-1}[γy] = H^{-1}[y], as stated by Eq. (11).

3.1.

Ideal Point

Homogeneous coordinates provide a different form to identify points of the real line. Consider the unit vector

Eq. (21)

u(θ) = [sin θ, cos θ]^T.
Thus, the homogeneous representation of x given by Eq. (19) becomes

Eq. (22)

H[x] = λu(θ),
where λ² = 1 + x² and tan θ = x. From Eq. (22), we obtain

Eq. (23)

x = H^{-1}[u(θ)].
Since a vector u and its opposite −u represent the same point (i.e., H^{-1}[u] = H^{-1}[−u]), all points of the real line at a finite distance from the origin are associated with a unique angle θ in the open interval (−π/2, π/2); i.e., the vectors u different from [1, 0]^T and [−1, 0]^T in quadrants I and II, as shown in Fig. 2.

Fig. 2

Representation of points of the real line using homogeneous coordinates. Opposite homogeneous vectors represent the same point; thus, there is a single point at infinity, given by u(π/2) = [1, 0]^T.


Intuitively, the real line in the Euclidean representation has two points at infinity, namely −∞ and +∞. However, in projective geometry, the real line has only a single point at infinity, given by the homogeneous coordinates

Eq. (24)

ψ = [1, 0]^T,
which is associated with u(π/2), as shown in Fig. 2. It could be argued that π/2 corresponds to +∞ while −π/2 corresponds to −∞. However, note that u(−π/2) = −ψ is the opposite of ψ. Hence, they represent the same point.

Note that H^{-1}[ψ] = 1/0 is consistent with the notion that ψ represents a point at an infinite distance from the origin. According to the concepts of projective geometry, the vector ψ represents an ideal point, see Eq. (5).

3.2.

One-Dimensional Projection

The line y=0 can be transformed to any other line by applying a rotation Q = [q_1, q_2] and a translation s. Thus, a point in the line y=0, represented by the scalar x, becomes a point in the xy-plane given by the vector

Eq. (25)

p = Q H_0[x] + s = Π_1 H[x],
where the matrix Π_1 will be referred to as the reference line parameters and has the explicit form

Eq. (26)

Π_1 = [q_1, s].
The first column of Π_1 and the determinant det Π_1 provide the direction of the reference line and its distance from the origin, respectively.

If the matrix Π_1 is singular, the vectors q_1 and s are collinear. In this case, the origin is a point of the transformed line (the distance of the line from the origin is zero). The matrix Π_1 is nonsingular when q_1 and s are linearly independent. In this case, the origin is not a point of the transformed line.

Let p in Eq. (25) be the homogeneous coordinates of a point α in the line. Thus, we obtain the 1-D projection

Eq. (27)

α = H^{-1}[p] = H^{-1}[Π_1 H[x]] = P_{Π_1}[x].
The transformations by P_{Π_1}[x] and its inverse P_{Π_1}^{-1}[α] are shown in Fig. 3.
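A quick numeric illustration of the 1-D projection of Eq. (27), with an arbitrarily chosen rotation angle and translation (the values below are ours, for illustration only):

```python
import numpy as np

theta = np.pi / 6            # rotation of the reference line (arbitrary)
s = np.array([0.5, 2.0])     # translation of the reference line (arbitrary)
q1 = np.array([np.cos(theta), np.sin(theta)])

Pi1 = np.column_stack([q1, s])       # reference line parameters, Eq. (26)

def project_1d(x):
    """alpha = H^{-1}[Pi1 H[x]], Eq. (27)."""
    p = Pi1 @ np.array([x, 1.0])
    return p[:-1] / p[-1]            # a 1-vector holding the coordinate alpha

def unproject_1d(alpha):
    """Inverse projection using Pi1^{-1}, see Eq. (17); requires det Pi1 != 0."""
    p = np.linalg.inv(Pi1) @ np.array([float(alpha), 1.0])
    return p[:-1] / p[-1]

x = 1.25
alpha = project_1d(x)
assert np.allclose(unproject_1d(alpha[0]), x)   # round trip recovers x
```

Since det Π_1 ≠ 0 for this choice of q_1 and s, the projection is invertible, matching the discussion above.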

Fig. 3

The 1-D projection.


4.

Two-Dimensional Space

4.1.

Points and Lines in the Plane

Any point in the 2-D space can be represented as the vector

Eq. (28)

x = [x_1, x_2]^T.
Moreover, the point x can be represented by its homogeneous coordinates

Eq. (29)

H[x] = [x_1, x_2, 1]^T,
as shown in Fig. 4(a). Note that H takes the 2-D vector x (in the plane z=0) and converts it to the 3-D vector H[x], where x is unaltered but now lies in the projective plane z=1. It is worth mentioning that the vector x can be recovered from λH[x] as the point of intersection of the line with points 0_3 and λH[x], see Eq. (11). That is,

Eq. (30)

x = H^{-1}[λH[x]], λ ≠ 0.
A line in the xy-plane can be written as the homogeneous equation

Eq. (31)

l_1 x_1 + l_2 x_2 + l_3 = 0,
where l_1, l_2, and l_3 are coefficients. Using homogeneous coordinates, Eq. (31) becomes

Eq. (32)

l^T H[x] = 0,
where x = [x_1, x_2]^T is a point of the line and l = [l_1, l_2, l_3]^T is the vector that defines the line. Equation (32) shows that l and H[x] are orthogonal vectors. Note that the vector l is unique up to scale, i.e., the vectors l and λl, with λ ≠ 0, represent the same line.

Fig. 4

(a) The 2-D space represented by the projective plane (z=1). (b) Parallel lines in the plane.


Let x_1 and x_2 be two different points in the xy-plane. The vector l of the line passing through x_1 and x_2 can be obtained by the cross product as

Eq. (33)

l = H[x_1] × H[x_2].
By definition of the cross product, the vector l is orthogonal to H[x_1] and H[x_2]. Therefore, these vectors satisfy Eq. (32).

Consider two lines defined by the vectors l_1 and l_2. If x is the intersection point of these lines, then H[x] is orthogonal to both l_1 and l_2. That is,

Eq. (34)

λH[x] = l_1 × l_2,
where λ = S[l_1 × l_2]. Therefore, the intersection point of the lines l_1 and l_2 is

Eq. (35)

x = H^{-1}[l_1 × l_2].
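Equations (33) and (35) translate directly into cross products; a small NumPy sketch (the helper names are ours):

```python
import numpy as np

def homog(x):
    """H[x]: append a unit scale."""
    return np.append(np.asarray(x, dtype=float), 1.0)

def line_through(x1, x2):
    """Vector l of the line through two points, Eq. (33)."""
    return np.cross(homog(x1), homog(x2))

def intersection(l1, l2):
    """Intersection point of two lines, Eq. (35); blows up for parallel lines."""
    p = np.cross(l1, l2)
    return p[:-1] / p[-1]

# The line through (0, 1) and (2, 3) meets the line through (0, 3) and (3, 0) at (1, 2):
l_a = line_through([0.0, 1.0], [2.0, 3.0])
l_b = line_through([0.0, 3.0], [3.0, 0.0])
assert np.allclose(intersection(l_a, l_b), [1.0, 2.0])
```

For parallel lines, the last entry of the cross product is zero; the division then produces the ideal points discussed in Sec. 4.3.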

4.2.

Parallel Lines

Two different lines are parallel if their defining vectors are of the form

Eq. (36)

l = [l_1, l_2, l_3]^T, l̄ = λ[l_1, l_2, l_3 + δ]^T,
where λ, δ ≠ 0. This can be verified as follows. Consider two parallel lines in the plane with points α and β given, respectively, by

Eq. (37)

α = a + γd, and β = a + γd + δt,
where γ is a parameter, δ ≠ 0 is a constant, a is a reference point, d is a unit vector with the direction of the line, and t is a unit vector orthogonal to d, i.e.,

Eq. (38)

H_0[t] × H_0[d] = H[0_2],
as shown in Fig. 4(b). Two points of each line are
α_1 = a + γ_1 d, β_1 = a + γ̄_1 d + δt, α_2 = a + γ_2 d, β_2 = a + γ̄_2 d + δt.
Thus, the vector of the line with points α is

Eq. (39)

l = H[α_1] × H[α_2] = (γ_2 − γ_1) H[a] × H_0[d],
or, since the line is unaffected by scaling of its vector,

Eq. (40)

l = H[a] × H_0[d].
Similarly, the vector of the line with points β is

Eq. (41)

l̄ = H[β_1] × H[β_2] = (γ̄_2 − γ̄_1)(l + δH[0_2]) = λ(l + δH[0_2]),
where λ = γ̄_2 − γ̄_1. Therefore, the vectors l and l̄ given in Eq. (36) represent parallel lines.

It is worth mentioning that, if l is the vector of a line with direction d [see Eq. (40)], then the vector H_0^{-1}[l] is orthogonal to d, namely

Eq. (42)

d^T H_0^{-1}[l] = d^T H_0^{-1}[H[a] × H_0[d]] = H_0[d]^T (H[a] × H_0[d]) = 0.

4.3.

Ideal Points and the Line at Infinity

In Euclidean geometry, two parallel lines in the plane do not intersect. However, in projective geometry, two different lines always intersect at a point. Consider the parallel lines given by the vectors in Eq. (36). Using Eq. (35), the intersection point is

Eq. (43)

H^{-1}[l × l̄] = H^{-1}[H[0_2] × l] = H^{-1}[ψ],
where

Eq. (44)

ψ = [l_2, −l_1, 0]^T
is the point of intersection in homogeneous coordinates. Note that H^{-1}[ψ] = [l_2/0, −l_1/0]^T provides the insight that parallel lines intersect at a point at infinity. As in the 1-D case, the vector ψ represents an ideal point, see Eq. (5).

The vector ψ is associated with the direction d of the line l. This is verified by taking into account that H_0^{-1}[l] is orthogonal to d [Eq. (42)] as well as to H_0^{-1}[ψ] (i.e., H_0^{-1}[ψ]^T H_0^{-1}[l] = 0); then

Eq. (45)

H_0^{-1}[ψ] = λd,
where λ is some nonzero scalar.

All ideal points given by Eq. (44) are collinear. The vector of such a line, known as the line at infinity, is

Eq. (46)

l_∞ = H[0_2] = [0, 0, 1]^T.
This can be easily verified by l_∞^T ψ = 0, as required by Eq. (32).

The ideal point ψ in Eq. (44) was obtained as the intersection of two parallel lines l and l̄. However, intuition suggests that the same result could be obtained by computing the intersection of the line l and the line at infinity l_∞. In fact, we have that

Eq. (47)

ψ = l × l_∞.
Thus, using Eq. (45), the direction d of any line l is given by

Eq. (48)

λd = H_0^{-1}[l × l_∞],
where λ is a nonzero scale factor. For this reason, the line l_∞ is interpreted as the set of directions of lines in the plane.

Similar to the 1-D case, homogeneous coordinates provide a different form to identify points of the plane. Consider the unit vector

Eq. (49)

v = [sin θ cos ϕ, sin θ sin ϕ, cos θ]^T,
where θ and ϕ are the polar and azimuth angles, respectively. Thus, the homogeneous coordinates of each point x = [x_1, x_2]^T of the plane are given by

Eq. (50)

H[x] = λv,
where λ² = 1 + x_1² + x_2². From Eq. (50), the following relation holds

Eq. (51)

x = H^{-1}[v(θ, ϕ)].

The points of the plane at a finite distance from the origin are given by v(θ, ϕ) with θ ∈ [0, π/2) and ϕ ∈ [−π, π), i.e., the upper hemisphere of the unit sphere, see Fig. 5. The points of the plane at an infinite distance from the origin are parameterized by θ = π/2 and ϕ ∈ (−π/2, π/2]. These points have the homogeneous coordinates

Eq. (52)

v = [cos ϕ, sin ϕ, 0]^T,
see Eq. (44). That is, the ideal points are represented by half of the equator of the unit sphere, see the yellow line in Fig. 5.

Fig. 5

Representation of points of the plane using homogeneous coordinates v. The upper hemisphere represents points of the plane at a finite distance from the origin, and the half of the equator (yellow semicircle) represents points at infinity.


4.4.

Two-Dimensional Projection

Any plane in the 3-D space can be obtained from the plane z=0 by a rotation Q = [q_1, q_2, q_3] and a translation s. Thus, the points represented by x = [x_1, x_2]^T become

Eq. (53)

p = Q H_0[x] + s = Π_2 H[x],
where the matrix Π_2 will be referred to as the reference plane parameters and has the explicit form

Eq. (54)

Π_2 = [q_1, q_2, s].
The cross product of the first two columns of Π_2 and det Π_2 provide the normal to the reference plane and its distance from the origin, respectively.

The matrix Π_2 is singular when q_1, q_2, and s are coplanar. In this case, the origin 0_3 is a point of the transformed plane (the distance of the reference plane from the origin is zero). Otherwise, Π_2 is nonsingular.

Let p in Eq. (53) be the homogeneous coordinates of a point α in the projective plane. Thus, the relation between the points α and x is given by the 2-D projection P_{Π_2}, namely

Eq. (55)

α = H^{-1}[p] = H^{-1}[Π_2 H[x]] = P_{Π_2}[x].
The projection P_{Π_2}[x] and its inverse P_{Π_2}^{-1}[α] are shown in Fig. 6.

Fig. 6

The 2-D projection.


4.5.

Properties of the Two-Dimensional Projection

As shown in Fig. 6, the 2-D projection P_{Π_2} does not preserve several geometrical properties, e.g., shape, angles, lengths, and ratios of lengths. Fortunately, some geometrical properties are preserved. Particularly, we are interested in three of them that are very useful in practice: straightness, line–line intersection, and parallelism of the normal and line-at-infinity vectors.

4.5.1.

Straightness property

This property states that a 2-D projection transforms lines to lines.12 This can be shown as follows. Consider a line with vector l and points x, that is,

Eq. (56)

0 = l^T H[x].
Next, the points x are transformed to α by Eq. (55). Solving Eq. (55) for x and substituting in Eq. (56), we obtain

Eq. (57)

0 = l^T H[P_{Π_2}^{-1}[α]] = (l^T Π_2^{-1} H[α]) / S[Π_2^{-1} H[α]],
or

Eq. (58)

0 = m^T H[α],
where

Eq. (59)

m = Π_2^{-T} l,
with Π_2^{-T} being the abbreviation of (Π_2^{-1})^T = (Π_2^T)^{-1}. In summary, the points x of a line l are transformed by P_{Π_2} to points α of a new line m.

4.5.2.

Line–line intersection

Preservation of the line–line intersection by a 2-D projection refers to the following. If

Eq. (60)

x_0 = H^{-1}[l_1 × l_2]
is the point where the lines l_1 and l_2 intersect, then

Eq. (61)

α_0 = P_{Π_2}[x_0]
is the intersection point of the lines

Eq. (62)

m_1 = Π_2^{-T} l_1 and m_2 = Π_2^{-T} l_2.
In fact, the lines m_1 and m_2 intersect at the point

Eq. (63)

α_0 = H^{-1}[m_1 × m_2] = H^{-1}[(Π_2^{-T} l_1) × (Π_2^{-T} l_2)] = H^{-1}[Π_2(l_1 × l_2)],
where the identity of the cross product

Eq. (64)

(Mu) × (Mv) = (det M) M^{-T}(u × v)
was applied. By solving Eq. (60) for l_1 × l_2 and substituting in Eq. (63), we obtain

Eq. (65)

α_0 = H^{-1}[S[l_1 × l_2] Π_2 H[x_0]] = H^{-1}[Π_2 H[x_0]] = P_{Π_2}[x_0].

4.5.3.

Parallelism of the normal and line at infinity vectors

The normal of the xy-plane and the vector l_∞ of the line at infinity are parallel. When the projection P_{Π_2} is applied, the normal q_3 of the reference plane (with parameters Π_2) and the new line at infinity m_∞ remain parallel, i.e., m_∞ = λq_3, λ ≠ 0. Indeed, the reference plane has the normal

Eq. (66)

q_3 = q_1 × q_2,
see Eq. (54), whereas the vector of the new line at infinity is

Eq. (67)

m_∞ = Π_2^{-T} l_∞ = λ(cof Π_2) l_∞ = λ[q_2 × s, s × q_1, q_1 × q_2] l_∞ = λq_3,
where λ = 1/det Π_2, and cof(·) denotes the cofactor matrix.

In the following section, the developed theoretical framework is applied to a real problem.

5.

Pinhole Camera Model

In practice, the imaging process is performed by a camera lens device as shown in Fig. 7(a). This device produces high quality images because of a complicated system of lenses that minimizes aberration and distortion. However, the imaging process can be modeled using a single thin lens as shown in Fig. 7(b). Moreover, the imaging model can be easily derived using the equivalent pinhole camera as shown in Fig. 7(c).

Fig. 7

(a) Illustration of a camera lens. (b) The imaging process modeled using a single thin lens. (c) A pinhole camera. The planes z = −f and z = f are the actual and conjugate image planes, respectively.


In the pinhole camera, the origin of a coordinate system is fixed at the pinhole and the z-axis is parallel to the optical axis. The plane z = −f, where f is the focal length, is the actual image plane. Note that the image is inverted; therefore, the x and y axes are reversed to describe the image as a magnified version of the object. The inversion of the axes is avoided using the conjugate image plane z = f, as shown in Fig. 7(c).

5.1.

Centered Pinhole Camera

A typical representation of a pinhole camera is shown in Fig. 8. The coordinate system O_c x_c y_c z_c is known as the camera reference frame. Let

Eq. (68)

p_c = [x_c, y_c, z_c]^T
be the coordinates of a point in the camera reference frame. The point p_c will be imaged in the plane z_c = f at the point

Eq. (69)

β = [β_x, β_y]^T,
where β will be referred to as the physical image coordinates. The pinhole projection model relates the vectors p_c and β by

Eq. (70)

[β_x, β_y, f]^T = (f/z_c)[x_c, y_c, z_c]^T.
Using homogeneous coordinates for the image point β, Eq. (70) can be rewritten as

Eq. (71)

β = H_f^{-1}[p_c] = H^{-1}[Ξ_f^{-1} p_c].
The image formed in the sensor of the camera is sampled as an array of pixels. Then, the physical coordinates β will be transformed to the pixel coordinates

Eq. (72)

μ = [u, v]^T,
which depend on the size of the pixel and the skew (diagonal distortion) as shown in Fig. 8. The sampling can be described as

Eq. (73)

u = (β_x + τ_x)/s_x + σβ_y, v = (β_y + τ_y)/s_y,
where s_x and s_y (with units of length) are the width and height of the pixel, respectively, τ = [τ_x, τ_y]^T is known as the principal point and represents the point (in the uv-reference frame) where the optical axis crosses the image plane, and σ is the skew factor (σ = 0 for most camera sensors). Equation (73) can be written as

Eq. (74)

[u, v, 1]^T = [ 1/s_x  σ  τ_x/s_x ; 0  1/s_y  τ_y/s_y ; 0  0  1 ][β_x, β_y, 1]^T,
or, using a compact notation,

Eq. (75)

μ = H^{-1}[S H[β]] = P_S[β],
where S is the sampling matrix given as

Eq. (76)

S = [ 1/s_x  σ  τ_x/s_x ; 0  1/s_y  τ_y/s_y ; 0  0  1 ].
Substituting Eq. (71) into Eq. (75), we obtain the image μ (in pixel coordinates) of the point p_c as

Eq. (77)

μ = P_S[H^{-1}[Ξ_f^{-1} p_c]] = H^{-1}[S Ξ_f^{-1} p_c] = H^{-1}[K p_c],
where K = S Ξ_f^{-1} is known as the matrix of intrinsic camera parameters, having the explicit form

Eq. (78)

K = [ 1/s_x  σ  τ_x/(f s_x) ; 0  1/s_y  τ_y/(f s_y) ; 0  0  1/f ].
Since det K = (s_x s_y f)^{-1} ≠ 0, the matrix K is nonsingular in any experimental case.

Fig. 8

The centered pinhole camera and sampling of the image plane.


Given a point μ (in pixel coordinates), the actual coordinates H_f[β] of an image point (physical coordinates on the image plane z = f) can be obtained from Eq. (75) as

Eq. (79)

H_f[β] = H_f[P_S^{-1}[μ]] = Ξ_f H[P_S^{-1}[μ]] = Ξ_f S^{-1} H[μ]/S[S^{-1} H[μ]] = K^{-1} H[μ],
where the equality S[S^{-1} H[μ]] = 1 was used.
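A short numeric sketch of Eqs. (77)–(79) follows, using made-up sensor parameters (the values below are illustrative and are not the calibration of Appendix B):

```python
import numpy as np

# Made-up camera parameters for illustration (lengths in mm).
f = 4.0                      # focal length
sx, sy = 0.002, 0.002        # pixel width and height
tau = np.array([1.6, 1.2])   # principal point
sigma = 0.0                  # skew (zero for most sensors)

K = np.array([[1/sx, sigma, tau[0]/(f*sx)],
              [0.0,  1/sy,  tau[1]/(f*sy)],
              [0.0,  0.0,   1/f]])          # intrinsic matrix, Eq. (78)

def image_of(pc):
    """Pixel coordinates mu = H^{-1}[K pc], Eq. (77)."""
    y = K @ np.asarray(pc, dtype=float)
    return y[:-1] / y[-1]

pc = np.array([0.1, -0.05, 10.0])   # a point in the camera frame
mu = image_of(pc)

# Back-projection onto the image plane z = f, Eq. (79): H_f[beta] = K^{-1} H[mu].
beta_h = np.linalg.solve(K, np.append(mu, 1.0))
assert np.isclose(beta_h[-1], f)                     # the recovered scale is f
assert np.allclose(beta_h[:2], (f/pc[2]) * pc[:2])   # pinhole model, Eq. (70)
```

The two assertions verify that the operator chain P_S ∘ H_f^{-1} collapses into the single matrix K, exactly as derived above.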

5.2.

Noncentered Pinhole Camera

Let us consider that the pinhole camera is at an arbitrary position and orientation with respect to a world coordinate system Oxyz as shown in Fig. 9. The position and orientation of the camera are defined by the vector t and the rotation matrix R, respectively. Let

Eq. (80)

p = [x, y, z]^T
be a point in the world coordinate system. Then, the point p is seen from the camera reference frame as

Eq. (81)

p_c = R^T(p − t) = L H[p],
where L is known as the matrix of extrinsic camera parameters, having the explicit form

Eq. (82)

L = [R^T, −R^T t].
By substituting Eq. (81) into Eq. (77), the complete imaging process by a noncentered pinhole camera is given as

Eq. (83)

μ = H^{-1}[K L H[p]] = H^{-1}[C H[p]],
where C = KL is the matrix of the camera.

Fig. 9

The noncentered pinhole camera.


5.3.

Homography Matrix

In general terms, Eq. (83) describes a transformation of points p of the 3-D space to points of the 2-D one. A very useful transformation is obtained when p represents points of a plane in the 3-D space. In this case, Eq. (83) is reduced to a transformation from the 2-D space to itself.

Consider that p represents the points of a plane in the 3-D space; mathematically, see Eq. (53),

Eq. (84)

p = Π H[ρ],
where ρ = [ρ_x, ρ_y]^T parameterizes the plane, Π = [q_1, q_2, s] is the matrix of the plane, q_1 and q_2 are columns of the rotation matrix Q = [q_1, q_2, q_3], and s is a translation vector. Next, the points p are transformed to μ by Eq. (83) as

Eq. (85)

μ = H^{-1}[C H[Π H[ρ]]] = H^{-1}[C [Π; H[0_2]^T] H[ρ]] = H^{-1}[G H[ρ]] = P_G[ρ],
where [Π; H[0_2]^T] denotes the matrix Π stacked over the row H[0_2]^T, and G is known as the homography matrix, with the explicit form

Eq. (86)

G = KL[Π; H[0_2]^T] = KR^T(Π − tH[0_2]^T) = KR^T Π̄,
where

Eq. (87)

Π̄ = Π − tH[0_2]^T = [q_1, q_2, s − t].
From Eqs. (79) and (85), a point ρ is imaged at the point with actual image coordinates

Eq. (88)

H_f[β] = K^{-1}H[μ] = K^{-1}H[P_G[ρ]] = K^{-1}GH[ρ]/λ = R^T Π̄ H[ρ]/λ,
where λ = S[GH[ρ]].

The homography matrix is singular when the pinhole is at a point of the reference plane. In any other case, det G = q_3^T(s − t)/(s_x s_y f) ≠ 0 and Eq. (85) can be inverted as

Eq. (89)

ρ = P_G^{-1}[μ].
The homography matrix is very useful for many computer vision tasks. In Appendix A, the direct linear transformation method for homography estimation is described.
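Applying a homography and inverting it [Eqs. (85) and (89)] reduce to a matrix–vector product plus a division by the scale; a sketch with an arbitrary nonsingular G chosen by us:

```python
import numpy as np

def apply_homography(G, rho):
    """mu = P_G[rho] = H^{-1}[G H[rho]], Eq. (85)."""
    y = G @ np.append(np.asarray(rho, dtype=float), 1.0)
    return y[:-1] / y[-1]

# An arbitrary nonsingular homography, for illustration only.
G = np.array([[1.2,  0.1, 5.0],
              [-0.2, 1.1, 3.0],
              [0.001, 0.002, 1.0]])

rho = np.array([10.0, 20.0])
mu = apply_homography(G, rho)
# Perspective correction inverts the map, Eq. (89): rho = P_{G^{-1}}[mu].
assert np.allclose(apply_homography(np.linalg.inv(G), mu), rho)
```

Note that, by property (P16), scaling G by any nonzero factor leaves the mapped points unchanged; this is why G can only be estimated up to scale by the direct linear transformation.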

6.

Perspective Correction for Document Scanning

A camera document scanning application performs several image processing tasks, such as quadrilateral detection, perspective correction, resampling, and image enhancement. In this section, the perspective correction task is addressed to illustrate the application of the proposed approach.

6.1.

Assumptions

In Appendix A, we show that the perspective of a flat object can be easily corrected using the associated homography. For this, at least four correspondences (μ_k, ρ_k) must be provided. However, for practical document scanning, the coordinates ρ_k are unknown. Instead, it is assumed that the document to be digitized is rectangular, and the orthogonality and parallelism properties of its edges are exploited.

The estimation of the homography is greatly simplified by assuming a centered pinhole camera with known intrinsic parameters, e.g., from a previous camera calibration, see Appendix B. Thus, we only need to estimate the reference plane parameters Π, i.e., the rotation matrix Q and the translation vector s, see Eq. (84).

6.2.

Estimation of the Reference Plane Parameters

Consider a coordinate system in the reference plane with its origin at the center of the document to be scanned, as shown in Fig. 10(a). The x- and y-axes of this coordinate system are parallel to the upper/lower and left/right sides of the paper, respectively. The corners of the document to be digitized have coordinates given by the vectors

Eq. (90)

ρ_k, k = 1, …, 4.
In this configuration, the vectors ρ_k are symmetric about the y-axis; that is,

Eq. (91)

ρ_2 = Tρ_1, ρ_4 = Tρ_3,
where

Eq. (92)

T = [ −1  0 ; 0  1 ].
When the document is imaged by the camera, the original rectangle is transformed to a quadrilateral because of the perspective distortion. The corners of the imaged document have coordinates given by the vectors

Eq. (93)

μ_k, k = 1, …, 4,
as shown in Fig. 10(b). The vectors μ_k and ρ_k are related by Eqs. (85) and (89); however, the vectors ρ_k and the homography G are unavailable. Only the vectors μ_k are available, which are easily obtained by marking the vertices of the imaged document.

Fig. 10

(a) Depiction of the rectangular paper to be digitized. (b) The quadrilateral obtained by pinhole imaging of a rectangular paper. (c) The remaining rotation after perspective correction of the quadrilateral shown in (b).


The points μ_k are used to compute the following lines, see Fig. 10(b),

Eq. (94)

m_1 = H[μ_3] × H[μ_1], m_2 = H[μ_2] × H[μ_4], m_3 = H[μ_3] × H[μ_4], m_4 = H[μ_1] × H[μ_2], m_5 = H[μ_4] × H[μ_1], m_6 = H[μ_2] × H[μ_3].
Next, with the lines m_k, the following three intersection points are computed

Eq. (95)

μ_0 = H^{-1}[m_1 × m_2], μ_a = H^{-1}[m_3 × m_4], μ_b = H^{-1}[m_5 × m_6].
Since the intrinsic camera parameters are assumed to be known, the actual image coordinates of the points μ_i can be obtained by Eq. (79) as

Eq. (96)

H_f[β_i] = K^{-1}H[μ_i],
with i = a, b, 0, 1, 2, 3, 4.
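The construction of the six lines and three intersection points [Eqs. (94) and (95)] can be sketched as follows, using hypothetical pixel coordinates for the four corners (our values, ordered with μ_1 and μ_3 on one diagonal and μ_2 and μ_4 on the other, as in Fig. 10):

```python
import numpy as np

def homog(x):
    return np.append(np.asarray(x, dtype=float), 1.0)

def meet(la, lb):
    """H^{-1} of the cross product, Eq. (35); fails for parallel lines."""
    x = np.cross(la, lb)
    return x[:-1] / x[-1]

# Hypothetical corners of an imaged document (pixels).
mu1, mu2, mu3, mu4 = [520.0, 80.0], [100.0, 120.0], [140.0, 400.0], [560.0, 340.0]

m1 = np.cross(homog(mu3), homog(mu1))   # diagonal, Eq. (94)
m2 = np.cross(homog(mu2), homog(mu4))   # other diagonal
m3 = np.cross(homog(mu3), homog(mu4))   # lower side
m4 = np.cross(homog(mu1), homog(mu2))   # upper side
m5 = np.cross(homog(mu4), homog(mu1))   # right side
m6 = np.cross(homog(mu2), homog(mu3))   # left side

mu_0 = meet(m1, m2)   # image of the document center, Eq. (95)
mu_a = meet(m3, m4)   # vanishing point of the upper/lower sides
mu_b = meet(m5, m6)   # vanishing point of the left/right sides
```

The diagonals always intersect at a finite point (the image of the document center), while μ_a and μ_b may be at infinity when the corresponding sides are imaged as parallel lines.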

6.2.1.

Normal vector

Note that the points μ_a and μ_b are the projections of the ideal points

Eq. (97)

H[ρ_a] = [1, 0, 0]^T, H[ρ_b] = [0, 1, 0]^T,
respectively. Thus, the imaged line at infinity m_∞ (in physical coordinates on the image plane z = f) is parallel to the normal q_3, see Eq. (67). That is,

Eq. (98)

n = H_f[β_a] × H_f[β_b] = q_3/λ,
for some λ ≠ 0. Thus, the normal of the reference plane is obtained by normalizing the vector n, namely

Eq. (99)

q_3 = n/‖n‖ = K^T[(m_3 × m_4) × (m_5 × m_6)] / ‖K^T[(m_3 × m_4) × (m_5 × m_6)]‖.

6.2.2.

Translation vector

The translation vector s is obtained by taking into account that P_G preserves the line–line intersection. Thus, from Eq. (65) we have μ_0 = P_G[ρ_0] with ρ_0 = 0_2. Therefore, from Eq. (88) we have

Eq. (100)

H_f[β_0] = Π̄H[0_2]/ξ = s/ξ,
where ξ is a scalar to be determined. For this, note that the vectors

Eq. (101)

p_k = ζ_k H_f[β_k], k = 1, …, 4,
are points of the reference plane for values ζ_k and ξ such that the equation of the plane, q_3^T(p_k − s) = 0, is satisfied. This leads to

Eq. (102)

ζ_k = ξ (q_3^T H_f[β_0]) / (q_3^T H_f[β_k]).
Since the points ρ_k are on the unit circumference, see Fig. 10(a), then ‖ρ_k‖ = ‖p_k − s‖ = 1, which leads to

Eq. (103)

ξ_k = ‖ (q_3^T H_f[β_0] / q_3^T H_f[β_k]) H_f[β_k] − H_f[β_0] ‖^{-1},
where the subindex k in ξ emphasizes the fact that a different value could be obtained for each H_f[β_k] due to inaccuracies in μ_k. Therefore, the value ξ is computed as

Eq. (104)

ξ = mean{ξ_1, ξ_2, ξ_3, ξ_4}.
The result is used in Eq. (100), and the translation of the reference plane is now available.
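The normal and translation estimates [Eqs. (99)–(104)] can be verified on synthetic data. The sketch below is ours: it assumes K = I_3 (i.e., f = 1 and normalized pixel coordinates) and an arbitrary ground-truth plane, images the four corners, and recovers q_3 and s:

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def homog(x):
    return np.append(np.asarray(x, dtype=float), 1.0)

def meet(la, lb):
    x = np.cross(la, lb)
    return x[:-1] / x[-1]

# Ground-truth plane (arbitrary angles and translation).
phi, theta, gamma = 0.3, 0.5, 0.4
Q = Rz(phi) @ Ry(theta) @ Rz(gamma)           # Euler sequence, Eq. (107)
s_true = np.array([0.2, -0.1, 3.0])

# Rectangle corners on the unit circle, ordered as in Fig. 10(a).
rho = [np.array([0.8, 0.6]), np.array([-0.8, 0.6]),
       np.array([-0.8, -0.6]), np.array([0.8, -0.6])]
p = [Q @ np.append(r, 0.0) + s_true for r in rho]   # Eq. (53)
mu = [pk[:2] / pk[2] for pk in p]                   # imaged corners (K = I)

h = [homog(m) for m in mu]                          # = H_f[beta_k] since K = I
m1, m2 = np.cross(h[2], h[0]), np.cross(h[1], h[3]) # diagonals, Eq. (94)
m3, m4 = np.cross(h[2], h[3]), np.cross(h[0], h[1]) # lower/upper sides
m5, m6 = np.cross(h[3], h[0]), np.cross(h[1], h[2]) # right/left sides

h0 = homog(meet(m1, m2))                            # image of the plane origin
n = np.cross(np.cross(m3, m4), np.cross(m5, m6))    # Eq. (99) with K = I
q3 = n / np.linalg.norm(n)
if q3 @ Q[:, 2] < 0:
    q3 = -q3   # resolve the sign ambiguity (here against the known ground truth)

# Translation: Eqs. (100)-(104).
xi = np.mean([1.0 / np.linalg.norm((q3 @ h0) / (q3 @ hk) * hk - h0) for hk in h])
s_est = xi * h0
```

With noise-free corners, the recovered q_3 and s match the ground truth up to floating-point error; with marked corners, the averaging of ξ_k in Eq. (104) mitigates the inaccuracies.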

6.2.3.

Euler angles

The reference plane is fully characterized by six degrees of freedom (DOF), namely position (three coordinates) and orientation (three angles). The vectors q_3 and s provide five DOFs. Specifically, the vector s provides the three DOFs that fix the position, while q_3 provides two DOFs defining the orientation by the azimuth and polar angles given, respectively, by

Eq. (105)

tan ϕ = q_{23}/q_{13}, cos θ = q_{33},
where [q_{13}, q_{23}, q_{33}]^T = q_3 is the third column of the rotation matrix Q. The remaining angle γ (the rotation around the normal q_3) can be obtained as follows.

From Eqs. (84) and (53), we have

Eq. (106)

p_k = QH_0[ρ_k] + s,
where the matrix Q is defined as the Euler sequence

Eq. (107)

Q = Q_z(ϕ) Q_y(θ) Q_z(γ),
with Q_z and Q_y being the rotation matrices around the z- and y-axes, respectively. Thus, using Eq. (101), the estimated vector s, and the angles θ and ϕ, we compute the (perspective corrected) points

Eq. (108)

δ_k = H_0^{-1}[Q_y^T(θ) Q_z^T(ϕ)(p_k − s)] = [δ_{xk}, δ_{yk}]^T,
with k = 1, …, 4, see Fig. 10(c). The vectors δ_k and ρ_k are related by

Eq. (109)

ρ_k = Q̄_z^T(γ) δ_k,
where

Eq. (110)

Q̄_z^T(γ) = [ cos γ  sin γ ; −sin γ  cos γ ].
The vectors ρ_k are unavailable, but we use their symmetry properties given in Eq. (91) to obtain

Eq. (111)

Q̄_z^T(γ) δ_2 = T Q̄_z^T(γ) δ_1, Q̄_z^T(γ) δ_4 = T Q̄_z^T(γ) δ_3.
The product Q̄_z^T(γ) δ_k can be written as

Eq. (112)

Q̄_z^T(γ) δ_k = B[δ_k] Γ,
where Γ = [sin γ, cos γ]^T and

Eq. (113)

B[δ_k] = [ δ_{yk}  δ_{xk} ; −δ_{xk}  δ_{yk} ].
Thus, Eq. (111) can be rewritten as

Eq. (114)

BΓ = 0_4,
where

Eq. (115)

B = [ B[δ_2] − T B[δ_1] ; B[δ_4] − T B[δ_3] ].
The nontrivial solution of Eq. (114) for Γ is obtained as the right-singular vector corresponding to the smallest singular value of B. Finally, the angle γ is obtained from Γ by

Eq. (116)

tan γ = H^{-1}[Γ].
The estimated parameters are used to create the matrix Π. Then, the required homography G is obtained by Eq. (86) (with t = 0_3 and R = I_3 because of the centered pinhole camera configuration). Finally, the perspective distortion of the image is corrected by displaying the intensity of each pixel of the image at the point ρ computed by Eq. (89).
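The solution of Eq. (114) by singular value decomposition can be sketched on synthetic, perspective-corrected corners (with γ chosen arbitrarily by us); γ is recovered up to the sign ambiguity of Γ, which leaves tan γ unchanged:

```python
import numpy as np

gamma_true = 0.7
c, s = np.cos(gamma_true), np.sin(gamma_true)
Rbar = np.array([[c, -s], [s, c]])            # \bar{Q}_z(gamma)
T = np.array([[-1.0, 0.0], [0.0, 1.0]])       # reflection matrix, Eq. (92)

# Synthetic perspective-corrected corners delta_k = Qbar_z(gamma) rho_k, Eq. (109).
rho = [np.array([0.8, 0.6]), np.array([-0.8, 0.6]),
       np.array([-0.8, -0.6]), np.array([0.8, -0.6])]
delta = [Rbar @ r for r in rho]

def B_of(d):
    """B[delta_k], Eq. (113)."""
    return np.array([[d[1], d[0]], [-d[0], d[1]]])

# Stack the symmetry constraints of Eq. (111) into B Gamma = 0, Eqs. (114)-(115).
B = np.vstack([B_of(delta[1]) - T @ B_of(delta[0]),
               B_of(delta[3]) - T @ B_of(delta[2])])

# Gamma = [sin(gamma), cos(gamma)]^T spans the null space of B: take the
# right-singular vector of the smallest singular value.
Gamma = np.linalg.svd(B)[2][-1]
gamma = np.arctan2(Gamma[0], Gamma[1])        # tan(gamma) = H^{-1}[Gamma], Eq. (116)
```

With noisy corners, B is no longer exactly rank deficient, and the smallest-singular-value solution is the least-squares estimate of Γ.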

6.3.

Illustrative Example

The functionality of the presented algorithm is illustrated by the following example. The camera described in Appendix B and the estimated intrinsic parameters K given in Eq. (156) are used here.

Figure 11(a) shows the image of a rectangular object acquired by the camera. Then, the four corners of the quadrilateral are marked on the image as shown by the yellow circles in Fig. 11(b). The points μ0, μa, and μb are indicated by the red circles in Fig. 11(b). It is worth mentioning that either μa or μb, or both, could be points at infinity. Even in these cases, the presented methodology remains valid.

Fig. 11

(a) An input image with a rectangular object in scene. (b) The corners of the rectangle are marked by yellow circles. (c) Corrected image. (d) Zoom of (c) highlighting the region of interest.


The parameters estimated from the four corners are

Eq. (117)

s = [0.2289, 0.0561, 2.9236]^T, \qquad \phi = 0.6776, \qquad \theta = 0.9879, \qquad \gamma = 2.3041.
With these parameters, the matrix of the reference plane is

Eq. (118)

\Pi = \begin{bmatrix} 0.7528 & 0.1010 & 0.2289 \\ 0.3478 & 0.7779 & 0.0561 \\ 0.5588 & 0.6202 & 2.9236 \end{bmatrix}.
Thus, the resulting homography is

Eq. (119)

G = \begin{bmatrix} 2.0219 & 0.2429 & 0.7272 \\ 0.9237 & 2.0786 & 0.1298 \\ 0.5588 & 0.6202 & 2.9236 \end{bmatrix}.
All points μ of the image are transformed to points ρ of the reference plane by Eq. (89). Next, the pixels of the image are displayed at the points ρ as shown in Fig. 11(c).

With the correction of perspective, the yellow circles in Fig. 11(b) become the green ones in Fig. 11(c). The region of interest is the rectangle with corners marked by green circles in Fig. 11(c). Finally, a zoom of the region of interest is shown in Fig. 11(d).

7.

Conclusions

An operator-based approach for homogeneous coordinates was proposed. Several basic geometrical concepts and properties of the operators were investigated. With the proposed approach, the pinhole camera model and a simple camera calibration method were described. This work was motivated by the development of a perspective correction method useful for camera document scanning applications. Several experimental results illustrate the analyzed theoretical aspects. The proposed approach could be a good starting point for introducing inexperienced students to the scientific discipline of computer vision.

Appendices

Appendix A:

Estimation of the Homography Matrix

In this appendix, we illustrate the method known as direct linear transformation for homography matrix estimation. This method is very useful for illustration purposes because of its simplicity. However, higher accuracy and robustness are achieved with more advanced methods available in the literature.9,13

Let G be the homography matrix defined in Eq. (86). Consider that the matrix G is row partitioned as follows:

Eq. (120)

G = \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{bmatrix} = \begin{bmatrix} \bar{g}_1^T \\ \bar{g}_2^T \\ \bar{g}_3^T \end{bmatrix},
where

Eq. (121)

\bar{g}_1^T = [g_{11}\ \ g_{12}\ \ g_{13}], \qquad \bar{g}_2^T = [g_{21}\ \ g_{22}\ \ g_{23}], \qquad \bar{g}_3^T = [g_{31}\ \ g_{32}\ \ g_{33}].

Equation (85), which relates points of the reference and image planes, can be rewritten as

Eq. (122)

\mu = \begin{bmatrix} u \\ v \end{bmatrix} = \frac{1}{\bar{g}_3^T H[\rho]} \begin{bmatrix} \bar{g}_1^T H[\rho] \\ \bar{g}_2^T H[\rho] \end{bmatrix},
or

Eq. (123)

\begin{bmatrix} u\, \bar{g}_3^T H[\rho] \\ v\, \bar{g}_3^T H[\rho] \end{bmatrix} = \begin{bmatrix} \bar{g}_1^T H[\rho] \\ \bar{g}_2^T H[\rho] \end{bmatrix}.

Furthermore, Eq. (123) can be written in matrix form as

Eq. (124)

A\, \bar{g} = 0_2,
where

Eq. (125)

A = \begin{bmatrix} H[\rho]^T & 0_3^T & -u\, H[\rho]^T \\ 0_3^T & H[\rho]^T & -v\, H[\rho]^T \end{bmatrix}, \qquad \bar{g} = [\bar{g}_1^T\ \ \bar{g}_2^T\ \ \bar{g}_3^T]^T.

Equation (124) relates a single point ρ on the reference plane with the corresponding point μ on the image plane. If n pairs (ρk, μk), with k = 1, 2, \ldots, n, are available, the n corresponding equations of the form of Eq. (124) can be written as

Eq. (126)

A\, \bar{g} = 0_{2n},
where

Eq. (127)

A = [A_1^T\ \ A_2^T\ \ \cdots\ \ A_n^T]^T, \qquad A_k = \begin{bmatrix} H[\rho_k]^T & 0_3^T & -u_k\, H[\rho_k]^T \\ 0_3^T & H[\rho_k]^T & -v_k\, H[\rho_k]^T \end{bmatrix}.

The nontrivial solution ḡ of Eq. (126) can be obtained using the constraint ‖ḡ‖ = 1. Thus, by using the singular value decomposition of A, the solution for ḡ is the right-singular vector corresponding to the smallest singular value of A; see Appendix C of Ref. 14.
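The direct linear transformation just described can be sketched in a few lines of Python. Here H[ρ] is taken as the homogeneous lift [ρ^T, 1]^T, the standard convention (the operator's exact definition appears earlier in the paper):

```python
import numpy as np

def dlt_homography(rho, mu):
    """Estimate G from n >= 4 point pairs (rho_k, mu_k), Eqs. (126)-(127)."""
    rows = []
    for (x, y), (u, v) in zip(rho, mu):
        h = np.array([x, y, 1.0])     # H[rho_k]: homogeneous lift
        z = np.zeros(3)
        rows.append(np.concatenate([h, z, -u * h]))   # first row of A_k
        rows.append(np.concatenate([z, h, -v * h]))   # second row of A_k
    A = np.array(rows)                # 2n x 9 matrix of Eq. (127)
    _, _, Vt = np.linalg.svd(A)       # solution with ||g|| = 1
    return Vt[-1].reshape(3, 3)       # smallest right-singular vector -> G

# consistency check with a known homography
G_true = np.array([[1.2, 0.1, 0.3], [-0.2, 0.9, 0.1], [0.05, -0.1, 1.0]])
rho = [(-1.0, -1.3), (-1.0, 1.3), (1.0, 1.3), (1.0, -1.3)]
mu = []
for p in rho:
    m = G_true @ np.array([p[0], p[1], 1.0])
    mu.append((m[0] / m[2], m[1] / m[2]))   # projection of Eq. (122)

G = dlt_homography(rho, mu)
G /= G[2, 2]                          # fix the arbitrary scale
assert np.allclose(G, G_true / G_true[2, 2], atol=1e-8)
```

Since the solution is defined only up to scale, both matrices are normalized by their (3,3) entry before comparison.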

The application of this method is illustrated as follows. Consider the image shown in Fig. 12(a). A letter-size paper printed with Melencolia I by Albrecht Dürer is in the scene. Using the 1:1.2941 aspect ratio of letter-size paper, the coordinates of the corners are fixed to

Eq. (128)

\rho_1 = [1, 1.2941]^T, \qquad \rho_3 = -\rho_1, \qquad \rho_2 = [1, -1.2941]^T, \qquad \rho_4 = -\rho_2.

The coordinates of the imaged corners are

Eq. (129)

\mu_1 = [0.2858, 0.5661]^T, \quad \mu_2 = [0.3826, 0.0938]^T, \quad \mu_3 = [0.2884, 0.5403]^T, \quad \mu_4 = [0.8479, 0.1135]^T,
see yellow circles in Fig. 12(a). With these four pairs (ρk,μk), we obtain the homography

Eq. (130)

G = \begin{bmatrix} 0.2437 & 0.2292 & 0.2442 \\ 0.2258 & 0.1870 & 0.0888 \\ 0.0524 & 0.0989 & 0.8497 \end{bmatrix}.

The homography G fully defines a pinhole imaging process. Thus, it can be inverted to obtain an undistorted view of the reference plane from its perspective-distorted image. Specifically, using Eq. (89), all points μ of the image are transformed to points ρ of the reference plane. Then, the pixels of the image are displayed at the points ρ, as shown in Fig. 12(b). Note that the corners of the paper in the corrected image are at the coordinates specified by Eq. (128).

The least number of point correspondences for two-dimensional homography estimation is four. However, the accuracy of the estimation improves when more than four point correspondences are provided. For this reason, checkerboard patterns15 and gratings16,17 are useful target objects. In this appendix, the corner points of the imaged rectangle were obtained manually from the image. However, they can be obtained automatically using checkerboard patterns or gratings along with grid detection18 or phase demodulation,19 respectively.
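Equation (89) itself is not reproduced in this appendix; the sketch below assumes it performs the standard inverse mapping — lift μ to homogeneous form, apply G^{-1}, and dehomogenize:

```python
import numpy as np

def correct_point(G, mu):
    """Map an image point mu back to the reference plane
    (assumed form of Eq. (89): inverse homography + dehomogenization)."""
    r = np.linalg.inv(G) @ np.array([mu[0], mu[1], 1.0])
    return r[:2] / r[2]               # dehomogenize

# round trip: forward projection as in Eq. (122), then correction
G = np.array([[1.2, 0.1, 0.3], [-0.2, 0.9, 0.1], [0.05, -0.1, 1.0]])
rho = np.array([1.0, 1.2941])         # a corner of the reference rectangle
m = G @ np.append(rho, 1.0)
mu = m[:2] / m[2]
assert np.allclose(correct_point(G, mu), rho)
```

In a full image-warping implementation, this inverse mapping is evaluated per pixel of the output grid so that every corrected pixel samples the distorted image.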

Appendix B:

Camera Parameters from Homographies

The homography matrix involves both intrinsic K and extrinsic L camera parameters as well as the reference plane parameters Π. In this appendix, we show how to obtain the intrinsic and extrinsic camera parameters from several homographies.

B.1.

Intrinsic Camera Parameters

Consider that the reference plane is the xy-plane of the world coordinate system; i.e., s = 0_3 and

Eq. (131)

\bar{Q} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.

In this case, the homography G, defined in Eq. (86), is reduced to

Eq. (132)

G = K\,[\bar{r}_1\ \ \bar{r}_2\ \ R^T t],
where r¯1T and r¯2T are the first and second rows of the rotation matrix R, respectively. Consider that the matrix G is column partitioned as follows:

Eq. (133)

G = \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{bmatrix} = [g_1\ \ g_2\ \ g_3],
where

Eq. (134)

g_1 = \begin{bmatrix} g_{11} \\ g_{21} \\ g_{31} \end{bmatrix}, \qquad g_2 = \begin{bmatrix} g_{12} \\ g_{22} \\ g_{32} \end{bmatrix}, \qquad g_3 = \begin{bmatrix} g_{13} \\ g_{23} \\ g_{33} \end{bmatrix}.

Thus, Eq. (132) can be written as

Eq. (135)

[\bar{r}_1\ \ \bar{r}_2\ \ R^T t] = K^{-1}[g_1\ \ g_2\ \ g_3].

Since r̄1 and r̄2 are orthonormal vectors (they are rows of a rotation matrix), we have the two constraints \bar{r}_1^T \bar{r}_2 = 0 and \|\bar{r}_1\|^2 = \|\bar{r}_2\|^2, which can be written as

Eq. (136)

g_1^T W g_2 = 0, \qquad g_1^T W g_1 = g_2^T W g_2,
where the symmetric matrix W is defined as

Eq. (137)

W = K^{-T} K^{-1} = \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{12} & w_{22} & w_{23} \\ w_{13} & w_{23} & w_{33} \end{bmatrix}.

The bilinear form g_i^T W g_j can be rewritten as

Eq. (138)

g_i^T W g_j = V_{ij}[G]\, w,
where

Eq. (139)

V_{ij}[G] = \begin{bmatrix} g_{1i} g_{1j} \\ g_{2i} g_{2j} \\ g_{3i} g_{3j} \\ g_{2i} g_{1j} + g_{1i} g_{2j} \\ g_{3i} g_{1j} + g_{1i} g_{3j} \\ g_{3i} g_{2j} + g_{2i} g_{3j} \end{bmatrix}^T,
and

Eq. (140)

w = [w_{11}\ \ w_{22}\ \ w_{33}\ \ w_{12}\ \ w_{13}\ \ w_{23}]^T.

Then, the constraints given by Eq. (136) become

Eq. (141)

V[G]\, w = 0_2,
where V[G] is the following 2×6 matrix:

Eq. (142)

V[G] = \begin{bmatrix} V_{12}[G] \\ V_{11}[G] - V_{22}[G] \end{bmatrix}.

A nontrivial solution of Eq. (141) for w can be obtained using several homographies Gk, k = 1, 2, \ldots, m. For this, we compute the homographies of different images where the position and orientation of the reference plane (or the camera, or both) vary in an unknown manner while the intrinsic camera parameters remain constant. Thus, we solve the new matrix equation

Eq. (143)

V\, w = 0_{2m},
where

Eq. (144)

V = [V[G_1]^T\ \ V[G_2]^T\ \ \cdots\ \ V[G_m]^T]^T.

In general, at least three homographies (m = 3) are required. However, two homographies are sufficient if zero skew is assumed.
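The construction of V[G] and the constraint V[G]w = 0_2 can be checked with a synthetic homography built from Eq. (132). A Python sketch (the rotation is a simple z-axis rotation for brevity):

```python
import numpy as np

def Vij(G, i, j):
    """Row vector V_ij[G] of Eq. (139); i, j are 1-based column indices."""
    g = lambda r, c: G[r - 1, c - 1]
    return np.array([g(1, i) * g(1, j),
                     g(2, i) * g(2, j),
                     g(3, i) * g(3, j),
                     g(2, i) * g(1, j) + g(1, i) * g(2, j),
                     g(3, i) * g(1, j) + g(1, i) * g(3, j),
                     g(3, i) * g(2, j) + g(2, i) * g(3, j)])

def V_of_G(G):
    """2 x 6 matrix of Eq. (142)."""
    return np.vstack([Vij(G, 1, 2), Vij(G, 1, 1) - Vij(G, 2, 2)])

# synthetic camera: intrinsics K, a rotation R, and an arbitrary t
K = np.array([[2.5, 0.01, 0.04], [0, 2.6, 0.01], [0, 0, 1]])
a = 0.4
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a), np.cos(a), 0],
              [0, 0, 1.0]])
t = np.array([0.1, -0.2, 3.0])
G = K @ np.column_stack([R[0], R[1], R.T @ t])   # Eq. (132)

W = np.linalg.inv(K).T @ np.linalg.inv(K)        # Eq. (137)
w = np.array([W[0, 0], W[1, 1], W[2, 2], W[0, 1], W[0, 2], W[1, 2]])

assert np.allclose(V_of_G(G) @ w, 0)             # Eq. (141) holds
```

The check works because g1 = K r̄1 and g2 = K r̄2, so g1^T W g2 = r̄1^T r̄2 = 0 and the two row norms agree, exactly the constraints of Eq. (136).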

Equation (143) can be solved for w using the singular value decomposition method, see Appendix C of Ref. 14. Since the obtained solution, labeled as w˜, is unique up to scale, the associated matrix W˜ is related to W by

Eq. (145)

\tilde{W} = \lambda W = \lambda K^{-T} K^{-1},
where λ0 is an unknown constant. With the estimated matrix W˜, the unknown scalar λ and the entries kij of the intrinsic parameter matrix

Eq. (146)

K = \begin{bmatrix} k_{11} & k_{12} & k_{13} \\ 0 & k_{22} & k_{23} \\ 0 & 0 & 1 \end{bmatrix},
are given in closed form as

Eq. (147)

\lambda = \det(\tilde{W})/d, \qquad k_{11} = \sqrt{\lambda/\tilde{w}_{11}}, \qquad k_{22} = \sqrt{\lambda \tilde{w}_{11}/d}, \qquad k_{12} = -\tilde{w}_{12} \sqrt{\lambda/(\tilde{w}_{11} d)}, \qquad k_{13} = (\tilde{w}_{12} \tilde{w}_{23} - \tilde{w}_{22} \tilde{w}_{13})/d, \qquad k_{23} = (\tilde{w}_{12} \tilde{w}_{13} - \tilde{w}_{11} \tilde{w}_{23})/d,

where d = \tilde{w}_{11} \tilde{w}_{22} - \tilde{w}_{12}^2.
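A numerical check of this closed-form recovery: build W̃ = λK^{-T}K^{-1} from a known K and an arbitrary scale λ, then apply the formulas (the square roots in k11 and k22 follow Zhang's closed-form solution):

```python
import numpy as np

def K_from_W(Wt):
    """Closed-form intrinsics from Wt = lambda * K^{-T} K^{-1}, Eq. (147)."""
    d = Wt[0, 0] * Wt[1, 1] - Wt[0, 1] ** 2
    lam = np.linalg.det(Wt) / d
    k11 = np.sqrt(lam / Wt[0, 0])
    k22 = np.sqrt(lam * Wt[0, 0] / d)
    k12 = -Wt[0, 1] * np.sqrt(lam / (Wt[0, 0] * d))
    k13 = (Wt[0, 1] * Wt[1, 2] - Wt[1, 1] * Wt[0, 2]) / d
    k23 = (Wt[0, 1] * Wt[0, 2] - Wt[0, 0] * Wt[1, 2]) / d
    return np.array([[k11, k12, k13], [0, k22, k23], [0, 0, 1]]), lam

K = np.array([[2.66, 0.01, 0.04], [0, 2.67, 0.01], [0, 0, 1]])
lam_true = 0.73                         # arbitrary unknown scale
Wt = lam_true * np.linalg.inv(K).T @ np.linalg.inv(K)

K_est, lam = K_from_W(Wt)
assert np.allclose(K_est, K) and np.isclose(lam, lam_true)
```

Both the intrinsic matrix and the unknown scale are recovered exactly from the noise-free W̃.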

It is worth mentioning that the intrinsic camera parameters (f, sx, sy, τx, τy, and σ) cannot be obtained individually using only the matrix K. Fortunately, the matrix K is sufficient for many computer vision tasks. For the case where the intrinsic camera parameters are required explicitly, we can assume that the skew and the pixel size are known (e.g., sx, sy, and σ are taken from the datasheet of the camera sensor). Thus, the estimation of the remaining intrinsic parameters is a linear problem with the least-squares solution

Eq. (148)

f = \frac{s_x s_y (s_x k_{22} + s_y k_{11} + s_x s_y \sigma k_{12})}{s_x^2 + s_y^2 + s_x^2 s_y^2 \sigma^2},

Eq. (149)

\tau_x = s_x k_{13},

Eq. (150)

\tau_y = s_y k_{23}.

B.2.

Extrinsic Camera Parameters

Once the matrix K is available, the rotation matrix R and the translation vector t can be estimated for each provided homography as follows. First, we compute the estimate R˜T of the matrix RT as

Eq. (151)

\tilde{R}^T = [h_1\ \ h_2\ \ h_1 \times h_2],
where using Eq. (135), the vectors h1 and h2 are given as

Eq. (152)

h_1 = K^{-1} g_1, \qquad h_2 = K^{-1} g_2.

Then, the rotation matrix R is obtained from R̃ by enforcing the orthogonality condition of rotation matrices. For this, the singular value decomposition \tilde{R} = U \Sigma V^T is computed, and the required rotation matrix is determined as

Eq. (153)

R = U V^T.

Finally, the translation vector t is computed as

Eq. (154)

t = R K^{-1} g_3.
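The extrinsic-parameter recovery of Eqs. (151)–(154) in code form. This is a sketch with a synthetic, noise-free homography, so the SVD orthogonalization of Eq. (153) is nearly an identity here; with homographies estimated from real data it is essential:

```python
import numpy as np

def extrinsics_from_G(K, G):
    """Recover R and t from a homography, Eqs. (151)-(154)."""
    Kinv = np.linalg.inv(K)
    h1, h2 = Kinv @ G[:, 0], Kinv @ G[:, 1]            # Eq. (152)
    Rt = np.column_stack([h1, h2, np.cross(h1, h2)])   # estimate of R^T, Eq. (151)
    U, _, Vt = np.linalg.svd(Rt.T)                     # orthogonalize the estimate
    R = U @ Vt                                         # Eq. (153)
    t = R @ Kinv @ G[:, 2]                             # Eq. (154)
    return R, t

K = np.array([[2.66, 0.01, 0.04], [0, 2.67, 0.01], [0, 0, 1]])
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a), np.cos(a), 0],
                   [0, 0, 1.0]])
t_true = np.array([0.2, -0.1, 2.5])
G = K @ np.column_stack([R_true[0], R_true[1], R_true.T @ t_true])  # Eq. (132)

R, t = extrinsics_from_G(K, G)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

Because the rows of a rotation matrix form a right-handed orthonormal triad, the cross product h1 × h2 completes the third row exactly in the noise-free case.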

B.3.

Illustrative Example

As an example, we describe a simple experiment to obtain the intrinsic parameters of a camera. A camera with a pixel size of 6 μm (square pixels), a resolution of 752×480 pixels, and an imaging lens with a focal length of 6 mm was used. The 3×3 checkerboard pattern shown in Fig. 13(a) was printed on letter-size paper. Then, 15 images of the printed pattern lying on the reference plane were captured from different unknown viewpoints, see Figs. 13(b)–13(i).

We use the coordinates of the corners shown in Fig. 13(a) as the known points ρk on the reference plane. The corresponding points μk in the image plane were obtained by marking the corners of the checkerboard pattern in each image. Then, with the pairs (ρk, μk), a homography matrix Gk was computed for each acquired image. With these homographies, the matrix V defined in Eq. (144) was created, and Eq. (143) was solved for w. The resulting matrix W̃ is

Eq. (155)

\tilde{W} = \begin{bmatrix} 0.1389 & 0.0005 & 0.0058 \\ 0.0005 & 0.1378 & 0.0008 \\ 0.0058 & 0.0008 & 0.9806 \end{bmatrix}.

From this, the intrinsic parameter matrix K was recovered as

Eq. (156)

K = \begin{bmatrix} 2.6563 & 0.0103 & 0.0419 \\ 0 & 2.6674 & 0.0059 \\ 0 & 0 & 1 \end{bmatrix}.

For validation purposes, we estimate the focal length using the known information sx=sy=6  μm, and σ=0. The reader should note that the quantities sx and sy are defined in this experiment as

Eq. (157)

s_x = s_y = \frac{752}{2} \times 6 \times 10^{-3}\ \mathrm{mm},
because the points μ were obtained in a coordinate system with a unit of length equal to half of the image width, see Figs. 13(b)–13(i). From the matrix K in Eq. (156), the focal length was estimated using Eq. (148). The result is f=6.0032  mm, which is very close to the nominal focal length (6 mm) of the employed camera lens.
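This validation can be reproduced directly. With σ = 0 and sx = sy = s, Eq. (148) reduces to f = s(k11 + k22)/2; the result differs from the reported 6.0032 mm only in the third decimal, presumably because the entries of K in Eq. (156) are rounded:

```python
# pixel size of 6 um expressed in units of half the image width (752/2 pixels)
s = (752 / 2) * 6e-3        # Eq. (157), in mm
k11, k22 = 2.6563, 2.6674   # from the estimated K, Eq. (156)

# Eq. (148) with sigma = 0 and s_x = s_y = s reduces to this mean
f = s * (k11 + k22) / 2
assert abs(f - 6.0) < 0.01  # close to the 6-mm nominal focal length
```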

Fig. 12

(a) Image of 1168×2080 pixels capturing a scene with Melencolia I printed on letter-size paper. (b) Perspective-corrected image.


Fig. 13

(a) A 3×3 checkerboard pattern and the coordinates of its corners. (b)–(i) Eight of the 15 images of the reference plane acquired by the camera from different unknown viewpoints.


Acknowledgments

This work was supported by CONACyT México through the project Cátedras/880.

References

1. 

O. Faugeras, Q.-T. Luong and T. Papadopoulo, The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications, MIT Press, Cambridge (2004).

2. 

O. D. Faugeras and S. Maybank, "Motion from point matches: multiplicity of solutions," Int. J. Comput. Vision, 4(3), 225–246 (1990). http://dx.doi.org/10.1007/BF00054997

3. 

Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718

4. 

Y. Zhao and Y. Li, "Camera self-calibration from projection silhouettes of an object in double planar mirrors," J. Opt. Soc. Am. A, 34, 696–707 (2017). http://dx.doi.org/10.1364/JOSAA.34.000696

5. 

T. Taketomi et al., "Camera pose estimation under dynamic intrinsic parameter change for augmented reality," Comput. Graphics, 44, 11–19 (2014). http://dx.doi.org/10.1016/j.cag.2014.07.003

6. 

H. H. Ip and Y. Chen, "Planar rectification by solving the intersection of two circles under 2D homography," Pattern Recognit., 38(7), 1117–1120 (2005). http://dx.doi.org/10.1016/j.patcog.2004.12.004

7. 

B. Cyganek and J. P. Siebert, An Introduction to 3D Computer Vision Techniques and Algorithms, John Wiley & Sons Ltd., Chichester, West Sussex (2009).

8. 

O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, Cambridge (1993).

9. 

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge (2003).

10. 

J. L. Mundy and A. Zisserman, "Projective geometry for machine vision," appendix in Geometric Invariance in Computer Vision, 463–519, MIT Press, Cambridge (1992).

11. 

W. Burger, "Zhang's camera calibration algorithm: in-depth tutorial and implementation," Hagenberg, Austria (2016).

12. 

F. Devernay and O. Faugeras, "Straight lines have to be straight," Mach. Vision Appl., 13(1), 14–24 (2001). http://dx.doi.org/10.1007/PL00013269

13. 

H. Zeng, X. Deng and Z. Hu, "A new normalized method on line-based homography estimation," Pattern Recognit. Lett., 29(9), 1236–1244 (2008). http://dx.doi.org/10.1016/j.patrec.2008.01.031

14. 

Z. Zhang, "A flexible new technique for camera calibration," Technical Report MSR-TR-98-71, Microsoft Research (1998).

15. 

L. Kr, "Accurate chequerboard corner localisation for camera calibration," Pattern Recognit. Lett., 32(10), 1428–1435 (2011). http://dx.doi.org/10.1016/j.patrec.2011.04.002

16. 

R. Juarez-Salazar et al., "Camera calibration by multiplexed phase encoding of coordinate information," Appl. Opt., 54, 4895–4906 (2015). http://dx.doi.org/10.1364/AO.54.004895

17. 

R. Juarez-Salazar, L. N. Gaxiola and V. H. Diaz-Ramirez, "Single-shot camera position estimation by crossed grating imaging," Opt. Commun., 382, 585–594 (2017). http://dx.doi.org/10.1016/j.optcom.2016.08.041

18. 

A. Herout, M. Dubská and J. Havel, Vanishing Points, Parallel Lines, Grids, 41–54, Springer, London (2013).

19. 

R. Juarez-Salazar, F. Guerrero-Sanchez and C. Robledo-Sanchez, "Theory and algorithms of an efficient fringe analysis technology for automatic measurement applications," Appl. Opt., 54, 5364–5374 (2015). http://dx.doi.org/10.1364/AO.54.005364

Biographies for the authors are not available.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
Rigoberto Juarez-Salazar and Victor H. Díaz-Ramírez "Operator-based homogeneous coordinates: application in camera document scanning," Optical Engineering 56(7), 070801 (27 July 2017). https://doi.org/10.1117/1.OE.56.7.070801
Received: 11 May 2017; Accepted: 7 July 2017; Published: 27 July 2017