1. Introduction

The development of in vivo laparoscopic cameras, which can be inserted into the abdominal cavity through a small incision and provide real-time visual feedback for surgeons, has been an important ongoing research topic for improving surgical performance in minimally invasive surgery (MIS). Compared with conventional rigid long-stick laparoscopic video systems, the benefits of employing in vivo laparoscopic cameras include: (1) better triangulation capabilities and a wider field of view (FOV);1,2 (2) fewer internal/external collisions with other surgical instruments;3–5 (3) unrestricted intra-abdominal manipulation and more intuitive control;6–8 and (4) no need for a dedicated port, which reduces abdominal tissue damage.9 Although in vivo laparoscopic cameras feature the advantages mentioned above, they are still in their infancy for real MIS tasks. One of the major issues that impedes in vivo cameras from being practical is their inferior imaging performance, for three main reasons. First, the sizes of the imaging sensors and optical lenses applied in in vivo cameras are limited by the compact dimensions of the cameras. It is a great challenge to achieve imaging resolutions comparable to off-the-shelf conventional laparoscopic cameras, such as the Stryker HD 3-Chip series cameras10 and the Olympus 4K camera.11 Second, beyond the imaging sensors themselves, lighting systems play a crucial role in determining the quality of surgical images. State-of-the-art in vivo laparoscopic cameras employ bare LEDs or LEDs combined with poorly designed reflectors. The uncontrolled light beams therefore either waste most of their energy illuminating areas outside the camera's FOV6 or produce a bright center and dark margins in the plane of the imaging sensor.2,5 In contrast, conventional laparoscopic video systems introduce external xenon/LED light sources into abdominal cavities via fiber optics inside the rods.
Illumination optical designs are usually applied only to the rods to improve energy efficiency and illuminance uniformity.12 Last, due to the commercialization of conventional laparoscopic video systems, their image processing software is well developed, with sophisticated image enhancement techniques that further widen the gap between in vivo laparoscopic cameras and conventional laparoscopic video systems. The objective of this paper is to propose a solution to the illumination issues in state-of-the-art in vivo laparoscopic cameras. We aim to push in vivo laparoscopic cameras one step forward toward practical use in MIS. There are two major challenges in achieving this goal. The first challenge is the deployment of light sources and nonimaging optical lenses in an in vivo laparoscopic camera, whose outer diameter is limited by a trocar's inner diameter (3 to 30 mm13). Considering the common deployment of LEDs surrounding an imaging sensor, it is difficult to install an additional optical system for the LEDs in such a small area to achieve uniform illumination. In addition, the coaxial configuration of an imaging sensor and light sources results in a lack of shadow depth cues in the output two-dimensional (2-D) images. In some cases, shadow depth cues are desired by surgeons to compensate for degraded visual information and improve surgical performance.14,15 The second challenge is the design of the nonimaging optics for the light sources. The lighting system should satisfy the following requirements: (1) uniform illuminance distribution on a target surgical area; (2) high optical efficiency, i.e., maximally projecting light rays only inside the camera's FOV; and (3) a compact design that fits in the limited space of an in vivo laparoscopic camera. In this paper, we propose a transformable design of an in vivo laparoscopic camera system that is able to carry well-designed freeform optical lenses.
We also develop an effective freeform optical lens design method for the LEDs to achieve the desired illumination on target surgical areas. As conceptually shown in Fig. 1, the device is delivered into the abdominal cavity in the folded mode, as shown in Fig. 1(b). The device transformation is activated to expose the lighting system and the imaging system in the abdominal cavity after the device is magnetically anchored against the abdominal wall, as shown in Fig. 1(c). It is a crucial task to design freeform optical lenses that can be harbored on the wings [Fig. 1(a)-⑧] and can meet the lighting requirements discussed previously. To generate a smooth freeform optical surface, the integrability condition of the optical surface normal vectors16 needs to be enforced. Although the nonstandard Monge–Ampere equation, which governs the freeform lens design problem,17 can guarantee this integrability condition, it is very difficult to compute a convergent solution. Instead, we propose an efficient ray-mapping-based method to generate a smooth freeform surface for the lighting system. A ray map between the light source and the target area can be formulated as a Monge–Kantorovich problem and governed by a standard Monge–Ampere equation. We introduce an effective numerical method to solve the standard Monge–Ampere equation. This method employs a sequence of higher order quasilinear PDEs to approximate the solution of the lower order nonlinear PDE (the standard Monge–Ampere equation).18,19 Based on the computed ray map, an initial optical surface can be constructed by Snell's law.20 This initial surface construction method suffers from accumulated errors in the surface's normal vectors. To improve the optical design performance, we propose an iterative optimization technique to correct the initial surface. To reduce the distortion caused by the extended sizes of LEDs, we employ a feedback modification method to improve illuminance uniformity.
Our proposed freeform optical design method features an easy-to-implement numerical solver, fast convergence speed, and optimized optical performance, which have been verified using an optical design software package. This method is not only very effective for designing the lighting system of the in vivo laparoscopic camera, but it can also serve as a general-purpose freeform surface design method for other related applications.

2. Method

In this section, we first briefly describe our in vivo robotic laparoscopic system design and its application scenario, and discuss the design requirements of the lighting system. Then, we elaborate our proposed freeform optical design method to achieve the lighting requirements.

2.1. Configuration of In Vivo Laparoscopic System

Figure 1 shows the configuration of the in vivo laparoscopic system for MIS, which consists of a camera module (a)-⑥, a lighting system (a)-⑧, actuation mechanisms for the robot's transformation (a)-④⑤ and orientation control (a)-②③, and a magnetic anchoring unit (a)-① paired with an external magnetic holder for affixing the robot against the inner side of the abdominal wall. The outer diameter of the folded mode [Fig. 1(b)] is designed to be 17 mm, which can fit in a trocar with a 20 mm sleeve diameter.21 An external magnetic holder navigates the inserted robotic platform to a desired location and anchors it against the abdominal wall, as shown in Fig. 1(c). Then, the wings [Fig. 1(a)-⑦] are extended to expose the camera module and the lighting system inside the abdominal cavity for visualizing the surgical area. Imaging data are transmitted to and displayed on a monitor screen for guiding surgical tasks.

2.2. Design Requirements of Lighting System

To clearly discuss the design requirements of the lighting system, we use the CMOS imaging sensor OV785022 and a pin-hole lens23 with an FOV of 48 deg horizontally and 36 deg vertically as a benchmark setup.
The selected imaging sensor has a sensitivity of , an imaging array size of , and a pixel size of . For a comparable camera system in proper working condition, a maximum illuminance of 4000 lx on the target illuminated area at a distance of 50 mm has been reported.6 The number of light rays entering the imaging sensor heavily depends on the form of the illuminated area and is difficult to characterize and control. To ensure sufficient light entering the imaging sensor, we conservatively require a minimum illuminance of 10,000 lx at a distance of 100 mm. The illumination radius is set as 80 mm to contain the camera's FOV when the camera-to-target distance is 100 mm. To uniformly achieve an illuminance of 10,000 lx within this area, the minimum total luminous flux of the lighting system is calculated to be 200.96 lm. The lighting requirements are summarized in Table 1.

Table 1. Specifications of lighting requirements.
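The minimum-flux figure above is simple arithmetic and can be reproduced directly (a minimal sketch; the only assumption is that the illuminated area is the stated 80 mm radius disk):

```python
import math

E_min = 10_000   # required minimum illuminance, lx (lm/m^2)
radius = 0.080   # illumination radius, m

area = math.pi * radius**2   # illuminated disk area, m^2
flux_min = E_min * area      # minimum total luminous flux, lm
print(round(flux_min, 2))    # ~201 lm (the text's 200.96 lm uses pi = 3.14)
```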
2.3. Problem Formulation of Optical Lens Design for Lighting System

Benefiting from state-of-the-art high-efficiency tiny LEDs, e.g., the Cree XLamp XQ-E (128 lm at 2.9 V and 350 mA) and the Nichia NCSWE17A (118 lm at 3.0 V and 350 mA), only one or two such LEDs can satisfy the luminous flux requirement in Table 1. Optical lenses are considered to have better light beam control performance than reflectors.24 Our proposed in vivo laparoscopic lighting system uniquely provides the feasibility of integrating optical lenses with the LEDs for highly efficient, uniform illumination of target surgical areas. Figure 2 conceptually shows the configuration of the lighting system. Three LEDs are separately installed on three wings to compensate for energy loss in the lighting system. One actuator drives the wings to reach a common opening angle. For each LED, all the light rays should be redirected and uniformly distributed on the illuminated area by applying a freeform optical lens, as shown in Fig. 2(a). In addition, all the freeform lenses should be able to fit in the folded mode, as shown in Fig. 2(b). Therefore, the key problem of this research is how to design a proper freeform optical lens for each LED that redirects the light beams to be uniformly distributed on a target surgical area. In this paper, we contribute an effective method to solve this problem, which is detailed in the following sections.

2.4. Related Work on Freeform Optical Lens Design

The freeform optical design problem is usually approached under the assumption of a zero-étendue source (point source) and governed by a nonstandard Monge–Ampere equation,16,17,25,26 which is a second-order nonlinear PDE. However, due to the high nonlinearity of the nonstandard Monge–Ampere equation, it is very difficult to find an effective and easy-to-implement numerical method to compute a convergent solution.
Alternative methods have been developed to avoid solving the nonstandard Monge–Ampere equation directly, such as the supporting quadrics methods.27–29 However, as the number of quadrics grows, the computational cost increases quickly. The ray-mapping method is another effective way to design freeform lenses,20,30 which basically follows two steps: (1) ray map generation and (2) optical surface construction. The main challenge in the ray-mapping method is to enforce integrability conditions on the normal vectors of constructed optical surfaces. This ray-mapping requirement can be formulated as a Monge–Kantorovich problem, which can be equivalently represented by a standard Monge–Ampere equation.31 Existing ray-mapping computation methods, however, are either hard to implement numerically32,33 or tricky to converge.31,34 In this paper, we contribute an effective ray-mapping method for freeform lens design. This method features fast convergence speed and easy numerical implementation. Based on the computed ray map, a freeform optical surface is constructed by the following procedures: initial freeform optical surface construction, correction of the normal vectors on the freeform surface, and feedback modification of the desired illuminance distribution on the target region. The detailed design method is elaborated in the following sections.

2.5. Design Framework of Freeform Optical Lens

Figure 3 shows the framework of our proposed freeform optical lens design method for the in vivo laparoscopic lighting system. The design requirements, the LED's luminous intensity distribution, and the desired illuminance distribution on a target region are initialized as the design inputs. The ray-mapping method proposed in Sec. 2.6 requires both the LED and the illuminated area to be represented in the form of illuminance distributions. The conversion of the LED's representation from a luminous intensity distribution to an illuminance distribution is detailed in Sec. 2.7.
According to the computed ray map, an initial freeform optical surface is constructed in Sec. 2.8.1. To reduce accumulated errors in the normal vectors on the freeform surface, a freeform surface correction method is proposed in Sec. 2.8.2. Because the LEDs are extended light sources, the solution derived in the initial design will be degraded, since a point light source is assumed in the design of the freeform optical lenses. A feedback modification procedure introduced in Sec. 2.8.3 is employed to address this issue. A few iterations of the feedback modification are required to generate the final design of the freeform optical lens.

2.6. Ray-Mapping Method

Let E_S(x, y) and E_T(t_x, t_y) represent the LED's illuminance distribution and the prescribed target illuminance distribution, respectively. As shown in Fig. 4, our objective is to find the ray-mapping function m that transfers E_S to E_T, where (x, y) and (t_x, t_y) are Cartesian coordinates confined to the source domain S and the target domain T. The above statement is a special case of the Monge–Kantorovich problem. Under the assumption that no energy is lost in transport, m should satisfy

∫∫_S E_S(x, y) dx dy = ∫∫_T E_T(t_x, t_y) dt_x dt_y. (1)

According to the mapping (t_x, t_y) = m(x, y), Eq. (1) can be represented as

E_T(m(x, y)) |det ∇m(x, y)| = E_S(x, y). (2)

Brenier's theorem35 states that there exists a unique solution to a Monge–Kantorovich problem, which can be characterized as the gradient of a convex potential u. Substituting m = ∇u in Eq. (2), u is a solution of the standard Monge–Ampere equation

det(D²u(x, y)) = E_S(x, y)/E_T(∇u(x, y)). (3)

2.6.1. Method for solving the standard Monge–Ampere equation

It has been observed that a weak solution of a lower order nonlinear PDE can be approximated by a sequence of higher order quasilinear PDEs.18 To approximate the solution of a standard Monge–Ampere equation, which is a second-order nonlinear PDE, a biharmonic operator with fourth-order partial derivatives is a good option.19 The approximated solution of Eq. (3) can thus be computed from

−ε Δ²u_ε + det(D²u_ε) = E_S/E_T(∇u_ε), (4)

where ε > 0, and u = lim_{ε→0} u_ε is a moment solution if the limit exists. The inner points of S should satisfy Eq. (4).
The points on the boundary of S should be mapped to the boundary of T. According to m = ∇u, a Neumann boundary condition (BC) can be formulated as

H(∇u(x, y)) = 0, (x, y) ∈ ∂S, (5)

where H is the mathematical representation of the shape of the target boundary ∂T. Combining Eqs. (4) and (5), the ray map can be computed from the following quasilinear PDE and Neumann BC:

−ε Δ²u_ε + det(D²u_ε) = E_S/E_T(∇u_ε) in S, H(∇u_ε) = 0 on ∂S. (6)

2.6.2. Numerical technique for computing the ray map

The ray map m, which is governed by Eq. (6), is computed by the procedures shown in the "ray map generation" portion of Fig. 3. The main idea of the proposed numerical technique is to iterate the approximated solution u_ε by updating ε. Specifically, ε is set as a sequence of gradually reduced constant values, e.g., 1 followed by progressively smaller values. In each iteration, an initial guess is provided either by the output of the previous iteration or by manual selection (in the first iteration). The number of iterations depends on the number of ε values in the sequence. We can start the iteration with ε = 1 to approximate the solution of Eq. (3). As ε approaches 0, Eq. (4) becomes equivalent to Eq. (3). However, this does not mean that the best approximated solution is found by finalizing ε as 0 in the iteration procedure: the error of the numerical solution of Eq. (6), computed with a mesh size h, is bounded in terms of both ε and h. The finalized value of ε in Eq. (6) is therefore related to h for achieving optimized convergence speed and minimum error, and this relationship depends on the norm being used. According to the numerical experiments analyzed in Ref. 36, the minimum global error is achieved when ε is tied to an appropriate power of h.

To numerically discretize Eq. (6), the quasilinear PDE and the BC are reformulated as Eq. (8). The discretization of the first-order and second-order partial derivatives in Eq. (8) adopts the central finite difference method in the inner region of S, and the forward/backward finite difference method in the boundary region, with second-order truncation errors. The discretization of the biharmonic term in Eq. (8) can be achieved by a 13-point stencil37

Δ²u_{i,j} ≈ (1/h⁴)[20u_{i,j} − 8(u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}) + 2(u_{i+1,j+1} + u_{i+1,j−1} + u_{i−1,j+1} + u_{i−1,j−1}) + u_{i+2,j} + u_{i−2,j} + u_{i,j+2} + u_{i,j−2}], (9)

where we write u(x_i, y_j) as u_{i,j} for short.
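The 13-point stencil in Eq. (9) can be sanity-checked numerically: for the quartic u = x⁴ + y⁴, for which Δ²u = 48 everywhere, the fourth-order differences in the stencil are exact (the grid setup below is illustrative):

```python
import numpy as np

def biharmonic_13pt(u, i, j, h):
    """Discrete biharmonic operator (Delta^2 u) at interior grid point
    (i, j), using the classical 13-point stencil of Eq. (9)."""
    return (
        20 * u[i, j]
        - 8 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
        + 2 * (u[i + 1, j + 1] + u[i + 1, j - 1] + u[i - 1, j + 1] + u[i - 1, j - 1])
        + (u[i + 2, j] + u[i - 2, j] + u[i, j + 2] + u[i, j - 2])
    ) / h**4

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**4 + Y**4                       # Delta^2 u = 24 + 24 = 48
val = biharmonic_13pt(u, 5, 5, h)     # evaluated at the interior point (0.5, 0.5)
```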
However, undefined points are introduced when near-boundary points are discretized using the 13-point stencil in Eq. (9). Figure 5 demonstrates an example in which the center of a 13-point stencil (marked by the red dot) is located in the near-boundary region. In this case, the outermost stencil points (marked by the gray dots) fall outside the source region S. Approximations of these undefined values are required; they can be computed from nearby grid values in the h-spaced grid and the first-order partial derivatives on the boundary, which are in turn determined by the BC in Eq. (8), where h is the mesh size in both the x and y directions. The numerical discretization of Eq. (8) results in a set of nonlinear equations that can be represented in the form

F(U) = 0, (11)

where U denotes the vector of unknowns u_{i,j}. Newton's method is chosen as the numerical solver [Fig. 3(d)] to compute U [Fig. 3(e)]. Figure 3(f) compares the ε of the current iteration with the final value in the sequence. If ε has not yet reached that value, the initial guess and ε in Fig. 3(a) are updated with the computed solution and a decreased ε. Otherwise, the final ray map [Fig. 3(h)] is computed as the gradient of the numerical solution from the current iteration.

2.7. Luminous Intensity to Illuminance Conversion

The ray-mapping technique proposed in the previous section requires the illuminance distribution of the LED. However, the LED employed in this work is a Lambertian light source, which is usually described by a luminous intensity distribution I(θ) in a hemispherical space, where θ denotes the polar angle of a light ray and I(θ) represents the luminous intensity at θ. We apply the stereographic projection method17 to convert the source's luminous intensity to an illuminance distribution defined on a plane. The main idea of this method is to project the light energy with emitting direction (θ, φ) onto the plane at coordinates (u, v), as shown in Fig. 6. The final form of the illuminance distribution on the plane is

E(u, v) = 4I(θ, φ)/(1 + u² + v²)², (12)

where (u, v) = (tan(θ/2) cos φ, tan(θ/2) sin φ).
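As a consistency check of Eq. (12), the projected illuminance should integrate over the unit disk (the image of the hemisphere) to the total flux of a Lambertian source, πI₀. The inverse relation cos θ = (1 − u² − v²)/(1 + u² + v²) used below is the standard stereographic identity and is assumed to match Ref. 17:

```python
import numpy as np

I0 = 1.0  # peak luminous intensity (cd); illustrative value

def E_plane(u, v):
    """Eq. (12) for a Lambertian source: E = 4*I(theta)/(1 + u^2 + v^2)^2,
    with I(theta) = I0*cos(theta) and cos(theta) recovered from (u, v)."""
    r2 = u**2 + v**2
    cos_theta = (1.0 - r2) / (1.0 + r2)
    return 4.0 * I0 * cos_theta / (1.0 + r2) ** 2

# integrate over the unit disk; this should recover the Lambertian flux pi*I0
n = 2001
u = np.linspace(-1.0, 1.0, n)
U, V = np.meshgrid(u, u)
du = u[1] - u[0]
inside = U**2 + V**2 <= 1.0
flux = np.sum(np.where(inside, E_plane(U, V), 0.0)) * du * du
```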
For the grid points (u_i, v_j), we define the source illuminance E_S accordingly.

2.8. Freeform Optical Surface Construction

2.8.1. Initial surface construction

Based on the computed ray map, each pair of coordinates (u_i, v_j) on the source grid can be mapped to a point T_{i,j} on the target plane, where i and j represent discretization indices of the light source. According to the rotation matrix and the translation vector between the global frame and the LED's local frame, T_{i,j} can be represented in the LED's local coordinates, as shown in Fig. 7(b). A unit incident ray vector I from the light source is defined accordingly, whose components are functions of (u_i, v_j). We employ an easy-to-implement surface construction method20 to design an initial optical surface for the light source. The main idea of this method is to first construct one curve with a sequence of points, as shown in Fig. 7(a)-①. Then, the generated curve is used to compute the surface points along the second direction in Fig. 7(a)-②. As shown in Fig. 7(a), we define O as a unit outgoing ray from the optical surface and formulate it as

O = (T − P)/|T − P|, (13)

where P denotes a point to be constructed on the surface. In Fig. 7(a)-①, considering the desired lens dimensions, an initial point can be manually selected according to a desired lens volume. Thus, O is calculated with Eq. (13). The normal vector N at P can be computed by Snell's law

N = (n₁O − n₀I)/|n₁O − n₀I|, (14)

where n₁ denotes the refractive index of the medium surrounding the lens, and n₀ represents the refractive index of the lens. The coordinates of the next point on the curve are computed by solving for the intersection between the next incident light ray and the plane defined by P and N. The curves in direction ② can be computed using the points on the first curve as initial points.

2.8.2. Correction of surface normal vectors

Although this method provides an easy way to construct a freeform surface with the required lens dimensions, due to accumulated errors it cannot guarantee that the computed normal vector at a surface point is perpendicular to the vectors between that point and its adjacent points, as shown in Fig. 7(b).
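The normal-vector computation of Eq. (14) can be sketched as follows; the sign convention (incident ray inside the lens with index n₀, outgoing ray in the surrounding medium with index n₁) is our reading of the text:

```python
import numpy as np

def snell_normal(I, O, n_lens=1.49, n_out=1.0):
    """Unit surface normal that refracts unit incident ray I (index n_lens)
    into unit outgoing ray O (index n_out), per the vector form of Snell's
    law: N is parallel to n_out*O - n_lens*I."""
    N = n_out * np.asarray(O, dtype=float) - n_lens * np.asarray(I, dtype=float)
    return N / np.linalg.norm(N)

# example: an axial ray bent 20 deg toward +x on exiting a PMMA lens
I = np.array([0.0, 0.0, 1.0])
a = np.deg2rad(20.0)
O = np.array([np.sin(a), 0.0, np.cos(a)])
N = snell_normal(I, O)
# Snell consistency: n_lens*(I x N) == n_out*(O x N)
```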
To address this problem and improve the illumination performance, we introduce an iterative optimization technique to correct the constructed initial surface for a better fit to the normal vectors.38 Ideally, if the surface mesh is fine enough, a surface point P_{i,j} and the normal vector N_{i,j} at this point should satisfy the constraints

N_{i,j} · (P_{i+1,j} − P_{i,j}) = 0, (15)
N_{i,j} · (P_{i,j+1} − P_{i,j}) = 0. (16)

Assume the optical surface is constructed from K points. By substituting P_{i,j} = d_{i,j} I_{i,j} in Eqs. (15) and (16), where d_{i,j} denotes the distance between the source and the surface point P_{i,j}, we obtain 2K constraints

min Σ_{i,j} [ (N_{i,j} · (P_{i+1,j} − P_{i,j}))² + (N_{i,j} · (P_{i,j+1} − P_{i,j}))² ]. (17)

The nonlinear least-squares method is employed to minimize Eq. (17) with the d_{i,j} as variables. Updated normal vectors are computed according to Eq. (14) using the optimized surface points of the current iteration and the ray map. Iterations proceed to compute new surface points until they satisfy the convergence condition ‖P^{(k)} − P^{(k−1)}‖ < δ, where k represents the current iteration number and δ is the stopping threshold. Finally, the optical surface can be represented using the freeform surface points with a nonuniform rational basis spline (NURBS).39

2.8.3. Feedback modification

Due to the zero-étendue source assumption, illuminance uniformity will be degraded when using LEDs of extended size, especially in the case of designing small-volume optical lenses. This issue can be mitigated by employing a feedback modification method.40,41 Denote E_d as the desired illuminance distribution on a target region and E_s as the simulated illuminance distribution obtained by applying the freeform lenses. The modified desired illuminance distribution for the next iteration can be defined as

E_d^{k+1}(t_x, t_y) = E_d^k(t_x, t_y) E_d(t_x, t_y)/E_s^k(t_x, t_y). (18)

As shown in the design framework in Fig. 3, the illumination performance is evaluated in each iteration to check whether a satisfactory illuminance uniformity has been achieved. If so, the freeform optical lens design is complete. Otherwise, another iteration is executed to modify the surface of the freeform lenses.
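A single feedback-modification step amounts to a pointwise rescaling of the current design target by the ratio of desired to simulated illuminance (a minimal sketch; the exact update forms in Refs. 40 and 41 may include smoothing or clipping):

```python
import numpy as np

def feedback_update(E_desired, E_current, E_simulated, eps=1e-12):
    """One feedback-modification step: brighten the design target where the
    simulation came out too dark, and dim it where it came out too bright."""
    return E_current * E_desired / np.maximum(E_simulated, eps)

# toy example: a pixel simulated at twice the desired level gets its
# target halved for the next design iteration
next_target = feedback_update(np.array([1.0]), np.array([1.0]), np.array([2.0]))
```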
3. Results

3.1. Evaluation of Freeform Optical Lens Design Method

In this section, we evaluate the performance of the freeform optical lens design method for the in vivo laparoscopic lighting system. Figures 8(a) and 8(b) show the on-axis and off-axis tests, which were conducted separately using an optical design software package (TracePro, Lambda Research Corp.) to investigate the effectiveness of our optical design method in different application scenarios. We employed polymethyl methacrylate (PMMA) as the lens material, with a refractive index of 1.49, and Nichia NCSWE17A LEDs42 with a luminous flux of 118 lm. To verify that our proposed method is flexible and capable of designing freeform optical lenses for illuminating target areas with different patterns, we set the target area to a circular pattern and a square pattern for the on-axis illumination tests. The detailed specifications are summarized in Table 2.

Table 2. Evaluation specifications of the freeform optical design method.
3.1.1. Ray map computation

We first convert the luminous intensity distribution of the LED [Fig. 8(c)] to a normalized illuminance distribution [Fig. 8(d)]. The computation domain of the LED is then discretized by mesh grids. The mesh size determines the minimum ε according to our ray-mapping algorithm. As shown in Fig. 9, we selected the sequence of ε values 1, 0.5, and 0.025 to approximate the numerical solution of the ray maps. To validate the effectiveness of the ray map generation method, we demonstrate the intermediate ray map results computed with the intermediate ε. The ray maps computed with the final ε are used to generate the initial surfaces of the freeform optical lenses for the LEDs. Figure 10 shows the convergence speed of our proposed ray map generation method, characterized by the residual values of Eq. (11) versus the number of iterations. The unit of the residual of Eq. (11) is millimeters. Considering that the state-of-the-art manufacturing accuracy for a freeform optical lens is at the submicrometer level, we conservatively set the convergence threshold at the subnanometer level. We observed that in all the tests, the residual reaches this level within 10 iterations.

3.1.2. On-axis tests of freeform optical lens design

Figure 8(a) shows the simulation setup of the on-axis tests for the freeform optical lens design. A circular target region with a radius of 80 mm and a square target region with a side length of 160 mm are employed for the on-axis tests. The illumination distance from the LED to the target region center is set as listed in Table 2. Figures 11(a) and 11(b) demonstrate the designed lens profiles with labeled dimensions. Figures 11(c) and 11(d) show the simulated illuminance distributions on the target regions. The optical efficiencies of the freeform lenses are 88.3% and 90.5%, respectively, taking Fresnel losses into account. The illuminance uniformity can be quantified as

U = (1 − σ/Ē) × 100%, (19)

where σ and Ē are the standard deviation and mean of the collected illuminance data, respectively.
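Equation (19) can be implemented in a few lines (illustrative data below; the paper evaluates it along horizontal and vertical scan lines of the simulated distribution):

```python
import numpy as np

def uniformity(E):
    """Illuminance uniformity per Eq. (19): U = (1 - sigma/mean) * 100%."""
    E = np.asarray(E, dtype=float)
    return (1.0 - E.std() / E.mean()) * 100.0

flat = np.full(100, 10_000.0)                                    # ideal case
ripple = 10_000.0 + 500.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
u_flat = uniformity(flat)      # 100.0: zero deviation
u_ripple = uniformity(ripple)  # a 5% ripple lowers the score a few points
```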
The optical performance of the on-axis tests is detailed in Table 3.

Table 3. Optical performance of on-axis tests.
3.1.3. Off-axis tests of freeform optical lens design

The simulation setup of the off-axis tests is shown in Fig. 8(b). The illuminated region is set as a circular region with a radius of 80 mm. The distance from the LED to the target plane is set as listed in Table 2. Three axis offsets (including 10 mm and 15 mm) are introduced to evaluate the optical performance when the LED's axis and the target region's axis do not coincide. To construct freeform optical surfaces in this more general case, a transformation matrix is required to convert the ray map from the global coordinates to the LED's local coordinates. Figure 12 shows the designed lens profiles and the simulated illuminance results for each case. Due to the axis offsets, the optical lenses are no longer symmetric, so we provide front and side views of the lenses, as shown in Figs. 12(a), 12(d), and 12(g). Figures 12(b), 12(e), and 12(h) show the simulated illuminance distributions on the circular target region. The optical efficiencies of the freeform lenses are 88.06%, 87.74%, and 88.15%, respectively, taking Fresnel losses into account. Figures 12(c), 12(f), and 12(i) show the illuminance uniformities along the horizontal and vertical directions in the lighting regions. We summarize the optical performance of the off-axis tests in Table 4.

Table 4. Optical performance of off-axis tests.
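The global-to-local conversion mentioned above is a standard rigid-frame transform; the sketch below uses generic R and t (the paper's specific transformation matrix is not reproduced):

```python
import numpy as np

def to_led_frame(p_global, R, t):
    """Express a global-frame point in an LED's local frame, where R and t
    are the rotation and translation of the LED frame w.r.t. the global
    frame."""
    return R.T @ (np.asarray(p_global, dtype=float) - t)

# example: LED frame rotated 90 deg about z and offset 10 mm along x
c, s = 0.0, 1.0
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])
p_local = to_led_frame([10.0, 5.0, 0.0], R, t)  # -> [5, 0, 0]
```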
3.2. Integration and Evaluation of In Vivo Laparoscopic Lighting System

So far, we have verified the effectiveness of our proposed freeform optical lens design method. In the following, we evaluate the in vivo laparoscopic lighting system against the lighting requirements given in Table 1.

3.2.1. Final design of LEDs' freeform optical lenses

Recall the configuration of the lighting system in Fig. 2. The lens installation position on the wings is set as 20.5 mm. The opening angle of the wings is set as 80 deg for the extended mode. In the design, we set the lens volume with a maximum radial length of 5.4 mm, which guarantees that the three lenses can fit in the robotic camera. The initial illumination distance is set as 100 mm. The radius of the target circular area is set as 80 mm. The specifications of the freeform optical lens design for the laparoscopic lighting system are summarized in Table 5.

Table 5. Specifications of the lighting system setup.
Figure 13 shows the three-dimensional (3-D) design of the in vivo laparoscopic lighting system. Figure 13(a) shows three views of the freeform lens. Figure 13(b) demonstrates the compactness of the lens, which satisfies the lens volume restriction. Figure 13(c) shows the integration of a lens and an LED in one wing. Figure 13(d) shows the 3-D structure of the assembled laparoscopic lighting system.

3.2.2. Lighting performance on target region

We evaluate the performance of the developed lighting system in accordance with the simulation setup in Table 5. Due to the symmetric arrangement of the three wings, a single LED is first energized, emitting light rays through its freeform lens. Figure 14(a) shows the illuminance distribution on the target region. Taking Fresnel losses into account, the optical efficiency of the designed freeform lens is 89.45%, which means that, for each LED, 105.55 lm of the total 118 lm luminous flux is successfully projected onto the desired target region. The average illuminance provided by the single LED is 5473.8 lx. Using the illuminance data shown in Fig. 14(b), the horizontal and vertical illuminance uniformities are computed by Eq. (19) as 95.87% and 94.78%, respectively. Figure 14(c) shows the illuminance distribution on the target region when all the LEDs are energized. In this case, the total luminous flux provided by the lighting system is 354 lm, whereas the total luminous flux falling on the target region is 316.58 lm. The optical efficiency is 89.43%. The average illuminance on the target region is 12,441 lx. Figure 14(d) shows that the illuminance uniformities along the horizontal and vertical directions are 96.33% and 96.79%, respectively. Figure 14(e) demonstrates the illuminance distribution on the target region as a 3-D profile for better illustration. We summarize the evaluation results of the lighting performance in Table 6.
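The flux bookkeeping in this paragraph is internally consistent and easy to re-derive (values copied from the text):

```python
led_flux = 118.0            # lm emitted per LED
eff_single = 0.8945         # reported single-lens optical efficiency

on_target_single = led_flux * eff_single   # -> 105.55 lm, as reported
total_emitted = 3 * led_flux               # -> 354 lm with all LEDs on
eff_total = 316.58 / total_emitted         # -> ~0.8943, matching 89.43%
```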
The developed in vivo laparoscopic lighting system clearly satisfies all of the design requirements in Table 1.

Table 6. Performance evaluation results of the lighting system.
3.2.3. Refocusing of light beams

In MIS, after the in vivo laparoscopic system is inserted into the abdominal cavity, the distance between the camera and a target surgical area might be shorter than 100 mm. Although the lighting system at the initial wing angle can still provide good illumination in that region, the illuminance uniformity will be degraded, and more energy will be wasted outside the camera's FOV. Our proposed in vivo laparoscopic lighting system features a refocusing function, which is capable of controlling the light beams by adjusting the wings' angle β to uniformly illuminate the target area within the camera's FOV when the camera-to-target distance changes, as shown in Fig. 15(a). For instance, we set the desired target region as shown by the thick green line. When the wings' angle is set to 80 deg, the illuminated region is covered by the yellow lines; this value works best at the initial camera-to-target distance. To refocus the light on the target region at a shorter distance, we decrease the wings' opening angle from β to β − Δβ. We determine the value of Δβ from the included angle between the green dashed arrow and the yellow dashed arrow. According to the geometry of this setup, Δβ is calculated to be 6 deg. Similarly, to illuminate the second target region in Fig. 15(a), the wings' angle should be decreased by the corresponding Δβ from the initial angle of 80 deg. Figures 15(b)–15(e) show the illuminance distributions obtained by refocusing the light beams onto the two closer target planes. In the case of Figs. 15(b) and 15(c), β is set at 74 deg. The average illuminance in the circular region with a radius of 48 mm is calculated to be 45,823 lx. The optical efficiency is about 92%, taking Fresnel losses into account. The illuminance uniformities along the horizontal and vertical directions are 98.29% and 98.22%. In the case of Figs. 15(d) and 15(e), β is set to illuminate the target area with a radius of 64 mm. The average illuminance in the circular region with a radius of 64 mm is calculated to be 24,172 lx. The optical efficiency is 90.9%, taking Fresnel losses into account.
The horizontal and vertical illuminance uniformities are 95.37% and 95.98%, respectively. The lighting performance of the refocused light beams is summarized in Table 7.

Table 7. Lighting performance of the light refocusing tests.
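The Δβ ≈ 6 deg value can be reproduced from the stated geometry under two assumptions of ours: the angle is measured from the lens position at the 20.5 mm radial offset (Table 5), and the target radius scales with distance so that the camera's FOV stays filled (r = 0.8L, placing the r = 48 mm region at L = 60 mm):

```python
import math

def angle_to_target_edge(r_mm, dist_mm, lens_offset_mm=20.5):
    """Angle (deg) from a lens mounted at a radial offset to the edge of a
    target disk of radius r_mm at distance dist_mm (assumed construction)."""
    return math.degrees(math.atan((r_mm - lens_offset_mm) / dist_mm))

baseline = angle_to_target_edge(80.0, 100.0)   # r = 80 mm at 100 mm
near = angle_to_target_edge(48.0, 60.0)        # r = 0.8*L at L = 60 mm
delta_beta = baseline - near                   # ~6 deg, as in the text
```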
4. Conclusion

In this paper, we propose an innovative transformable in vivo laparoscopic lighting system design that is able to carry well-designed freeform nonimaging optical lenses to provide high illuminance uniformity and high optical efficiency in a designated surgical area. Depending on the distance from the on-board camera to a surgical area, the illuminated region can be adjusted by changing the wings' opening angle without affecting the illuminance uniformity. To design the freeform optical lenses, we present a ray-mapping-based method for constructing freeform optical surfaces. A ray map governed by a standard Monge–Ampere equation is efficiently computed by introducing a biharmonic operator into the PDE. An initial optical surface is constructed by Snell's law based on the generated ray map. To correct accumulated errors on the initial optical surface and to improve the illumination uniformity degraded by the extended size of the LEDs, we employ a surface optimization method and a feedback modification method. Simulation verifications validated that our proposed freeform optical lens design method features fast convergence of the ray map generation (the residual reaches the threshold within 10 iterations), high illuminance uniformity (above 95% on average), and high optical efficiency accounting for Fresnel losses (above 89% on average).

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

Reza Yazdanpanah Abdolmalaki was supported by the National Science Foundation (ECCS-1309921). We would like to give special thanks to Professor Xiaobing Feng in the Department of Mathematics at the University of Tennessee, Knoxville, for discussing numerical solvers of the Monge–Ampere equation.

References

T. Hu et al.,
“Insertable surgical imaging device with pan, tilt, zoom, and lighting,” Int. J. Rob. Res. 28(10), 1373–1386 (2009). http://dx.doi.org/10.1177/0278364908104292

C. A. Castro et al., “A wireless robot for networked laparoscopy,” IEEE Trans. Biomed. Eng. 60(4), 930–936 (2013). http://dx.doi.org/10.1109/TBME.2012.2232926

S. Platt, J. Hawks and M. Rentschler, “Vision and task assistance using modular wireless in vivo surgical robots,” IEEE Trans. Biomed. Eng. 56(6), 1700–1710 (2009). http://dx.doi.org/10.1109/TBME.2009.2014741

B. S. Terry et al., “An integrated port camera and display system for laparoscopy,” IEEE Trans. Biomed. Eng. 57(5), 1191–1197 (2010). http://dx.doi.org/10.1109/TBME.2009.2037140

J. Cadeddu et al., “Novel magnetically guided intra-abdominal camera to facilitate laparoendoscopic single-site surgery: initial human experience,” Surg. Endoscopy 23(8), 1894–1899 (2009). http://dx.doi.org/10.1007/s00464-009-0459-6

M. Simi et al., “Magnetically activated stereoscopic vision system for laparoendoscopic single-site surgery,” IEEE/ASME Trans. Mechatronics 18(3), 1140–1151 (2013). http://dx.doi.org/10.1109/TMECH.2012.2198830

X. Liu, G. J. Mancini and J. Tan, “Design of a unified active locomotion mechanism for a capsule-shaped laparoscopic camera system,” in IEEE Int. Conf. on Robotics and Automation (ICRA ’14), 2449–2456 (2014). http://dx.doi.org/10.1109/ICRA.2014.6907200

X. Liu et al., “Design of a magnetic actuated fully insertable robotic camera system for single-incision laparoscopic surgery,” IEEE/ASME Trans. Mechatronics 21(4), 1966–1976 (2016). http://dx.doi.org/10.1109/TMECH.2015.2506148

P. Swain et al., “Development and testing of a tethered, independent camera for NOTES and single-site laparoscopic procedures,” Surg. Endoscopy 24(8), 2013–2021 (2010). http://dx.doi.org/10.1007/s00464-010-0897-1

“Stryker endoscopic cameras,” http://www.stryker.com/en-us/products/Endoscopy/VisualizationandDocumentationSystems/EndoscopicCameras/index.htm (accessed March 2017).

“Olympus 4K cameras,” http://medical.olympusamerica.com/products/VISERA-4K-UHD-System (accessed March 2017).

R. Wu, Y. Qin and H. Hua, “Improved illumination system of laparoscopes using an aspherical lens array,” Biomed. Opt. Express 7(6), 2237–2248 (2016). http://dx.doi.org/10.1364/BOE.7.002237

“Laparoscopic trocars,” http://www.laparoscopic.md/instruments/trocar (accessed March 2017).

G. B. Hanna, A. B. Cresswell and A. Cuschieri, “Shadow depth cues and endoscopic task performance,” Arch. Surg. 137(10), 1166–1169 (2002). http://dx.doi.org/10.1001/archsurg.137.10.1166

A. C. Lee et al., “Solid-state semiconductors are better alternatives to arc-lamps for efficient and uniform illumination in minimal access surgery,” Surg. Endoscopy 23(3), 518–526 (2009). http://dx.doi.org/10.1007/s00464-008-9854-7

H. Ries and J. Muschaweck, “Tailored freeform optical surfaces,” J. Opt. Soc. Am. A 19(3), 590–595 (2002). http://dx.doi.org/10.1364/JOSAA.19.000590

J. S. Schruben, “Formulation of a reflector-design problem for a lighting fixture,” J. Opt. Soc. Am. 62(12), 1498–1501 (1972). http://dx.doi.org/10.1364/JOSA.62.001498

M. G. Crandall and P. L. Lions, “Viscosity solutions of Hamilton-Jacobi equations,” Trans. Am. Math. Soc. 277(1), 1–45 (1983). http://dx.doi.org/10.1090/S0002-9947-1983-0690039-8

X. Feng and M. Neilan, “Vanishing moment method and moment solutions for fully nonlinear second order partial differential equations,” J. Sci. Comput. 38(1), 74–98 (2009). http://dx.doi.org/10.1007/s10915-008-9221-9

L. Wang, K. Qian and Y. Luo, “Discontinuous free-form lens design for prescribed irradiance,” Appl. Opt. 46(18), 3716–3723 (2007). http://dx.doi.org/10.1364/AO.46.003716

“FLEXIPATH Trocars, FP020,” http://www.ethicon.com/healthcare-professionals/products/access/trocars/other (accessed March 2017).

“OV7850 CMOS sensor,” http://www.ovt.com/download/sensorpdf/208/OmniVision_OV7850.pdf (accessed March 2017).

“Sunex imaging lens,” http://www.optics-online.com/OOL/DSL/DSL871.PDF (accessed March 2017).

“All facts for choosing LED optics correctly,” http://ledil.fi/sites/default/files/Documents/Technical/Articles/Article_1.pdf (accessed March 2017).

K. Brix, Y. Hafizogullari and A. Platen, “Designing illumination lenses and mirrors by the numerical solution of Monge-Ampère equations,” J. Opt. Soc. Am. A 32(11), 2227–2236 (2015). http://dx.doi.org/10.1364/JOSAA.32.002227

R. Wu et al., “Freeform illumination design: a nonlinear boundary problem for the elliptic Monge-Ampère equation,” Opt. Lett. 38(2), 229–231 (2013). http://dx.doi.org/10.1364/OL.38.000229

D. Michaelis, P. Schreiber and A. Bräuer, “Cartesian oval representation of freeform optics in illumination systems,” Opt. Lett. 36(6), 918–920 (2011). http://dx.doi.org/10.1364/OL.36.000918

F. R. Fournier, W. J. Cassarly and J. P. Rolland, “Fast freeform reflector generation using source-target maps,” Opt. Express 18(5), 5295–5304 (2010). http://dx.doi.org/10.1364/OE.18.005295

V. Oliker, “Mathematical aspects of design of beam shaping surfaces in geometrical optics,” in Trends in Nonlinear Analysis, 193–224, Springer, Berlin, Heidelberg (2003).

A. Bäuerle et al., “Algorithm for irradiance tailoring using multiple freeform optical surfaces,” Opt. Express 20(13), 14477–14485 (2012). http://dx.doi.org/10.1364/OE.20.014477

M. M. Sulman, J. Williams and R. D. Russell, “An efficient approach for the numerical solution of the Monge-Ampère equation,” Appl. Numer. Math. 61(3), 298–307 (2011). http://dx.doi.org/10.1016/j.apnum.2010.10.006

Z. Feng, B. D. Froese and R. Liang, “Freeform illumination optics construction following an optimal transport map,” Appl. Opt. 55(16), 4301–4306 (2016). http://dx.doi.org/10.1364/AO.55.004301

B. D. Froese, “A numerical method for the elliptic Monge-Ampère equation with transport boundary conditions,” SIAM J. Sci. Comput. 34(3), A1432–A1459 (2012). http://dx.doi.org/10.1137/110822372

R. Wu et al., “Initial design with L2 Monge-Kantorovich theory for the Monge-Ampère equation method in freeform surface illumination design,” Opt. Express 22(13), 16161–16177 (2014). http://dx.doi.org/10.1364/OE.22.016161

Y. Brenier, “Polar factorization and monotone rearrangement of vector-valued functions,” Commun. Pure Appl. Math. 44(4), 375–417 (1991). http://dx.doi.org/10.1002/cpa.3160440402

X. Feng and M. Neilan, “Mixed finite element methods for the fully nonlinear Monge-Ampère equation based on the vanishing moment method,” SIAM J. Numer. Anal. 47(2), 1226–1250 (2009). http://dx.doi.org/10.1137/070710378

L. Lapidus and G. F. Pinder, Numerical Solution of Partial Differential Equations in Science and Engineering, John Wiley & Sons, Hoboken, New Jersey (2011).

R. Swaminathan, S. K. Nayar and M. D. Grossberg, “Framework for designing catadioptric projection and imaging systems,” in Proc. of the IEEE Int. Workshop on Projector Camera Systems (2003).

L. Piegl and W. Tiller, The NURBS Book, 2nd ed., Springer Science & Business Media, New York (1997).

Y. Luo et al., “Design of compact and smooth free-form optical system with uniform illuminance for LED source,” Opt. Express 18, 9055–9063 (2010). http://dx.doi.org/10.1364/OE.18.009055

R. Wester et al., “Designing optical free-form surfaces for extended sources,” Opt. Express 22, A552–A560 (2014). http://dx.doi.org/10.1364/OE.22.00A552

“Nichia NCSWE17A,” http://www.nichia.co.jp/en/product/led_product_data.html?type=%27NCSWE17A%27 (accessed March 2017).
Biography

Xiaolong Liu is a research assistant professor at the University of Tennessee, Knoxville, Tennessee, USA. He received his BS and MS degrees in electrical and computer engineering from Northeastern University, China, in 2008 and 2010, respectively, and his PhD in biomedical engineering from the University of Tennessee, Knoxville, Tennessee, USA, in 2015. His current research interests include surgical robotics, biomedical optics, and other surgery-related engineering designs.

Reza Yazdanpanah Abdolmalaki received his BS degree in mechanical engineering from Amirkabir University of Technology, Tehran, Iran, in 2011 and his MS degree from the University of Tehran, Iran, in 2014. Currently, he is pursuing his PhD in the Department of Mechanical, Aerospace and Biomedical Engineering (MABE) at the University of Tennessee, Knoxville, Tennessee, USA. His current research interests include surgical robotics, control systems, robotics, and engineering design.

Gregory J. Mancini received his MD degree in general surgery from Mercer University School of Medicine in Macon, Georgia, in 2000. Currently, he is an associate professor of surgery at the University of Tennessee, Knoxville. His clinical practice and academic efforts focus on the area of minimally invasive surgery. He is board certified in general surgery by the American Board of Surgery and is a fellow of the American College of Surgeons.

Jindong Tan received his PhD in electrical and computer engineering from Michigan State University, East Lansing, Michigan, USA, in 2002. Currently, he is a professor and associate department head in the Department of Mechanical, Aerospace and Biomedical Engineering, The University of Tennessee, Knoxville, Tennessee, USA. His current research interests include mobile sensor networks, augmented reality and biomedical imaging, dietary assessment, and mobile manipulation.