Open Access
Registration of planar bioluminescence to magnetic resonance and x-ray computed tomography images as a platform for the development of bioluminescence tomography reconstruction algorithms
Bradley J. Beattie, Alexander D. Klose, Carl H. Le, Valerie A. Longo, Konstantine Dobrenkov, Jelena Vider, Jason A. Koutcher M.D., Ronald G. Blasberg
Abstract
The procedures we propose make possible the mapping of two-dimensional (2-D) bioluminescence image (BLI) data onto a skin surface derived from a three-dimensional (3-D) anatomical modality [magnetic resonance (MR) or computed tomography (CT)] dataset. This mapping allows anatomical information to be incorporated into bioluminescence tomography (BLT) reconstruction procedures and, when applied using sources visible to both optical and anatomical modalities, can be used to evaluate the accuracy of those reconstructions. Our procedures, based on immobilization of the animal and a priori determined fixed projective transforms, should be more robust and accurate than previously described efforts, which rely on a poorly constrained retrospectively determined warping of the 3-D anatomical information. Experiments conducted to measure the accuracy of the proposed registration procedure found it to have a mean error of 0.36±0.23 mm. Additional experiments highlight some of the confounds that are often overlooked in the BLT reconstruction process, and for two of these confounds, simple corrections are proposed.

1.

Introduction

In vivo planar optical bioluminescence imaging (BLI) of small animals provides a high-sensitivity, low-background, noninvasive means of monitoring gene and protein expression and other cellular events at low cost.1, 2, 3, 4, 5, 6 However, the information that BLI provides is severely limited in terms of its ability to determine either the concentration or the precise location of the bioluminescence source. These limits stem from the manner in which light propagates through biological tissues.7 Unlike the high-energy x-ray and gamma-ray photons used in radiographic and nuclear imaging, photons at the wavelengths typical in BLI (400 to 800 nm) do not predominantly travel in a straight line from their source to the detector. Instead, bioluminescent light is highly scattered and attenuated, processes that obscure and disguise the location and intensity of the true source distribution. In rough terms, the region on the skin surface of the animal from which light is seen to emanate is that closest to the light source, and the magnitude of the light flux at the surface depends heavily on this distance.

Bioluminescence tomography (BLT) has the potential to remove these limitations, providing both quantitative accuracy and information regarding the precise location and 3-D distribution of the bioluminescence sources.8 Recovering this information from the surface flux measurements, however, is difficult, and the results are sensitive to the chosen light propagation model and the assigned tissue parameters.9 Although it is known that the organs and tissues within an animal vary considerably in their light attenuating and scattering properties, BLT reconstruction algorithms often assume homogeneous tissue having composite attenuation and scatter parameters, largely because knowledge of the internal anatomy is not available.

Precise knowledge of the shapes and locations of the major organs within an animal therefore has the potential to significantly improve the accuracy of BLT reconstructions. This knowledge could be garnered from, for example, magnetic resonance (MR) and/or x-ray computed tomography (CT) scans that have been spatially registered to the bioluminescence images. These anatomical datasets could be segmented according to tissue type, and each tissue could then be assigned a different set of light propagation properties.

To our knowledge, there have been just two previously published attempts to apply information regarding organ shapes and locations to assist in modeling the propagation of light through the tissues of a live mouse. The first of these estimated the shapes and positions of the major organs of the mouse using a generic segmented digital mouse atlas. This model was rotated, shifted, scaled, and warped so that its exterior surface contours matched those of a CT taken of the animal after it had been frozen with liquid nitrogen in the pose it held within the BLI imager.10 The use of a generic mouse atlas to estimate the mouse anatomy does not allow for abnormal anatomy (e.g., tumors) within the mouse. This particular deficiency was addressed in a study by another group, which applied a similar methodology, this time to an MR image of the same animal from which the BLI images were obtained.11 In spite of attempts to maintain the animal’s pose, it was again necessary to spatially warp the 3-D dataset to get the magnetic resonance imaging (MRI) surface contours to match, in this case, those of a surface determined using photogrammetric techniques within the BLI imager. We believe warping in this manner is problematic because there are no measurements to guide the internal deformations of the organs, and this can lead to significant errors in the light propagation estimate. Furthermore, the accuracy of this type of retrospective fitting procedure is data dependent, with potentially large errors when the contours are smooth.

In the approach we propose here, the animal is maintained in a fixed rigid pose across imaging sessions, and thus we avoid the need for warping transforms. By using specialized hardware that allows precise positioning of the animal within each of the scanners, we are able to use fixed a priori determined spatial transformations to register the image information among all modalities. The registration of the 3-D data (CT and MR) to the 2-D optical images is accomplished using a projective transformation that models the relative position, focal length, and field of view (FOV) of the camera within the BLI system. Corrections are also made for the spatial distortions introduced by the BLI camera lens. This model of the BLI camera system can be used to transfer information in both directions between the anatomical and optical imaging spaces. For example, it can be used to map the BLI image data onto a skin surface determined from the 3-D anatomical data. Likewise, the skin or other surfaces can be mapped to a 2-D image onto which the bioluminescence light signal can be superimposed.

Registration of the image spaces at this stage, prior to BLT reconstruction, allows information derived from the anatomical datasets regarding the location and spatial distribution of various organs to be used within the BLT reconstruction algorithm. BLT reconstructions based on this mapping are effectively preregistered to the anatomical data. This registration provides important anatomical context to assist in the interpretation of the reconstructed luminescent distribution. Perhaps more importantly, given the questionable accuracy of current BLT reconstruction algorithms in vivo, by using sources visible on both the optical and anatomical modalities, the MR (or CT) determined source distributions can be used as a gold standard against which the results of the BLT reconstructions can be assessed and validated.

In this manuscript, we describe procedures to register MR and CT image sets of a mouse to a set of optical bioluminescence planar images, allowing each animal’s own anatomy to define the spatial distribution of the attenuation and scatter parameters. By placing artificial light sources of known intensity within the animal that can be readily detected via CT and by using transgenic animals genetically engineered to have specific organs (visible on MR) express luciferase, we demonstrate a means by which the accuracy of a given BLT reconstruction can be assessed. In addition, by rotating a mouse while maintaining its fixed pose within the BLI imager, we are able to accurately determine the dependence of the measured light intensity on the angle of the surface normal relative to the BLI camera.

This is an extension of the work described in a previous paper12 in which we registered whole-body mouse images acquired with positron emission tomography (PET), single photon emission computed tomography (SPECT), CT, and MR. We believe that adding BLI to this list of small animal registration capabilities will prove instrumental in improving the accuracy of BLT reconstruction algorithms.

2.

Materials and Methods

2.1.

Scanners

Brief descriptions of the three imaging systems used for the studies described in this manuscript are as follows. The IVIS 200 is a bioluminescence and fluorescence imaging system utilizing a 26×26-mm back-thinned, back-illuminated CCD, cryogenically cooled to −105°C. It has an adjustable field of view (FOV) ranging from 4 to 26 cm and includes a light source and filter sets for fluorescence and multispectral bioluminescence imaging. The Bruker Biospec 47/40 (Bruker Biospin, Inc., Karlsruhe, Germany) is a 4.7-Tesla, 40-cm horizontal-bore small-animal imaging spectrometer equipped for multinuclear imaging studies and spectroscopy. The Siemens/CTI microCAT II (Siemens Medical Solutions, Malvern, Pennsylvania) is a small-animal CT scanner with an 8.5-cm axial by 5.0-cm transaxial FOV. It uses a 2048×3096-element CCD array coupled to a high-resolution phosphor screen via a fiber-optic taper, and a tungsten anode with a 6-micron focal spot. Its highest reconstructed resolution is about 15 microns in each dimension.

2.2.

Overview of the Registration Procedure

The overall objective of our procedures is to base the registrations on a calibrated positioning of the animal within each scanner’s field of view. Between and during the imaging sessions, the animal is held in a rigid pose, at a fixed position relative to the animal bed. This is accomplished by wrapping the animal with a thin 0.01-mm polyethylene wrap while it is positioned atop a custom-designed bed with a nose cone for the administration of oxygen and gaseous anesthesia. The wrap applies a light pressure over the entire body of the animal, gently and efficiently restricting its movement. Registration then amounts to establishing a frame of reference relative to the bed for each scanner and calculating the rigid or projective transforms that map between them.

In our studies, we have used several different bed designs, and many more are possible. Here, we will briefly describe one such bed that we feel is particularly apt for use in BLT reconstruction and is the one used in the animal experiments described later. The bed is fashioned from a 6×25-cm rectangular sheet of 1-cm-thick Lucite at the center of which is cut a 4×15-cm rectangular window. Over this window is stretched a single layer of 0.01-mm polyethylene plastic, adhering to the Lucite with the assistance of a restickable glue (3M Glue Stick). This sheet of plastic forms the bed on top of which the animal is laid. The animal is then sandwiched and pressed by a second layer of polyethylene, effectively restraining its movement to less than 0.62 mm (Ref. 12) and allowing equally clear views of the animal from above and below (see Fig. 1), as it is suspended above the window. Squeezing the animal in this manner has had no apparent adverse effect on the animal’s health in the dozens of studies conducted to date. At one end of the Lucite is attached a block of Delrin plastic into which are drilled a set of holes sized and spaced so as to mate with a corresponding set of pegs present on the bed mount adapters designed for each of the imaging modalities.

Fig. 1

Left, posterior, and anterior views of a mouse pressed and held in position by a plastic wrap on a custom bed within the IVIS imager. The posterior and anterior views show the skin surface equally well; both are typically used for BLT.


For the IVIS, the bed mount includes a platform referencing two of the inside edges of the IVIS’s light-tight box. Thus, the bed mount and the attached bed can be consistently placed within the IVIS, thereby allowing precisely reproducible positioning of the animal relative to the camera for any given camera to subject distance. The bed and its mounting system were designed such that the bed can be pivoted about its long axis (inferior to the superior axis of the mouse) in precisely calibrated 15-deg increments, allowing views of the animal from different vantage points.

The microCAT has a motorized bed positioning mechanism with an optically encoded position readout calibrated to a precision of 0.01 mm and a repositioning accuracy of better than 0.1 mm. A custom adapter is used to attach the animal bed to this bed positioning mechanism in a reproducible manner. It can then be removed for placement on the other scanners using specialized bed mounts designed for each. The coordinate system defined by the microCAT’s bed positioning mechanism was used as the reference frame to which both the Bruker and IVIS images are mapped.

Positioning of an animal within the field of view of the Bruker does not easily lend itself to such reproducibility because its field of view is located deep within its bore and thus is remote from any potential spatial reference. Moreover, references within the bore are generally blocked by the gradient and readout coils. Therefore, we established a set of markers within the bed that are visible on both MR and CT. Using landmarks derived from these markers, we can place the MR image set into the microCAT’s frame of reference. Alternatively, retrospective mutual-information-based volume registration methods work well when registering these two structural image datasets to one another. For a detailed description of the markers, the volume and landmark point-based registration procedures, and the effectiveness of the wrapping system in maintaining the rigidity of the animal, see Beattie et al.12
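
As an illustration of the landmark point-based step, corresponding marker coordinates can be aligned with a standard least-squares rigid (Kabsch-style) fit. The Python sketch below is a minimal example of that idea only, not the specific procedure of Ref. 12; the marker coordinates shown are hypothetical.

    import numpy as np

    def rigid_landmark_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping landmark points src (Nx3),
        e.g., marker centers located in the MR image, onto dst (Nx3), the same
        markers located in the microCAT (CT) reference frame."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)                           # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Hypothetical marker coordinates (mm) for four bed markers seen on MR and CT.
    mr_pts = np.array([[10.2, 5.1, 30.0], [12.4, 5.0, 60.1],
                       [30.1, 4.9, 30.2], [32.0, 5.2, 59.8]])
    ct_pts = np.array([[11.0, 6.0, 31.1], [13.1, 5.8, 61.0],
                       [30.8, 5.7, 31.0], [32.9, 6.1, 60.6]])
    R, t = rigid_landmark_fit(mr_pts, ct_pts)
    fre = np.linalg.norm(mr_pts @ R.T + t - ct_pts, axis=1)           # fiducial registration errors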

2.3.

Registration of a 2-D Image to a 3-D Image Set

The conventional notion of what it means to register two three-dimensional (3-D) image sets is to rotate and shift a target image set so that its resampled voxels are in locations equivalent to those of the corresponding voxels within the reference image set. For the purposes of this manuscript, we want to co-register a 3-D image set with an image that has only two dimensions. Furthermore, this two-dimensional (2-D) image is generated from the summation of photons traveling along vectors entering the camera, and thus its pixels do not correspond to points in 3-D space (as opposed to the pixels of a 2-D slice through 3-D space). In this case, the conventional notion of 3-D image registration is ill applied, so instead we will make use of a paradigm more apt for the registration of 2-D photographic images.

In this paradigm, two 2-D images are co-registered when through a series of transformations applied to the target image, the position, orientation, focal length, and distortions of the reference image’s camera system are mimicked. In this way, the vectors associated with each pixel in the two images are made to overlay. Thus, for our purposes, we will create a virtual camera capable of taking 2-D images of the 3-D image set information content. This virtual camera is simulated to have the same focal length and distortions and to be in the same position and orientation relative to the imaged object as the real 2-D camera that acquired the reference (in this case, bioluminescence) images. This virtual camera system can be made to visualize the 3-D image set in a variety of ways—for example, it can slice through the 3-D image set at an arbitrary depth and angle, or it can view maximum intensity projection information, or it can view the reflectance of virtual light sources off surfaces that have been segmented from the 3-D data.

2.4.

BLI Camera Model

The camera model we used was that of a basic pinhole camera as described by Hartley and Zisserman.13 In this model, points in 3-D space represented in homogeneous coordinates (X, Y, Z, T)^T are mapped onto the 2-D image plane by the 3×4 projective transformation matrix, which is decomposed and parameterized as follows:

$$
\begin{bmatrix} f & 0 & p_u \\ 0 & f & p_v \\ 0 & 0 & 1 \end{bmatrix}
\bigl[\,R_{xyz}\,\bigr]
\begin{bmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 1 & -c_z \end{bmatrix}
\begin{bmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\bigl[\,Q_{xyz}\,\bigr]
\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.
$$
Here, R_xyz and Q_xyz are rotation matrices having three parameters each. The point (p_u, p_v) is the center of the acquired 2-D image, (c_x, c_y, c_z) is the camera center, and β describes the rotation of the bed about its axis. The vector (t_x, 0, t_z) defines the translation that, when combined with Q_xyz, moves the bed from its position in the CT or MR coordinate system onto the axis of the bed mount in the BLI coordinate system. Altogether, the system requires 15 parameters (f; p_u and p_v; 3 for R; c_x, c_y, and c_z; β; 3 for Q; and t_x and t_z), three of which were fixed (p_u, p_v, and the rotation angle β), leaving 12 parameters to be fit during the calibration procedure.
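
For concreteness, a minimal numpy sketch of this composition follows. The Euler-angle convention, the sub-rotation parameterization, and all numeric values are illustrative assumptions; only the overall structure, a 3×3 intrinsic matrix multiplied by a chain of rotations and translations to give a 3×4 matrix, mirrors the equation above.

    import numpy as np

    def euler_rotation(rx, ry, rz):
        """3x3 rotation from three Euler angles (radians); the Rz*Ry*Rx convention
        used here is an illustrative choice, not taken from the paper."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def projection_matrix(f, pu, pv, r_angles, c, beta, q_angles, tx, tz):
        """Compose the 3x4 projective transform of the equation above."""
        K = np.array([[f, 0, pu], [0, f, pv], [0, 0, 1.0]])         # intrinsics (focal length, image center)
        R = euler_rotation(*r_angles)                               # camera orientation, 3 parameters
        C = np.hstack([np.eye(3), -np.reshape(c, (3, 1))])          # camera center c = (cx, cy, cz)
        B = np.eye(4); B[:3, :3] = euler_rotation(0.0, beta, 0.0)   # bed rotation beta about its long axis
        Q = np.eye(4); Q[:3, :3] = euler_rotation(*q_angles)        # bed-axis orientation, 3 parameters
        T = np.eye(4); T[:3, 3] = [tx, 0.0, tz]                     # translation (tx, 0, tz)
        return K @ R @ C @ B @ Q @ T

    # Project a CT/MR point (homogeneous coordinates) into the 2-D BLI image plane.
    P = projection_matrix(f=2500.0, pu=512.0, pv=512.0, r_angles=(0.01, 0.0, 0.02),
                          c=(30.0, 200.0, 40.0), beta=np.deg2rad(15.0),
                          q_angles=(0.0, 0.0, 0.01), tx=-25.0, tz=-40.0)
    u, v, w = P @ np.array([12.0, 8.0, 55.0, 1.0])
    pixel = (u / w, v / w)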

Distortion within the IVIS camera images was modeled using the radial distortion model described by Hartley and Zisserman.13 In this model, the distortion is assumed to be solely a function of the radial distance from some central point and is estimated by a Taylor expansion, L(r) = 1 + k_1 r + k_2 r^2 + k_3 r^3 + ⋯, with r^2 = (x − x_c)^2 + (y − y_c)^2, where (x_c, y_c) is the central point. In our implementation, we have assumed that this central point corresponds to the principal point and image center (p_u, p_v). An image of a grid was used to calculate three terms of the Taylor expansion of the radial distortion function by minimizing the distance between the gridline intersections and corresponding virtual lines formed by the end points of each gridline on the image periphery (as suggested by Hartley and Zisserman).
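
A minimal Python sketch of how such a radial model can be applied as a correction is given below; the coefficient values are placeholders, and treating L(r) as the measured-to-corrected mapping (rather than the forward distortion) is an assumption of this sketch.

    import numpy as np

    def undistort_points(pts, center, k):
        """Radial correction per L(r) = 1 + k1*r + k2*r^2 + k3*r^3, with r the distance
        of each measured pixel from the central point (taken here to be the principal
        point / image center). pts is an (N, 2) array of measured pixel locations."""
        d = pts - center
        r = np.linalg.norm(d, axis=1)
        L = 1.0 + k[0] * r + k[1] * r**2 + k[2] * r**3
        return center + d * L[:, None]

    # Placeholder coefficients; k1..k3 would come from the grid-image calibration.
    corrected = undistort_points(np.array([[600.0, 410.0]]),
                                 center=np.array([512.0, 512.0]),
                                 k=(1.2e-4, -3.0e-7, 1.0e-10))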

2.5.

BLI Camera Calibration

In order to cross-calibrate the IVIS and microCAT coordinate systems, a phantom for which corresponding points can be identified on both imaging systems was devised. Specifically, this phantom was made of a 3×2×10-cm plastic block into which a grid of 1-mm-wide by 1-mm-deep grooves was cut, spaced 5 mm apart. The grooves themselves were painted white, while the tile surfaces of the block were painted black. Grids were cut into all six surfaces of the block, although only three surfaces were used in the calibration procedures described here.

The corners of the tiles are readily identified on both the reconstructed CT image sets and in the reflectance images from the IVIS. Within the microCAT image sets, the 3-D coordinates of 28 tile corners were manually identified. These points were arranged in grids of 3×4 points covering the top and the two adjacent long sides of the phantom. Note that the grids all extend to the edges of the phantom; therefore, the top shares its leftmost and rightmost columns of points (4 points each) with each of the respective sides (thus, 3 × 12 − 4 − 4 = 28 points altogether).

These same tile corners were identified (again manually) in each of the 13 IVIS distortion-corrected reflectance images in which the corners could be seen. Thus, at −90 deg, 0 deg, and +90 deg, a single face and therefore 12 points were in view, and at each of the other 10 angles, two faces and therefore 20 points were in view. All combined, 236 points within the 13 IVIS images were located. These same locations were modeled based on the 28 microCAT points and the known bed rotation angles. The 12 variable model parameters were adjusted to achieve a least-squared error between the measured and modeled point locations using a constrained nonlinear fitting procedure (lsqnonlin in MATLAB, The MathWorks, Inc., Natick, Massachusetts).
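
The fit itself is a standard constrained nonlinear least-squares problem. A minimal Python analogue is sketched below, substituting scipy.optimize.least_squares for MATLAB's lsqnonlin and reusing the projection_matrix sketch from Sec. 2.4; the principal-point value and the data-handling names (ct_corners, observations, initial_guess, bounds) are placeholders.

    import numpy as np
    from scipy.optimize import least_squares

    PU, PV = 512.0, 512.0   # fixed principal point / image center, in pixels (placeholder value)

    def calibration_residuals(params, ct_corners, observations):
        """Stacked (u, v) reprojection residuals for the 12 free camera-model parameters.
        ct_corners: (28, 3) tile-corner coordinates in the microCAT frame.
        observations: list of (beta, corner_ids, uv_measured) tuples, one per IVIS image,
        giving the bed angle, the indices of the corners visible at that angle, and their
        distortion-corrected pixel locations."""
        f, rx, ry, rz, cx, cy, cz, qx, qy, qz, tx, tz = params
        residuals = []
        for beta, corner_ids, uv_measured in observations:
            P = projection_matrix(f, PU, PV, (rx, ry, rz), (cx, cy, cz),
                                  beta, (qx, qy, qz), tx, tz)    # sketch from Sec. 2.4
            X = ct_corners[corner_ids]
            Xh = np.hstack([X, np.ones((len(X), 1))])
            uvw = (P @ Xh.T).T
            uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective division
            residuals.append((uv - uv_measured).ravel())
        return np.concatenate(residuals)

    # Constrained least-squares fit over the 236 point pairs, analogous to lsqnonlin:
    # fit = least_squares(calibration_residuals, x0=initial_guess,
    #                     args=(ct_corners, observations), bounds=(lower, upper))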

2.6.

Registration Accuracy

The accuracy of the registration was estimated by performing a number of repeat studies involving a mouselike phantom. The phantom was mouselike in terms of its size, weight, and rough shape so that the forces applied to the enclosing plastic wrap and bed support structure would be similar to those encountered with an actual animal. Onto the surface of this mouse phantom were glued four gaseous tritium light source (GTLS) beads (mb-microtec ag, Niederwangen, Switzerland), two on each of what were effectively the anterior and posterior surfaces. These small (2.3×0.9-mm) cylindrical glass tubes emit a small, virtually constant (tritium-powered fluorescence) level of light and are readily distinguishable on both the CT and bioluminescence images.

Between each of the repeat studies, the bed was removed from its mount and the mounting platform removed from the IVIS; thus, the measured accuracy takes into consideration the reproducibility of these bed positioning procedures. (Note that errors due to the movement of the wrapped mouse were considered in our previous paper.12) The bed repositioning was performed three times, each time with images taken of the bed rotated at angles covering a full 360 deg in 30-deg increments. Following each bioluminescence imaging session, the GTLS phantom was imaged on the CT, each of which also involved the removal of the bed and its mount from the CT.

On each of the three CT datasets, the centers of the four GTLS beads were manually identified (i.e., with a computer cursor). To these, we applied the perspective and distortion transforms calculated in the calibration procedures, generating a set of 2-D locations within the bioluminescence image space. The corresponding locations as seen on the bioluminescence images were also identified manually, and the absolute distance between corresponding transformed CT and bioluminescence point pairs was determined. This was done for the nine combinations of the bioluminescence and CT repositioning studies, each involving 12 pairings (one for each angle) of each of the two GTLS beads visible at a given angle.

The mean, standard deviation, and maximum of the errors were calculated for each bead location at each angle so that unusually large errors for a given bead (i.e., location within the image) or for a given angle could be identified. In the absence of any identified outliers, the results were summarized by a single mean, standard deviation, and overall maximum error.

2.7.

Animals and Imaging Procedures

Numerous animal studies have been undertaken utilizing these registration procedures. Here, we will describe three preparatory studies (referred to as experiments 1, 2, and 3) whose results have general application and implications across all BLT reconstruction algorithms. In the first of these, experiment 1, we measure the dependence of the measured surface flux on the angle of the skin surface relative to the camera. In experiment 2, we provide direct evidence of the impact of inhomogeneous light propagation within the tissues of the mouse, and in experiment 3, we demonstrate a method of correcting for the time-dependent changes in total light flux seen in typical luciferase-based bioluminescence imaging studies due to substrate transport and consumption. Accounting for this time-course is important in multispectral and multiview BLT studies,14, 15, 16, 17 which involve multiple sequential images. We reserve the results of our investigations of specific BLT reconstruction approaches for a future manuscript.

In experiments 1 and 2, the light source was a GTLS bead placed within a small catheter that was in turn placed within the rectum of a nude mouse (nu/nu). Prior to this placement, anesthesia was induced with 3% isoflurane, and the eyes of the mouse were dabbed with a sterile ocular lubricant (Pharmaderm Paralube Vet Ointment) to prevent drying. The mouse was then placed on the bed and secured with the plastic wrap, following which it was imaged within the IVIS imager and on the microCAT CT scanner. The details of each of these imaging sessions are provided later. Throughout and between all imaging studies, the mouse was continuously maintained under anesthesia using 1% isoflurane, with only momentary disconnects to allow transport between imaging systems.

In experiment 3, we used a transgenic mouse that was genetically engineered such that both of its kidneys uniformly expressed click-beetle red luciferase (see Fig. 2 ). We believe animals of this type will prove to be an effective means of testing the accuracy of BLT reconstruction algorithms for distributed (i.e., non-point-like) sources in vivo because the source distribution is more predictable across animals (compared to implanted, luciferase-expressing tumors, for example) and because the organs expressing the luciferase are readily seen on MR. Our use of this animal for the purposes of this manuscript, however, is to demonstrate a means to correct for the time-course of bioluminescence light output following the luciferin injection.

Fig. 2

Photograph (in gray scale) of a transgenic mouse onto which the bioluminescence image (in hot-iron color scale) has been superimposed. The abdomen of the animal has been opened surgically and some organs removed to provide a clear view of its kidneys. These images, taken immediately post mortem and following a luciferin injection, demonstrate the strong and equal CBR luciferase expression in the kidneys of this animal.


Unlike the nude mice used in the first two experiments, the transgenic mouse has dark brown fur that can interfere with the measurement of the bioluminescence signal. To avoid this interference, the abdomen of the animal was depilated prior to imaging. This animal received a bioluminescence image set followed by scans on the microCAT CT and Bruker MR. Details of the imaging protocols used on this animal are described later.

2.7.1.

IVIS imaging protocol

Imaging on the IVIS varied somewhat with the experiment. For the angular-dependence measurement, both reflectance and bioluminescence mode images were acquired for each angular position of the bed as it was rotated in 15-deg increments between ±90 deg. All bioluminescence images were acquired with the “open” filter setting (i.e., with no filter present).

In experiment 2, demonstrating the effect of tissue heterogeneity, the animal was imaged from above in a prone position. Several attempts were made with slight adjustments to the position of the GTLS bead until the bioluminescence image showed a bimodal surface flux suggesting preferential light pathways to either side of the spine. Upon achieving this position, the animal was imaged using the full set of 20-nm bandpass filters available on the IVIS 200, covering the range from 560 to 660 nm.

For experiment 3, image sets of the mouse were taken from both the anterior and posterior views. Each image set consisted of a reflectance image followed by the full set of 20-nm bandpass filter images. Bracketing and interposed between each of these, a short (10-s) “open” filter setting image was acquired. This entire imaging sequence commenced two minutes following an intraperitoneal injection of luciferin (150 mg/kg in 100 μL).

2.7.2.

MicroCAT imaging protocol

Three hundred and sixty transmission images were acquired at 1-deg increments encircling the mouse. These images were reconstructed with a cone-beam 3-D filtered backprojection algorithm (COBRA software from the Exxim Computing Corp., Pleasanton, California) into a 192×192×384 matrix over a 4.38×4.38×8.76-cm FOV (i.e., 0.228×0.228×0.228-mm voxels).

2.7.3.

Bruker MR imaging protocol

Images were acquired using a 7-cm Bruker birdcage coil tuned to 200.1 MHz and the 10-mT/m gradient coil system. Three-dimensional images were obtained with a fast spin-echo sequence with a repetition interval (TR) of 1.2 s, an effective echo time (TE) of 40 ms, an image matrix of 128×96×256, 8 repetitions per phase-encoding step, and a total imaging time of 61 min. The final voxel dimensions for these images are 0.341×0.333×0.333 mm.

3.

Results

3.1.

Registration Accuracy

The results of the test of the BLI to CT registration accuracy showed no bias in the error with respect to either the angle of rotation or the bead location (see Table 1). The mean error across all beads was 0.36 mm, with a standard deviation of 0.23 mm. The maximum error encountered over all measurements was 1.08 mm.

Table 1

The mean error in the BLI to CT registration for individual beads as the bed is rotated.

Angle (deg)               −165   −135   −105   −75    −45    −15    15     45     75     105    135    165
Bead 1a or 1b avg (mm)    0.56   0.17   0.35   0.40   0.31   0.52   0.25   0.29   0.25   0.24   0.12   0.24
Bead 2a or 2b avg (mm)    0.44   0.33   0.53   0.44   0.55   0.31   0.39   0.40   0.36   0.38   0.48   0.42

3.2.

Experiment 1—Angular Dependence

By the time a given photon reaches the inner surface of an animal’s skin, it has usually undergone numerous scattering events, such that virtually all information regarding the direction of its source has been lost. For this reason, photons impinge on the inner surface nearly isotropically. However, because of the change in refractive index when moving from skin to air, the exiting light is not isotropic, and thus the apparent intensity of the light emanating from a given surface point is dependent on the angle between the skin surface normal and the camera line of sight. For BLT, this dependence needs to be accounted for or corrected when determining the surface flux at each surface point.

Figure 3 shows the measurements of relative light intensity as a function of exit angle made in experiment 1, fitted with a curve modeled on the Snell and Fresnel equations and assuming a refractive index of 1.4 (the value measured in mammalian tissues by Bolin et al.18). The derivation of this model is described in the Appendix. In measuring the angular dependence, we chose a set of surface points based on a threshold applied to the bioluminescence image taken at bed angle zero. The surface normal for these points was determined from the CT image, and the angle relative to the camera was garnered from the rotations to be applied to the CT data in registering the bioluminescence and CT datasets. Given the somewhat flattened body contour of the mouse [see Fig. 4a], the selected surface point normal vectors were all within 5 deg of one another and therefore considered to correspond to a single mean angle. (Note that this mean angle was not zero for the horizontal bed position, since the surface of the mouse was at a slight angle relative to the bed.) This same set of surface points was followed as the bed was rotated to different positions between ±75 deg in 15-deg increments. The intensities of the bioluminescence light at these surface points were averaged at each angle and associated with the bed angle plus the mouse-to-bed angular offset. The intensities were then normalized to have unit maximum amplitude.
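
The angle assigned to each surface point follows directly from the registered data; a minimal sketch is given below, assuming (as an illustrative convention only) that the camera line of sight is the +z axis of its own frame and writing R_total for the net CT-to-BLI rotation (camera orientation, bed-axis orientation, and bed angle combined).

    import numpy as np

    def normal_to_camera_angle(normal_ct, R_total, view_dir=(0.0, 0.0, 1.0)):
        """Angle (deg) between a CT-derived skin-surface normal and the camera line of
        sight, obtained by rotating the normal with the net CT-to-BLI rotation R_total."""
        n = R_total @ (np.asarray(normal_ct) / np.linalg.norm(normal_ct))
        v = np.asarray(view_dir) / np.linalg.norm(view_dir)
        return np.degrees(np.arccos(np.clip(np.dot(n, v), -1.0, 1.0)))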

Fig. 3

Measurements (×) of the relative light intensity as the angle between the surface normal and the camera line of sight varies between (approximately) ±75 deg. The solid line shows the light fall-off predicted by the proposed model, assuming a refractive index (r.i.) for biological tissues equal to 1.4 and a mouse imaged in air (r.i. = 1.0).


Fig. 4

(a) Top two images showing transaxial and sagittal images slicing through the GTLS bead (highlighted by the red arrow). The orientation of the posterior axis is shown by the yellow arrow; thus the posterior surface of the animal is above and to the right in the transaxial and sagittal images, respectively. (b) The bottom three images show the light emanating from the bead that reaches the posterior surface of the mouse (in color) superimposed on a reflectance light photograph and on images of the skin and bone surfaces rendered from the registered CT data.


3.3.

Experiment 2—Tissue Heterogeneity

Figures 4a and 4b show a set of images taken in experiment 2. Based on the skin surface contour as seen on the CT [in Fig. 4a], one would not expect the bimodal surface flux seen in the bioluminescence images [Fig. 4b] if the underlying tissue was homogeneous in its light propagation properties. The CT image shows that the GTLS bead is positioned directly beneath the spine in this animal, suggesting that increased attenuation through the bone may explain the bimodal distribution.

3.4.

Experiment 3—Time-Course Correction

As described in Sec. 2, our in vivo multispectral bioluminescence imaging protocol includes “open” (i.e., unfiltered) acquisitions bracketing each of the images acquired with one of the 20-nm bandpass filters. The intent of these repeat measures is to monitor the time-course of luciferase enzyme-substrate activity (and perhaps other physiological changes) leading to changes in the measured surface flux. Note that here we are assuming that there are no changes in the spatial distribution of the enzyme or substrate nor changes affecting the spectrum of the light emissions (e.g., temperature-induced red-shift19).

The procedure to correct for the enzyme-activity time-course is as follows. To each of the “open” images, the same region of interest enveloping the bulk of the light emanating from the mouse was applied, and the time of acquisition and the total light flux (in photons per second per steradian) were recorded. The unfiltered light flux at the time of each filtered image acquisition was estimated by linear interpolation of the bracketing unfiltered light flux measurements. The time-course correction factor for each filtered image taken from a given viewpoint (anterior or posterior) is simply the ratio of the interpolated flux relative to the flux of the first open image from that viewpoint. The change in the time-course between views was determined by extrapolating the correction factors from the first view to the time of the first open acquisition of the second view. All of the second-viewpoint correction factors are then scaled by this extrapolated factor. A plot of the resulting correction factors is shown in Fig. 5.
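
A minimal Python sketch of the interpolation step follows. The flux values are hypothetical, the second-view extrapolation described above is omitted for brevity, and the final comment on how the factors are applied is one reasonable reading rather than a prescription from the paper.

    import numpy as np

    def timecourse_factors(open_times, open_fluxes, filtered_times):
        """Correction factors for the filtered acquisitions from one viewpoint.
        open_times, open_fluxes: acquisition times and total ROI fluxes of the
        bracketing 'open'-filter images; filtered_times: acquisition times of the
        band-pass filtered images. The unfiltered flux at each filtered time is
        linearly interpolated and expressed relative to the first open image."""
        interpolated = np.interp(filtered_times, open_times, open_fluxes)
        return interpolated / open_fluxes[0]

    # Hypothetical fluxes (photons/s/sr) decaying after the luciferin injection;
    # times are seconds after the first open image of this viewpoint.
    open_t = np.array([0.0, 130.0, 260.0, 390.0, 520.0, 650.0])
    open_f = np.array([1.00e9, 0.96e9, 0.91e9, 0.87e9, 0.83e9, 0.80e9])
    factors = timecourse_factors(open_t, open_f, np.array([65.0, 195.0, 325.0, 455.0, 585.0]))
    # Dividing each filtered image by its factor normalizes it to the enzyme-activity
    # level at the time of the first open acquisition.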

Fig. 5

Plot of the factors to be used in compensating for the changes in the luciferase enzymatic activity as a function of time. The diamond-shaped markers show the corrections to be applied to the posterior view data, and the square-shaped markers describe the correction for the anterior view.


4.

Discussion

The majority of bioluminescence tomography reconstruction algorithms, when tested in vivo, lack a gold-standard reference describing the true light-source distribution. This lack of in vivo testing and validation has hampered both the continued refinement and the acceptance of BLT for routine use. The registration procedures we propose enable the development of a gold-standard reference to which the reconstructed luminescence source distribution can be compared. This reference comes in the form of a signal measurable on CT or MR that is known or is reasonably assumed to correlate with the true source distribution. In this manuscript, we have provided two examples of such a gold-standard reference, GTLS beads and the organs of a transgenic mouse.

Although it is as yet an open question, further improvements in the accuracy and robustness of BLT reconstructions may require the incorporation of additional a priori information. In particular, the spatial distributions of tissues having differing light propagation properties may have significant impact. CT/MR to BLI registration allows a mechanism via which this type of information may be exploited in BLT. By using the animal’s own MR or CT, even abnormal anatomies may be handled.

Last, we demonstrate corrections for two confounds that play a role in many BLT acquisitions: the changes in bioluminescence flux as a function of time and as a function of the angle between the surface normal and the camera line of sight. In correcting for this latter confound, we propose a model that relates the distribution of light propagation vectors for photons impinging on the inner surface of the skin to those exiting the skin surface. This model was tested using careful measurements made possible by the described registration procedures.

It is our hope and expectation that taken together these pieces form a platform upon which bioluminescence tomography reconstruction algorithms may be improved and refined and ultimately validated, paving the way for routine preclinical use.

Appendices

Appendix

The model we propose for describing the falloff in light intensity as the angle between the camera line of sight and the surface normal increases is derived from Snell’s law and the Fresnel equations and assumes that photons just prior to exiting the animal are isotropic. Thus, the incident angle θ1 (see Fig. 6) is uniformly distributed over ±90 deg. When these photons are moving from the animal (with refractive index n1) into air (with refractive index n2 < n1), if they are incident at an angle θ1 greater than θcrit = sin^−1(n2/n1), then they are internally reflected, whereas if they are incident at an angle less than θcrit, their exit angles θ2 are distributed over the range ±90 deg. This distribution, however, is not uniform. Instead, for each arbitrarily small solid angle dθ1, there is a corresponding (larger) solid angle dθ2 into which the photons are distributed. The ratio of these solid angles, dθ1/dθ2, determines the reduction in light flux and can easily be calculated by solving Snell’s law of refraction [Eq. 1] for θ1 and taking its derivative with respect to θ2 [result shown in Eq. 2]:

Eq. 1

$$ n_1 \sin(\theta_1) = n_2 \sin(\theta_2), $$

Eq. 2

$$ \frac{d\theta_1}{d\theta_2} = \frac{n_2 \cos(\theta_2)}{n_1 \left[ 1 - (n_2^2/n_1^2) \sin^2(\theta_2) \right]^{1/2}}. $$

Fig. 6

Diagram defining parameters θ1 , θ2 , n1 , and n2 .


This relationship is modified slightly by the partial reflections, described by the Fresnel equations, that occur for incident angles less than θcrit. The Fresnel equation describing the fraction of light transmitted as a function of the incident (or exit) angle is shown in Eq. 3. The complete description of the angular distribution of the exiting photons for isotropic incident photons is then the product of Eqs. 2 and 3, T · dθ1/dθ2:

Eq. 3

$$ T = 1 - \frac{\left[ \sin(\theta_2 - \theta_1)/\sin(\theta_2 + \theta_1) \right]^2 + \left[ \tan(\theta_2 - \theta_1)/\tan(\theta_2 + \theta_1) \right]^2}{2}. $$
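
The combined model is straightforward to evaluate numerically. The Python sketch below computes the normalized product T · dθ1/dθ2 of Eqs. 1, 2, and 3 under the stated assumptions (isotropic internal photons, unpolarized light, normalization to unit maximum as applied to the measurements in Figs. 3 and 7).

    import numpy as np

    def relative_intensity(theta2_deg, n1=1.4, n2=1.0):
        """Normalized exit intensity versus exit angle theta2 for isotropically
        distributed internal photons: T * dtheta1/dtheta2 from Eqs. (2) and (3),
        with theta1 given by Snell's law [Eq. (1)]. n1 is the tissue (or Delrin)
        refractive index, n2 the refractive index of air."""
        t2 = np.radians(np.asarray(theta2_deg, dtype=float))
        t1 = np.arcsin((n2 / n1) * np.sin(t2))                                      # Eq. (1)
        dt1_dt2 = (n2 * np.cos(t2)) / (n1 * np.sqrt(1.0 - (n2 / n1)**2 * np.sin(t2)**2))  # Eq. (2)
        with np.errstate(divide="ignore", invalid="ignore"):
            T = 1.0 - 0.5 * ((np.sin(t2 - t1) / np.sin(t2 + t1))**2
                             + (np.tan(t2 - t1) / np.tan(t2 + t1))**2)              # Eq. (3)
        T = np.where(t2 == 0.0, 1.0 - ((n1 - n2) / (n1 + n2))**2, T)                # normal-incidence limit
        peak = (1.0 - ((n1 - n2) / (n1 + n2))**2) * (n2 / n1)                       # value at theta2 = 0
        return T * dt1_dt2 / peak                                                   # unit maximum

    angles = np.linspace(-75.0, 75.0, 151)
    tissue_curve = relative_intensity(angles)            # solid curve of Fig. 3 (n1 = 1.4)
    delrin_curve = relative_intensity(angles, n1=1.48)   # solid curve of Fig. 7 (Delrin)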

We tested this model on a phantom consisting of a large Delrin plastic block, 10×10×4 cm. In the center of one of the 10×10-cm sides was drilled a 2-cm-deep cylindrical hole with a diameter of 0.5 cm. This hole was in turn filled by a snugly fitting cylindrical peg, also made of Delrin. Into the tip of the peg, a small hole was excavated, just large enough to accommodate a GTLS bead. The GTLS bead, so placed, was positioned in the center of the large Delrin block. The block, in turn, was placed on the bed mount within the IVIS imager, and luminescent images were acquired with the block rotated to a series of angles between ±75 deg (at 15-deg increments). Delrin is known to have a refractive index of about 1.48 (Ref. 20), and this value worked well when fitting our model to the averaged surface flux (see Fig. 7).

Fig. 7

A plot similar to that of Fig. 3, except here describing the light emanating from a point source placed 2 cm deep within a Delrin plastic block. The solid line in this case is the modeled function using the known refractive index for Delrin.


References

1. C. H. Contag and M. H. Bachmann, “Advances in in vivo bioluminescence imaging of gene expression,” Annu. Rev. Biomed. Eng. 4, 235–260 (2002). https://doi.org/10.1146/annurev.bioeng.4.111901.093336
2. C. P. Klerk, R. M. Overmeer, T. M. Niers, H. Versteeg, D. J. Richel, T. Buckle, C. J. Van Noorden, and O. van Tellingen, “Validity of bioluminescence measurements for noninvasive in vivo imaging of tumor load in small animals,” BioTechniques 43(S1), S7–S13, S30 (2007). https://doi.org/10.2144/000112515
3. K. E. Luker and G. D. Luker, “Applications of bioluminescence imaging to antiviral research and therapy: multiple luciferase enzymes and quantitation,” Antiviral Res. 78, 179–187 (2008).
4. A. Sato, B. Klaunberg, and R. Tolwani, “In vivo bioluminescence imaging,” Comp. Med. 54, 631–634 (2004).
5. A. Soling and N. G. Rainov, “Bioluminescence imaging in vivo—application to cancer research,” Expert Opin. Biol. Ther. 3, 1163–1172 (2003).
6. R. G. Blasberg, “In vivo molecular-genetic imaging: multi-modality nuclear and optical combinations,” Nucl. Med. Biol. 30, 879–888 (2003). https://doi.org/10.1016/S0969-8051(03)00115-X
7. B. W. Rice, M. D. Cable, and M. B. Nelson, “In vivo imaging of light-emitting probes,” J. Biomed. Opt. 6, 432–440 (2001). https://doi.org/10.1117/1.1413210
8. G. Wang, W. X. Cong, H. O. Shen, X. Qian, M. Henry, and Y. Wang, “Overview of bioluminescence tomography—a new molecular imaging modality,” Front. Biosci. 13, 1281–1293 (2008). https://doi.org/10.2741/2761
9. A. D. Klose and A. H. Hielscher, “Optical tomography with the equation of radiative transfer,” Int. J. Numer. Methods Heat Fluid Flow 18, 443–464 (2008). https://doi.org/10.1108/09615530810853673
10. G. Wang, W. X. Cong, K. Durairaj, X. Qian, H. Shen, P. Sinn, E. Hoffman, G. McLennan, and M. Henry, “In vivo mouse studies with bioluminescence tomography,” Opt. Express 14, 7801–7809 (2006). https://doi.org/10.1364/OE.14.007801
11. M. Allard, D. Cote, L. Davidson, J. Dazai, and R. M. Henkelman, “Combined magnetic resonance and bioluminescence imaging of live mice,” J. Biomed. Opt. 12, 034018 (2007). https://doi.org/10.1117/1.2745298
12. B. J. Beattie, G. J. Forster, R. Govantes, C. H. Le, V. A. Longo, P. B. Zanzonico, L. Bidaut, R. G. Blasberg, and J. A. Koutcher, “Multimodality registration without a dedicated multimodality scanner,” Mol. Imaging 6, 108–120 (2007).
13. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge, UK (2004).
14. H. Dehghani, S. C. Davis, and B. W. Pogue, “Spectrally resolved bioluminescence tomography using the reciprocity approach,” Med. Phys. 35, 4863–4871 (2008). https://doi.org/10.1118/1.2982138
15. H. Dehghani, S. C. Davis, S. D. Jiang, B. W. Pogue, K. D. Paulsen, and M. S. Patterson, “Spectrally resolved bioluminescence optical tomography,” Opt. Lett. 31, 365–367 (2006). https://doi.org/10.1364/OL.31.000365
16. A. J. Chaudhari, F. Darvas, J. R. Bading, R. A. Moats, P. S. Conti, D. J. Smith, S. R. Cherry, and R. M. Leahy, “Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging,” Phys. Med. Biol. 50, 5421–5441 (2005). https://doi.org/10.1088/0031-9155/50/23/001
17. W. M. Han and G. Wang, “Theoretical and numerical analysis on multispectral bioluminescence tomography,” IMA J. Appl. Math. 72, 67–85 (2007). https://doi.org/10.1093/imamat/hxl031
18. F. P. Bolin, L. E. Preuss, R. C. Taylor, and R. J. Ference, “Refractive index of some mammalian tissues using a fiber optic cladding method,” Appl. Opt. 28, 2297–2303 (1989). https://doi.org/10.1364/AO.28.002297
19. H. Zhao, “Emission spectra of bioluminescent reporters and interaction with mammalian tissue determine the sensitivity of detection in vivo,” J. Biomed. Opt. 10, 041210 (2005).
20. L. Nardo, A. Brega, M. Bondani, and A. Andreoni, “Non-tissue-like features in the time-of-flight distributions of plastic tissue phantoms,” Appl. Opt. 47, 2477–2485 (2008). https://doi.org/10.1364/AO.47.002477
©(2009) Society of Photo-Optical Instrumentation Engineers (SPIE)
Bradley J. Beattie, Alexander D. Klose, Carl H. Le, Valerie A. Longo, Konstantine Dobrenkov, Jelena Vider, Jason A. Koutcher M.D., and Ronald G. Blasberg "Registration of planar bioluminescence to magnetic resonance and x-ray computed tomography images as a platform for the development of bioluminescence tomography reconstruction algorithms," Journal of Biomedical Optics 14(2), 024045 (1 March 2009). https://doi.org/10.1117/1.3120495
Published: 1 March 2009
KEYWORDS
Bioluminescence

Cameras

3D image processing

Computed tomography

Image registration

Imaging systems

Magnetic resonance imaging
