Motion compensated weighted filtered backprojection considering rebinning process
Nora Steinich, Johan Sunnegårdh, Harald Schöndube
Proceedings Volume 12304, 7th International Conference on Image Formation in X-Ray Computed Tomography; 123042U (17 October 2022); https://doi.org/10.1117/12.2646567
Event: Seventh International Conference on Image Formation in X-Ray Computed Tomography (ICIFXCT 2022), 2022, Baltimore, United States
Abstract
Motion during CT scans causes artifacts that can severely degrade image quality. We propose a motion compensation algorithm that can be combined with reconstruction algorithms containing a rebinning step, such as the weighted filtered backprojection (WFBP) algorithm. We assume that the motion present during acquisition is known and extend the backprojection step of the reconstruction to take this motion into account, reducing the motion artifacts in the resulting images. Furthermore, we propose a combination of our motion compensation algorithm with two versions of an iterative weighted filtered backprojection algorithm.

1. INTRODUCTION

Motion during image acquisition in a CT scan causes artifacts that can severely degrade the image quality and lower the diagnostic value of the reconstructed images. The motion leads to inconsistencies in the acquired data and hence to artifacts such as blurring, streaking, or ghost images. The motion is often caused by patient movement; however, it can also originate from motion of the scanner itself, for example in mobile scanner systems with a moving or sliding gantry. Since completely preventing motion during the scan can be difficult, a solution is required that compensates for the motion and thus improves the quality of the resulting images.

Many algorithms that attempt motion compensation in CT reconstruction have been proposed in the literature. One method worth mentioning is that of Hahn et al. [1], based on partial angle reconstruction. Another interesting motion compensation method is proposed by Bhagalia et al. [2].

In this paper we focus on cases where the motion present during acquisition, or an approximation thereof, is already known. The motion can either be measured with sensors or estimated using a motion estimation method such as the one proposed by Bruder et al. [3]. A commonly used method for motion compensation in CT reconstruction with known motion is the one described by Schäfer et al. [4]. Their motion correction is applied during the backprojection step of the reconstruction: every voxel of the volume to be reconstructed is virtually shifted according to the motion present at the moment of acquisition. They incorporated their method into the reconstruction algorithm of Feldkamp, Davis, and Kress [5] and achieved very good results.

However, several popular reconstruction algorithms, such as the weighted filtered backprojection (WFBP) algorithm by Stierstorfer et al. [6], rely on a rebinning step; when such algorithms are combined with the method of Schäfer et al., artifacts remain in the final result. In the following we propose an extension of the algorithm by Schäfer et al. that can also handle reconstruction algorithms containing a rebinning step. Furthermore, we propose the combination of the described motion compensation algorithm with an iterative reconstruction method.

2. METHODS

2.1 Motion Compensated Weighted Filtered Backprojection

In this paper we combine the motion compensation method proposed by Schäfer et al. [4] with the WFBP algorithm of Stierstorfer et al. [6], a 3D filtered backprojection algorithm for multislice spiral CT. However, our proposed motion compensation method works with any CT reconstruction algorithm using rebinning and backprojection steps.

In the rebinning step of the WFBP algorithm, the projection images acquired in the cone-beam geometry are virtually rearranged to form semi-parallel projection images better suited for the backprojection step. The cone-beam geometry is described using α as the rotation angle, β as the horizontal opening (fan) angle of a ray, and q as the cone parameter describing the vertical cone angle of a ray. For the semi-parallel projection images, the rotation angle is denoted θ, and (p, q) denote the detector column and row coordinates, respectively. The geometry is only semi-parallel, as the rays are parallel along the detector rows but not along the columns. In the rebinning step, the rays of the initially obtained cone-beam projection images P(α, β, q) are rearranged to form semi-parallel projection images P(θ, p, q) using the rebinning formulas

θ = α + β,    p = R sin β        (1)

with R describing the distance from source to isocenter.
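
As an illustration, the following sketch (not part of the original paper) implements the coordinate mapping of equation (1); the function name and the scanner parameter value in the example are assumptions made for this illustration only.

```python
# Minimal sketch of the rebinning mapping in equation (1): a cone-beam ray at
# rotation angle alpha and fan angle beta contributes to the semi-parallel
# view at angle theta and detector column p. R is the source-to-isocenter
# distance (value below is an arbitrary example, not from the paper).
import numpy as np

def rebin_coordinates(alpha: float, beta: float, R: float) -> tuple[float, float]:
    theta = alpha + beta      # azimuthal rebinning: parallel view angle
    p = R * np.sin(beta)      # radial rebinning: signed ray distance from the isocenter
    return theta, p

# Example: the central ray (beta = 0) keeps its view angle, i.e. theta = alpha and p = 0.
print(rebin_coordinates(alpha=1.2, beta=0.0, R=595.0))
```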

The backprojection step of the WFBP algorithm iterates through all rebinned and filtered projection images P(θ, p, q), and for each projection image it iterates through all voxels x of the voxel volume to be reconstructed. For each voxel x, the position (p, q) in the projection image P(θ, p, q) to which the voxel was projected during acquisition is calculated. The value at position (p, q) is then added to voxel x in the voxel volume. This is performed for all voxels and all projection images until the entire voxel volume has been reconstructed.
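
For clarity, a schematic voxel-driven backprojection loop in the spirit of this step might look as follows. This is an illustrative sketch, not the authors' implementation: project_to_detector (the mapping from a voxel x and view angle θ to detector coordinates (p, q)) and interpolate are assumed helper functions, and the weighting and filtering of the WFBP algorithm are omitted.

```python
# Schematic voxel-driven backprojection over rebinned, filtered parallel views.
import numpy as np

def backproject(projections, thetas, voxel_grid, project_to_detector, interpolate):
    """projections[i] is the filtered semi-parallel view P(theta_i, p, q)."""
    volume = np.zeros(len(voxel_grid))
    for proj, theta in zip(projections, thetas):
        for i, x in enumerate(voxel_grid):            # x is a 3D voxel position
            p, q = project_to_detector(x, theta)      # where x was projected to
            volume[i] += interpolate(proj, p, q)      # add the detector value to the voxel
    return volume
```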

For our motion compensation method we assume that the motion present during acquisition is known. For every projection image P(α, β, q) a motion state Mα is required. Each motion state Mα holds the information about the position of the scanned object relative to the scanner at the moment of the acquisition of P(α, β, q).
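
One possible, purely illustrative representation of such a rigid motion state (not prescribed by the paper) is a rotation about a movable center combined with a translation:

```python
# Assumed representation of a rigid motion state M_alpha, stored per
# cone-beam projection angle alpha: rotation about a movable center plus a translation.
from dataclasses import dataclass
import numpy as np

@dataclass
class MotionState:
    rotation: np.ndarray     # 3x3 rotation matrix
    translation: np.ndarray  # translation vector, e.g. in mm
    center: np.ndarray       # position of the rotation center

    def apply(self, x: np.ndarray) -> np.ndarray:
        """Move a voxel position x according to this motion state."""
        return self.rotation @ (x - self.center) + self.center + self.translation

    def invert(self, x: np.ndarray) -> np.ndarray:
        """Undo the motion, i.e. map a moved position back to the static volume."""
        return self.rotation.T @ (x - self.translation - self.center) + self.center
```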

Our proposed algorithm is based on the method by Schäfer et al. [4] and extends it to work with reconstruction algorithms containing a rebinning step, such as the WFBP algorithm. When the WFBP algorithm is combined with Schäfer's method, the backprojection step of the reconstruction is extended by moving every voxel x of the volume to be reconstructed according to the known motion pattern. As described above, the backprojection iterates for each projection image P(θ, p, q) through all voxels x. Knowing the motion state Mα present during the acquisition of the projection image P(α, β, q) obtained at rotation angle α, we can move each voxel x accordingly, assuming α = θ. For each projection image P(θ, p, q), each voxel position x is recalculated to xM,α considering the motion state Mα=θ. Using the new position xM,α, the corresponding position (p, q) in the projection image P(θ, p, q) is calculated, and the value at this position is added to the value at voxel position x. This is repeated for all voxels in the voxel volume and for all obtained projection images, leading to a more accurate reconstruction, as the motion present during acquisition is considered. However, when the WFBP algorithm is combined with the motion compensation method of Schäfer et al. in this manner, some motion artifacts remain in the final result, because the rebinning process is not considered.

Due to the rebinning, the assumption that a single motion state Mα applies to an entire projection image P(θ, p, q) during backprojection is no longer accurate. This assumption only holds for the cone-beam projection images P(α, β, q). The rebinning process rearranges the rays such that each rebinned projection image consists of information obtained at different time points and hence depicts different motion states. The available motion state Mα is no longer correct for the entire projection image P(θ, p, q) but only for the column p = 0, as only there θ = α holds. In our geometry, columns to the right of p = 0 contain information from earlier motion states and columns to the left contain information from later motion states. Hence, using one motion state Mα for an entire projection image P(θ, p, q) in the backprojection leads to motion artifacts remaining in the final result.
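
As a small illustration, and assuming the rebinning relations reconstructed in equation (1), the cone-beam view that supplied column p of the parallel view at angle θ can be recovered as α = θ − arcsin(p/R). The following hypothetical lookup sketches this; motion_states is an assumed callable returning Mα for a given α (interpolating between sampled states if necessary).

```python
# Sketch of the column-to-motion-state lookup implied by equation (1):
# column p of the parallel view at angle theta was filled from the cone-beam
# view acquired at alpha = theta - arcsin(p / R), so columns with p > 0 stem
# from earlier views (earlier motion states) and columns with p < 0 from later ones.
import numpy as np

def motion_state_for_column(theta: float, p: float, R: float, motion_states):
    alpha = theta - np.arcsin(p / R)   # inverse of the azimuthal rebinning
    return motion_states(alpha)        # assumed lookup from alpha to M_alpha
```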

We propose a motion compensation method, described in Algorithm 1, that extends Schäfer's method to consider the rebinning process, enabling a combination with the WFBP reconstruction algorithm. Steps 1) and 2) are the steps described in the method of Schäfer et al., while steps 3)-6) are our proposed extension. Our method uses the motion state Mα=θ to calculate a new position xM,θ of the voxel x, which is then used to calculate an initial position (p, q) in the projection image P(θ, p, q), just as described by Schäfer et al. This position (p, q) is then used to choose a more accurate motion state M: using the column position p and the rebinning formulas in equation (1), we can determine the cone-beam projection image P(α, β, q) from which the information in column p was taken during rebinning, and hence which motion state M was present. With this new, more accurate motion state M, the voxel position x is recalculated, followed by the calculation of the new position (p′, q′) in the projection image P(θ, p, q). The value at this more accurate position (p′, q′) is then added to the voxel x. Steps 3) to 5) can be repeated for even higher accuracy; however, our experiments have shown that a single iteration already yields good results.

Algorithm 1. Motion compensated backprojection considering the rebinning process; steps 1) and 2) follow Schäfer et al. [4], steps 3)-6) are the proposed extension (see the description above).
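
A minimal sketch of the per-voxel procedure described above is given below. It is an illustration under the assumptions introduced in the earlier sketches (a MotionState class with an apply method, a project_to_detector mapping, an interpolate helper, and a motion_states lookup indexed by α), not the authors' implementation.

```python
# Sketch of Algorithm 1 for a single voxel x and one parallel view at angle theta.
# `refinements` corresponds to repeating steps 3)-5).
import numpy as np

def backproject_voxel_motion_compensated(x, theta, proj, R, motion_states,
                                         project_to_detector, interpolate,
                                         refinements: int = 1):
    # Steps 1)-2): Schaefer-style compensation with the central motion state M_{alpha=theta}.
    M = motion_states(theta)
    p, q = project_to_detector(M.apply(x), theta)
    # Steps 3)-5): refine the motion state via the column p actually hit,
    # using the inverse of the rebinning relation, alpha = theta - arcsin(p / R).
    for _ in range(refinements):
        M = motion_states(theta - np.arcsin(p / R))
        p, q = project_to_detector(M.apply(x), theta)
    # Step 6): accumulate the (filtered) projection value into the voxel.
    return interpolate(proj, p, q)
```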

For a more efficient implementation, we propose a modification of the algorithm stated above. Instead of computing the exact motion state M for every position p in the projection image P(θ, p, q), we suggest choosing three motion states per projection image and interpolating the motion states in between. Specifically, we use the motion state at position p = 0, which corresponds to the initial motion state Mα=θ, as well as one motion state for a column to the left and one for a column to the right of p = 0. However, we do not choose the leftmost and rightmost columns of the projection image P(θ, p, q), but the columns furthest to the left and right that are actually reached in a projection. These columns need to be calculated individually, as they depend on several parameters. For the three chosen columns the corresponding motion states are calculated using the rebinning equations (1). For a position (p, q) in the projection image P(θ, p, q), the corresponding motion state is then obtained from the motion states of the two closest chosen columns by linear interpolation of the motion values.
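
The interpolation scheme could, for illustration, be sketched as follows; p_left, p_right (the outermost columns actually reached) and blend (a component-wise linear interpolation between two motion states) are assumed quantities and helpers, not names from the paper.

```python
# Sketch of the more efficient variant: only three motion states per view are
# evaluated exactly (at p = 0 and at the outermost reached columns p_left < 0 < p_right);
# states for intermediate columns are obtained by linear interpolation.
def interpolated_motion_state(p, p_left, p_right, M_left, M_center, M_right, blend):
    if p >= 0.0:
        w = p / p_right                   # 0 at the center column, 1 at p_right
        return blend(M_center, M_right, w)
    w = p / p_left                        # 0 at the center column, 1 at p_left
    return blend(M_center, M_left, w)
```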

2.2 Motion Compensated Iterative Weighted Filtered Backprojection

The method described above can also be combined with an iterative weighted filtered backprojection algorithm, such as the one described by Sunnegårdh and Danielsson [7]. The iterative reconstruction algorithm can be described with the vectors p and f denoting the projection data and the resulting image data, and the matrices Q and P denoting the backprojection and forward projection operators, respectively. In our case, Q is the weighted filtered backprojection by Stierstorfer et al. [6] and P is the forward projection by Joseph [8]. The iterative reconstruction process can then be described as

f_{k+1} = f_k - α Q (P f_k - p),    f_0 = Q p        (2)

An initial result f0 = Qp is calculated by backprojecting the rebinned and filtered projection data. The initial result f0 is then forward projected, and the difference to the initial projection data p is calculated. This difference image is backprojected, scaled by a factor α, and subtracted from the previous result f0. This is repeated multiple times, improving the result with every iteration.
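
For illustration, the iteration of equation (2) might be sketched as follows, assuming Q (the weighted filtered backprojection) and P (the Joseph forward projection) are available as functions and alpha_step denotes the update factor α.

```python
# Minimal sketch of the iterative weighted filtered backprojection in equation (2).
def iterative_wfbp(p, Q, P, alpha_step: float, n_iterations: int):
    f = Q(p)                                  # initial result f0 = Q p
    for _ in range(n_iterations):
        residual = P(f) - p                   # forward project and compare to the data
        f = f - alpha_step * Q(residual)      # backproject the residual and update the image
    return f
```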

Including the proposed motion compensation method in this iterative reconstruction process consists of two parts. First, the motion compensation is included in the backprojection operator Q as described above. Second, the motion also needs to be considered in the forward projection operator P. In the forward projection, however, the motion is included again rather than removed, as the forward projected result is compared to the original, not motion compensated projection data p. Including the motion in the forward projection operator P works similarly to the motion compensation in the backprojection operator Q. For the forward projection, we assume an implementation that iterates through all positions (p, q) of the projection image P′(θ, p, q) to be calculated. For each position (p, q), the corresponding ray through the voxel volume is determined and the values along its path are accumulated; this is repeated for all projection images P′(θ, p, q). Knowing the position (p, q), the corresponding motion state at this position can be interpolated as described above. The motion state is then included in the calculation of each ray, changing its path through the voxel volume accordingly. The value accumulated along the ray's path is written to the initial position (p, q) of the projection image P′(θ, p, q). However, with this approach the motion compensation calculations need to be included in every backprojection and every forward projection step of every iteration.
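
One possible realization of such a motion-aware forward projection is sketched below; it is an assumption-laden illustration, not the authors' implementation. ray_for_position and line_integral are hypothetical helpers, the MotionState class and the motion_states lookup are taken from the earlier sketches, and for simplicity the exact per-column motion state is used instead of the three-state interpolation.

```python
# Sketch of a motion-aware forward projection of one parallel view P'(theta, p, q):
# the integration ray is mapped into the coordinates of the static volume by
# undoing the motion, so the simulated projection again "contains" the motion.
import numpy as np

def forward_project_view(theta, detector_positions, volume, R, motion_states,
                         ray_for_position, line_integral):
    view = {}
    for (p, q) in detector_positions:
        # Motion state of the cone-beam view that column p originates from.
        M = motion_states(theta - np.arcsin(p / R))
        src, direction = ray_for_position(theta, p, q)   # ray in scanner coordinates
        # Undo the motion for the ray (positions via invert, directions via the
        # transposed rotation), then accumulate along its path through the volume.
        view[(p, q)] = line_integral(volume, M.invert(src), M.rotation.T @ direction)
    return view
```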

Hence, we propose to use a modification of the iterative reconstruction algorithm stated above, as described by Sunnegårdh and Stierstorfer [9], that allows for a more efficient implementation of the proposed motion compensation algorithm. The algorithm stated in equation (2) is adapted by considering the combination of forward projection followed by backprojection to be the sum of a low-pass filter L and a linear operator A that causes artifacts:

Q P = L + A        (3)

Hence the iterative reconstruction equation (2) can be rewritten as

f_{k+1} = f_k - α ((L + A) f_k - Q p)        (4)

as described in more detail by Sunnegårdh and Stierstorfer [9]. This modification allows the motion compensation algorithm to be included only in the first backprojection step, without the need to repeat the motion compensation calculations in every iteration. Furthermore, there is no need to adapt the forward projection operator to consider motion.
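
For illustration, the modified iteration of equations (3) and (4) might be sketched as follows, assuming L and A are available as image-space operators as in [9]; only the initial backprojection Q p touches the measured data, so only it needs motion compensation.

```python
# Sketch of the modified iteration with QP replaced by L + A (equations (3)-(4)).
def iterative_wfbp_modified(p, Q, L, A, alpha_step: float, n_iterations: int):
    f_bp = Q(p)                          # motion-compensated backprojection, computed once
    f = f_bp
    for _ in range(n_iterations):
        residual = L(f) + A(f) - f_bp    # replaces Q(P f) - Q p; no forward projection needed
        f = f - alpha_step * residual
    return f
```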

3. RESULTS

To evaluate the proposed motion compensation algorithm, we use the Turbell clock phantom [10]: a helical cone-beam CT scan is simulated and the data are reconstructed with and without motion compensation.

To evaluate the motion compensation, the phantom was simulated to move along a predefined path. In this case a rigid motion was introduced, moving the entire phantom via translation, rotation, and a shift of the rotation center. For the acquisition of each projection image P(α, β, q), a motion state Mα of the phantom was defined, consisting of values for translation, rotation, and the position of the rotation center. The values of all motion states Mα used for the simulation were saved for later use in the motion compensated reconstruction, as it is assumed throughout this paper that the motion present during acquisition is known.

Fig. 1 (a) shows the phantom simulated without any motion and reconstructed using the WFBP algorithm [6] as a reference. Fig. 1 (b) shows the WFBP reconstruction of the phantom for a simulated scan including motion. The motion in this example is a translation in the x-direction following a sine curve with an amplitude of 5 mm and one oscillation per scanner rotation. Fig. 1 (c) shows the reconstruction of a scan of the phantom with the same motion, compensated with the method proposed by Schäfer et al. [4] combined directly with the WFBP algorithm. Fig. 1 (d) shows the reconstruction of the scan simulated with the same motion, using our proposed motion compensation method; in this case only one iteration of steps 3) to 5) of our proposed algorithm was performed. For the combination with the WFBP algorithm, our proposed method reduces the motion artifacts significantly better than the combination of the WFBP algorithm with Schäfer's method, even though only one iteration of our proposed algorithm was performed.

Figure 1. Example reconstructions of the Turbell clock phantom (a) without introducing motion, (b) introducing a translation in the x-direction following a 5 mm sine curve, without motion compensation, (c) introducing the same motion and compensating it with the combination of Schäfer's method and the WFBP algorithm, and (d) introducing the same motion and compensating it with our proposed method. Greyscale window C: 0 HU, W: 100 HU.


Fig. 2 shows the results for the two motion compensated iterative reconstruction methods. Again, Fig. 2 (a) and (b) show the reference images without and with the introduced motion, i.e., the same translation used in the experiments shown in Fig. 1. Fig. 2 (c) shows the result of the iterative reconstruction in which the motion compensation is included in every backprojection and every forward projection step. Fig. 2 (d) shows the result of the iterative reconstruction in which the motion compensation is included only in the first backprojection step.

Figure 2. Iterative reconstructions of the Turbell clock phantom (a) without introducing motion, (b) introducing a translation in the x-direction following a 5 mm sine curve, without motion compensation, (c) introducing the same motion and compensating it with the motion considered in every backprojection and every forward projection, and (d) introducing the same motion and compensating it only in the first backprojection. Greyscale window C: 0 HU, W: 100 HU.


4. CONCLUSION

With their motion compensation algorithm, Schäfer et al. [4] proposed a solution for motion artifacts in CT images that yields very good results if the motion present during acquisition is known. However, they proposed the combination of their motion compensation with the reconstruction algorithm of Feldkamp, Davis, and Kress [5]. Combining their method with reconstruction algorithms containing a rebinning step, such as the WFBP algorithm [6], does not yield similarly good results, as different motion states are mixed within each rebinned projection image.

In this paper we propose an extension of the method by Schäfer et al. that considers the rebinning process in the backprojection step, making a combination with the WFBP reconstruction algorithm possible. Furthermore, we propose the combination of our motion compensation method with iterative weighted filtered backprojection.

Our experiments show promising motion compensation results both with the WFBP algorithm and with the iterative reconstruction algorithm, comparable to those achieved when the method by Schäfer et al. is combined, as originally suggested, with the reconstruction algorithm of Feldkamp, Davis, and Kress.

In our experiments we focused on the introduction and compensation of rigid motion; however, the proposed algorithm should also work with non-rigid motion by considering the motion states of individual voxels rather than that of the entire phantom.

REFERENCES

[1] J. Hahn, H. Bruder, C. Rohkohl, T. Allmendinger, K. Stierstorfer, T. Flohr, and M. Kachelrieß, "Motion compensation in the region of the coronary arteries based on partial angle reconstructions from short-scan CT data," Medical Physics, 44 (11), (2017). https://doi.org/10.1002/mp.2017.44.issue-11

[2] R. Bhagalia, J. D. Pack, J. V. Miller, and M. Iatrou, "Nonrigid registration-based coronary artery motion correction for cardiac computed tomography," Medical Physics, 39 (7), 4245-4254 (2012). https://doi.org/10.1118/1.4725712

[3] H. Bruder, C. Rohkohl, K. Stierstorfer, and T. Flohr, "Compensation of skull motion and breathing motion in CT using data-based and image-based metrics, respectively," in Medical Imaging 2016: Physics of Medical Imaging, 97831E (2016).

[4] D. Schäfer, J. Borgert, V. Rasche, and M. Grass, "Motion-compensated and gated cone beam filtered back-projection for 3-D rotational X-ray angiography," IEEE Transactions on Medical Imaging, 25 (7), 898-906 (2006). https://doi.org/10.1109/TMI.2006.876147

[5] L. A. Feldkamp, L. C. Davis, and J. W. Kress, "Practical cone-beam algorithm," J. Opt. Soc. Am. A, 1, 612-619 (1984). https://doi.org/10.1364/JOSAA.1.000612

[6] K. Stierstorfer, A. Rauscher, J. Boese, H. Bruder, S. Schaller, and T. Flohr, "Weighted FBP - a simple approximate 3D FBP algorithm for multislice spiral CT with good dose usage for arbitrary pitch," Physics in Medicine & Biology, 49 (11), 2209 (2004). https://doi.org/10.1088/0031-9155/49/11/007

[7] J. Sunnegårdh and P. E. Danielsson, "Regularized iterative weighted filtered backprojection for helical cone-beam CT," Medical Physics, 35 (9), 4173-4185 (2008). https://doi.org/10.1118/1.2966353

[8] P. M. Joseph, "An improved algorithm for reprojecting rays through pixel images," IEEE Transactions on Medical Imaging, 1 (3), 192-196 (1982). https://doi.org/10.1109/TMI.1982.4307572

[9] J. Sunnegårdh and K. Stierstorfer, "A New Method for Windmill Artifact Reduction," in The 12th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, 420-423 (2013).

[10] H. Turbell, "Cone-Beam Reconstruction Using Filtered Backprojection," Ph.D. dissertation, Linköping University, Linköping, Sweden (2001).