Image-guided thermal ablation has become an important therapeutic option for patients with cardiac arrhythmia: it is minimally invasive and provides better and faster patient recovery. However, to enhance ablation guidance, the therapist needs to link, through image registration, the intraoperative images to the high-resolution anatomical preoperative images in which the ablation path has been defined. In this work, we present a convolutional neural network (CNN) framework for transesophageal ultrasound/computed tomography image registration, addressing the high computation time of classical iterative methods, which makes them unsuitable for real-time application. The proposed process is as follows: we first pass the moving and fixed input image pairs through a siamese architecture of convolutional layers, extracting feature maps of the moving and fixed images analogous to dense local descriptors; we then match the feature maps; and finally we pass the resulting correspondence map into a registration network that directly outputs the parameter set of the rigid registration. Registration accuracy is quantified by the Target Registration Error (TRE) at specific anatomical landmarks. The registration results show a median TRE of 2.2 mm over all fiducial points, and the registration computation time was around 3 ms per image pair, compared to around 70 s for the classical iterative methods. In future work, we will extend our approach to 2D/3D learning-based registration to refine the estimation of the transesophageal probe pose in the 3D preoperative volume.
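
To make the described pipeline concrete, the following PyTorch sketch shows one possible way to assemble the three stages (shared siamese encoder, feature-map matching, and a regression head outputting rigid parameters). The layer sizes, the correlation-based matching, and the 2D parameterisation (rotation plus translation) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SiameseRigidRegNet(nn.Module):
    """Minimal sketch (assumed layout): a shared CNN encoder extracts dense
    feature maps from the fixed and moving images, a correlation step matches
    the maps, and a small head regresses rigid transform parameters."""

    def __init__(self, in_channels=1, feat_channels=64):
        super().__init__()
        # Shared encoder applied to both images (siamese branches share weights).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Regression head mapping the correspondence map to rigid parameters.
        self.head = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 3),  # e.g. (theta, tx, ty) for a 2D rigid transform
        )

    @staticmethod
    def correlate(f_fixed, f_moving):
        # Global correlation: every location of the fixed map is compared with
        # every location of the moving map, then reduced to a single-channel
        # correspondence map by keeping the best match per fixed location.
        b, c, h, w = f_fixed.shape
        ff = f_fixed.flatten(2)                    # (B, C, H*W)
        fm = f_moving.flatten(2)                   # (B, C, H*W)
        corr = torch.bmm(ff.transpose(1, 2), fm)   # (B, H*W, H*W)
        corr = corr.max(dim=2).values              # (B, H*W)
        return corr.view(b, 1, h, w)

    def forward(self, fixed, moving):
        f_fixed = self.encoder(fixed)
        f_moving = self.encoder(moving)
        corr = self.correlate(f_fixed, f_moving)
        return self.head(corr)                     # predicted rigid parameters


if __name__ == "__main__":
    # Random tensors stand in for a CT slice (fixed) and a US image (moving).
    net = SiameseRigidRegNet()
    fixed = torch.rand(1, 1, 128, 128)
    moving = torch.rand(1, 1, 128, 128)
    params = net(fixed, moving)
    print(params.shape)  # torch.Size([1, 3])
```

A single forward pass of such a network is what allows millisecond-scale registration, in contrast to iterative methods that re-evaluate a similarity metric at every optimisation step.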