MR to CT Registration of Brains using Image Synthesis
Abstract
Computed tomography (CT) is the standard imaging modality for patient dose calculation in radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as the similarity metric to register MRI to CT. However, unlike CT, MRI does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan, as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration that synthesizes a CT-like image (a synthetic CT) from the MRI, using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas, and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration, and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing a 25% improvement in MI and correlation between the MR and CT images after registration compared to state-of-the-art registration methods.
1. INTRODUCTION
CT is the primary imaging modality for radiation therapy planning and dose computation. Accurate segmentation of the target structures and tumors based on CT alone is challenging due to insufficient image contrast. To compensate, MRI is used in conjunction with CT for target and tumor delineation. Although MRI shows excellent soft tissue contrast with high SNR, MR images suffer from geometric distortions and do not provide the electron density information needed for dose computation. Therefore, accurate registration between MRI and CT is crucial for accurate radiotherapy planning and for delivering the prescribed dose to the patient.1 This is particularly critical in areas such as the head and neck, where dose to critical structures near the planning target volume must be limited.
Several MR to CT registration techniques have been proposed in the past. Mutual information (MI) is a common cost function used to drive such a registration.2 MR to CT methods typically require users to input landmarks or draw regions of interest on the CT and MR, and the MR is then deformably registered using the segmentations of those regions3 or the landmarks.4 It has also been shown that fully automatic methods perform comparably to or worse than semi-automatic methods,5 although any manual intervention is prone to reproducibility error and is time-consuming.
Unlike CT, MR intensities do not possess a calibrated intensity scale. Thus MR intensities obtained from different scanners or imaging sessions usually have different scales and probability distributions. As MI depends on the joint distribution of the images, it sometimes leads to local maxima,6 especially when the intensity scales are widely different. An example is shown in Fig. 1, where MR and CT images of the same subject were registered by rigid registration followed by a b-spline registration using commercial software (VelocityAI, Velocity Medical Solutions, Atlanta, GA).7 We also computed the registration using SyN, a state-of-the-art MI-based diffeomorphic deformable registration algorithm.8 Neither method properly registered soft tissues such as the ventricles (red arrow), although both registered the skull reasonably well. The soft tissue registration was improved with the proposed method.
In this paper, we propose a fully automated framework to register MR to CT via a synthetic CT (sCT) that has the same intensity scale as a CT image. We use a co-registered MR and CT image pair as the atlas; this atlas registration is often done in a semi-automated way. Note that the atlas and subject images are not registered to each other. The atlas and subject images are first decomposed into patches, generating corresponding "patch clouds". The subject patch cloud is matched to the atlas cloud using a number of Gaussian mixture models, incorporating the idea of coherent point drift.9 The sCT image is obtained as a maximum likelihood estimate from the model. Since the sCT is in the same space as the subject MR and contains CT-like intensities, it is deformably registered to the original subject CT by maximizing cross-correlation (CC) using SyN. The subject MRI is then registered to the CT by applying the corresponding deformation. We compare the proposed framework with two methods, a b-spline based registration (VelocityAI7) and SyN.8 In both cases, the MR is registered to the CT using MI as the similarity metric. Similar inter-modality synthesis has previously been explored in an MR image registration context.10–12 We emphasize that our sCT images are used for the sole purpose of registration improvement and are treated as an intermediate result; they are not meant to be used by radiologists for any diagnostic purpose.
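To make the pipeline concrete, the following is a minimal sketch of the registration-via-synthesis loop, assuming the ANTsPy package as a stand-in SyN implementation; the file names are illustrative, and the synthesis step of Sec. 2 is assumed to have already produced the sCT volume.

```python
# Minimal sketch of the proposed pipeline using ANTsPy as a stand-in SyN
# implementation. File names are illustrative; the sCT is assumed to have
# been produced by the synthesis step of Sec. 2 and saved to disk.
import ants

mr  = ants.image_read("subject_mr.nii.gz")   # subject MRI (contrast C1)
ct  = ants.image_read("subject_ct.nii.gz")   # subject CT
sct = ants.image_read("subject_sct.nii.gz")  # synthetic CT, in MR space

# Deformable registration of the sCT (moving) to the CT (fixed); "SyNCC"
# drives SyN with a cross-correlation metric, matching the paper's use of CC.
reg = ants.registration(fixed=ct, moving=sct, type_of_transform="SyNCC")

# The MRI shares the sCT's space, so the same deformation registers it to CT.
mr_in_ct = ants.apply_transforms(fixed=ct, moving=mr,
                                 transformlist=reg["fwdtransforms"])
```

The key point is that the deformable step compares two images on the same (CT) intensity scale, so a mono-modality metric such as CC can replace MI.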
2. METHOD
2.1 Atlas and patch description
We define the atlas as a pair of co-registered images {a1, a2} having the same resolution, with contrasts C1 and C2, respectively. In this paper, C1 is MR and C2 is CT. The subject, also having the same resolution, is denoted b1 and is of contrast C1. Both a1 and b1 are normalized such that their white matter (WM) peak intensities are at unity; the WM peak intensity is found from the corresponding histogram. At each voxel of an image, 3D patches of size p × q × r are stacked into 1D vectors of size d × 1, with d = pqr. Atlas C1 and C2 contrast patches are denoted yj and vj, respectively, where j = 1, … , M. Subject b1 yields C1 contrast patches denoted xi, i = 1, … , N. The unobserved C2 contrast subject patches of b1 are denoted ui. N and M are the numbers of non-zero voxels in the subject and the atlas, respectively. We combine the patch pairs as 2d × 1 vectors, pi = [xiT, uiT]T for the subject and qj = [yjT, vjT]T for the atlas.
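As a concrete illustration, a minimal numpy sketch of this patch decomposition might look as follows; the WM-peak estimate via the histogram mode and the odd patch size are illustrative assumptions.

```python
# Decompose an MR volume into a patch cloud after unit-WM-peak normalization.
# Patch sizes are assumed odd; the WM peak is crudely approximated by the
# intensity mode of the nonzero voxels.
import numpy as np

def extract_patches(img, p=3, q=3, r=3):
    """Stack the p*q*r neighborhood of every nonzero voxel into a d-vector."""
    d = p * q * r
    pad = np.pad(img, ((p // 2,) * 2, (q // 2,) * 2, (r // 2,) * 2))
    idx = np.argwhere(img != 0)                # nonzero voxels only
    patches = np.empty((len(idx), d))
    for n, (i, j, k) in enumerate(idx):
        patches[n] = pad[i:i + p, j:j + q, k:k + r].ravel()
    return patches, idx

img = np.load("subject_mr.npy")                # hypothetical MR volume on disk
hist, edges = np.histogram(img[img > 0], bins=100)
wm_peak = edges[hist.argmax()]                 # crude WM-peak estimate
x, coords = extract_patches(img / wm_peak)     # subject patch cloud {x_i}
```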
2.2 Contrast synthesis algorithm
The subject and atlas C1 patches represent patterns of intensities that are scaled to a similar intensity range. Therefore, an atlas patch whose intensity pattern is similar to that of a given subject patch might arise from the same distribution of tissues. In that case, the corresponding C2 patch in the atlas can be expected to approximate the C2 contrast of the subject in that patch.13 One could naively find a single atlas patch that is close (or closest) to the subject patch and use the corresponding C2 atlas patch directly in synthesis. A slightly more sophisticated approach is to find a sparse collection of atlas patches that better reconstructs the subject patch, and then use the same combination of C2 patches to reconstruct the synthetic image.14–16 Neither of these approaches uses the C2 atlas patches in selecting the combination. Joint dictionary learning using both C1 and C2 contrast patches has been proposed in a registration framework.17 In this paper, we propose a synthesis framework that combines a small number of patches and takes advantage of the C2 patches in the atlas while selecting the C1 patches. This kind of pattern matching suffers if the subject contains pathologies (e.g., tumors or lesions) that are not present in the atlas. Nevertheless, since the synthetic images are used to improve registration and are treated as an intermediate result, we synthesize C2 contrast patches from matching atlas patches irrespective of their underlying biology.
We propose a probabilistic model that explicitly relates subject C1 patches to atlas C1 patches. Since the atlas patches may not be plentiful enough to closely resemble all subject patches, we consider all convex combinations of pairs of atlas patches. We then postulate that subject patches are random vectors whose probability densities are Gaussian, with means given by unknown convex combinations of pairs of atlas patches and with unknown covariance matrices. This framework captures the notion that a convex combination of a small number of atlas patches (just two in this paper) can describe a subject patch. To tie the C1 and C2 contrasts together, we further assume that the subject's unknown C2 patch is a random vector whose mean is the same convex combination of the same two atlas patches as for the C1 contrast, with a covariance matrix that can, in principle, be different.
This can be summarized succinctly by considering a subject patch pi and two associated atlas patches qj and qk. Then pi is assumed to arise from the Gaussian distribution

$$p_i \sim \mathcal{N}\left(\alpha_{it}\, q_j + (1 - \alpha_{it})\, q_k,\ \Sigma_t\right), \tag{1}$$

where Σt is a covariance matrix associated with the atlas patches, t ∈ Ψ, where Ψ is the set of all pairs of atlas patch indices, and αit ∈ (0, 1) is a mixing coefficient relating the ith subject patch to the tth atlas patch pair. In essence, each subject patch follows an |Ψ|-class Gaussian mixture model (GMM), with one class per atlas patch pair.
We define zit as the indicator that pi comes from the GMM class of the t = {j, k}th atlas pair, with Σt∈Ψ zit = 1 ∀i and zit ∈ {0, 1}. Then the probability of observing pi can be written as

$$P(p_i \mid \Theta) = \sum_{t \in \Psi} P(z_{it} = 1 \mid \Sigma_t, \alpha_{it})\, \frac{1}{(2\pi)^{d}\,|\Sigma_t|^{1/2}} \exp\!\left(-\tfrac{1}{2}\, h_{it}^{T} \Sigma_t^{-1} h_{it}\right), \tag{2}$$

where hit = pi − αitqj − (1 − αit)qk, t ≡ {j, k}. The prior probability that pi originates from the distribution of the tth pair is P(zit = 1 | Σt, αit). Without any knowledge of xi, this prior should ideally depend on a classification of the patch cloud Q. However, we avoid any classification of patches by assuming a uniform prior. We have experimentally found that a full-rank Σt is often less robust to estimate. Instead, we assume it to be separable and block diagonal,

$$\Sigma_t = \begin{bmatrix} \sigma_{1t}^2 I_d & 0 \\ 0 & \sigma_{2t}^2 I_d \end{bmatrix},$$

indicating that the variations of each voxel in a patch are the same around the means, although individual voxels can be of different tissues. Thus the joint probability becomes

$$P(P, Z \mid \Theta) = \prod_{i=1}^{N} \prod_{t \in \Psi} \left[ \frac{1}{|\Psi|}\, \mathcal{N}\!\left(p_i;\ \alpha_{it} q_j + (1 - \alpha_{it}) q_k,\ \Sigma_t\right) \right]^{z_{it}}. \tag{3}$$
The set of parameters is Θ = {σ1t, σ2t, αit; i = 1, … , N, t ∈ Ψ}, and the maximum likelihood estimate of Θ is found by maximizing Eqn. 3 using expectation-maximization (EM). The EM algorithm can be outlined as follows.
E-step: to find the new update Θ(m+1) at the mth iteration, compute the expectation Q(Θ(m+1) | Θ(m)) = E[log P(P, Z | Θ(m+1)) | X, Θ(m)].
M-step: find the new estimate Θ(m+1) from the previous one by maximizing this expectation, Θ(m+1) = arg maxΘ Q(Θ | Θ(m)).
The E-step requires the computation of E[zit | P, Θ(m)] = P(zit = 1 | P, Θ(m)). Given that zit is an indicator variable, it can be shown by Bayes' rule, with the uniform prior over pairs, that

$$w_{it}^{(m)} \equiv E\!\left[z_{it} \mid p_i, \Theta^{(m)}\right] = \frac{\mathcal{N}\!\left(p_i;\ \alpha_{it}^{(m)} q_j + (1 - \alpha_{it}^{(m)}) q_k,\ \Sigma_t^{(m)}\right)}{\sum_{s \in \Psi} \mathcal{N}\!\left(p_i;\ \alpha_{is}^{(m)} q_{j'} + (1 - \alpha_{is}^{(m)}) q_{k'},\ \Sigma_s^{(m)}\right)}, \quad s \equiv \{j', k'\}.$$
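A hedged numpy sketch of this responsibility computation under the separable covariance above might look as follows; array names and shapes are illustrative, and (as noted in the next paragraph) the C2 half of pi holds the current expectation of ui.

```python
import numpy as np

def responsibilities(p_i, means, var1, var2, d):
    """E-step weights w_it for one subject patch (uniform prior over pairs).

    p_i   : (2d,) subject patch; its C2 half holds the current E[u_i]
    means : (T, 2d) convex combinations alpha_it*q_j + (1-alpha_it)*q_k
    var1, var2 : (T,) current sigma_1t^2 and sigma_2t^2
    """
    # Mahalanobis terms for the C1 and C2 halves under the block-diagonal
    # covariance, plus the log-determinant d*(log var1 + log var2).
    r1 = ((p_i[:d] - means[:, :d]) ** 2).sum(axis=1) / var1
    r2 = ((p_i[d:] - means[:, d:]) ** 2).sum(axis=1) / var2
    log_w = -0.5 * (r1 + r2 + d * (np.log(var1) + np.log(var2)))
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()                # responsibilities sum to one
```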
At each iteration, we replace the value of the unobserved ui with its expectation. The M-step involves maximizing the expectation of the log-likelihood w.r.t. the parameters given the current responsibilities; the update for αit reduces to finding a root of a function F(α) that depends on two statistics A and B. It should be noted that F(0) = −1 and F(1) = 1, ∀ A, B; thus there is always a feasible solution αit ∈ (0, 1).
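Since F is continuous with F(0) = −1 and F(1) = 1, the intermediate value theorem guarantees a root in (0, 1), so a bracketing root-finder suffices. In the sketch below, F is a placeholder for the derivative condition on (A, B), whose exact form is not reproduced in this excerpt.

```python
# Bracketing root-finder for the alpha_it update. Only continuity of F and
# F(0) = -1, F(1) = 1 are assumed, which guarantee a sign change on (0, 1).
from scipy.optimize import brentq

def solve_alpha(F):
    return brentq(F, 0.0, 1.0)  # feasible mixing coefficient in (0, 1)
```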
The imaging model is valid only for atlas and subject patches that are close in intensity. Using a non-local type of criterion,18 for every subject patch xi we choose a feasible set of L atlas patches, namely the L nearest neighbors of xi. Thus the ith subject patch follows a GMM whose classes are restricted to pairs drawn from these L atlas patches, rather than from all of Ψ.
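As an illustration, the feasible-set selection can be done with a KD-tree; the sketch below uses random stand-ins for the patch clouds, and L is a representative value, not the paper's setting.

```python
# Feasible-set selection: the L nearest atlas C1 patches for every subject
# patch, via a KD-tree. Patch clouds here are random stand-ins (M x d, N x d).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
atlas_c1 = rng.standard_normal((1000, 27))   # stand-in atlas patch cloud {y_j}
subject  = rng.standard_normal((500, 27))    # stand-in subject patch cloud {x_i}

L = 8
tree = cKDTree(atlas_c1)
_, nn = tree.query(subject, k=L)             # nn[i]: indices of the L neighbors
# GMM classes for patch i are then restricted to pairs drawn from nn[i].
```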
3. RESULTS
We experimented on images from ten brain cancer patients with tumors of various sizes and shapes, each having one MR and one CT acquisition. A different subject was chosen as the atlas, for which the MRI was carefully registered to the CT using commercial software.7 This registered MR-CT pair was used as the atlas {a1, a2}. For each of the ten subjects, we registered the MRI to the CT using b-spline registration7 and SyN.8 We also generated the sCT image from the MRI (b1), registered the sCT to the original CT using SyN, and applied the resulting deformation to the MRI. An example of the atlas {a1, a2}, the subject MR b1, the registration results from b-spline, SyN, and sCT, and the corresponding deformed MR images from their registrations are shown in Fig. 2.
The top of Fig. 3 shows absolute values of correlation and MI between the CT and the registered MR brain volumes of the ten subjects. The brain volumes are obtained from skull-stripping19 masks of the MR images. Both MI and correlation increase significantly (p-value < 0.05) after registration via sCT, indicating significant improvement in MR-CT registration of the brains. Another registration metric is the variability of CT intensities within different tissue classes. For each subject, we segmented the registered MRI into three classes, cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM), using an atlas-based method.20 The means and standard deviations of the CT intensities for each class are plotted in the bottom row of Fig. 3. For every tissue, the standard deviations from the sCT-deformed MRI are significantly lower (p-value < 0.05) than those from both the b-spline and SyN registered MRIs. Misregistration causes the inclusion of other structures with different intensities in the segmentation, which leads to higher variability. The sCT registration also gives the mean CT intensities closest to the reference values of 15, 40, and 25 Hounsfield units for CSF, GM, and WM, respectively.21 A visual inspection of the subject MRI in Fig. 2 shows the registration improvement near the ventricles (red arrow) and the frontal gray matter (yellow arrow).
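For reference, a minimal numpy version of the two reported metrics (mutual information from a joint histogram, and absolute Pearson correlation) over the brain-mask voxels might look as follows; the bin count and the random stand-in arrays are illustrative.

```python
# Evaluation metrics of Fig. 3 computed over brain-mask voxels: MI from a
# joint histogram, and |Pearson correlation|. Inputs are 1-D intensity
# arrays taken from the same voxels of the CT and the registered MR.
import numpy as np

def mutual_information(a, b, bins=64):
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal over CT bins
    py = pxy.sum(axis=0, keepdims=True)       # marginal over MR bins
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ct_vals = rng.normal(size=10000)              # stand-in masked CT intensities
mr_vals = rng.normal(size=10000)              # stand-in masked MR intensities
mi = mutual_information(ct_vals, mr_vals)
corr = abs(np.corrcoef(ct_vals, mr_vals)[0, 1])
```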
4. DISCUSSION AND CONCLUSION
We have proposed a novel framework for registering MRI to CT using CT synthesis as an intermediate step. Since the sCT has the same intensity scale as CT, we register the sCT to the subject CT using CC as the similarity metric, instead of directly registering MRI to CT based on MI. For the current work, we created the atlas by registering MR and CT using a semi-automatic method. We note that this atlas MR-CT registration could itself be improved using the proposed framework (i.e., synthesize and register); however, for the current work, we visually checked the atlas and found little registration error. Although the sCT is only used to improve registration in this study, the synthesis quality can be improved by using multiple atlases and different patch sizes or shapes. The capability of accurately synthesizing CT from MRI will allow us to compute dose directly on the sCT and enable solely MRI-based radiotherapy planning.
Acknowledgments
This work was supported by the NIH/NIBIB under grants 1R01EB017743 and 1R21EB012765.