School authors:
External authors:
- Matteo Cesario (King's College London, Maastricht University)
- Simon J. Littlewood (King's College London)
- J. Nadel (King's College London, Heart Research Institute, St Vincent's Hospital Sydney)
- Thomas J. Fletcher (King's College London)
- Anastasia Fotaki (King's College London)
- Carlos Castillo-Passi (Pontificia Universidad Católica de Chile, King's College London)
- Reza Hajhosseiny (Imperial College London, King's College London)
- Jim Pouliopoulos (St Vincent's Hospital Sydney, Victor Chang Cardiac Research Institute)
- A. Jabbour (St Vincent's Hospital Sydney, Victor Chang Cardiac Research Institute)
- Ruperto Olivero (Vall d'Hebron Institut de Recerca (VHIR), Autonomous University of Barcelona, CIBERCV, Vall d'Hebron Hosp Univ)
- Jose Rodriguez-Palomares (Vall d'Hebron Institut de Recerca (VHIR), Autonomous University of Barcelona, CIBERCV, Vall d'Hebron Hosp Univ)
- M. Eline Kooi (Maastricht University)
Abstract:
Background: Magnetic resonance angiography (MRA) is an important tool for aortic assessment in several cardiovascular diseases. Assessment of MRA images relies on manual segmentation, a time-intensive process that is subject to operator variability. We aimed to optimize and validate two deep-learning models for automatic segmentation of the aortic lumen and vessel wall in high-resolution electrocardiogram-triggered free-breathing respiratory motion-corrected three-dimensional (3D) bright- and black-blood MRA images.

Methods: Manual segmentation, serving as the ground truth, was performed on 25 bright-blood and 15 black-blood 3D MRA image sets acquired with the iT2PrepIR-BOOST sequence (1.5T) in patients with thoracic aortopathy. Training was performed with the no-new-U-Net (nnUNet) framework for bright-blood (lumen) and black-blood (lumen and vessel wall) image sets, using a 70:20:10% training:validation:testing split (17/25, 5/25, and 3/25 datasets). Inference was run on datasets (single vendor) from different centres (UK, Spain, and Australia), sequences (iT2PrepIR-BOOST, T2-prepared coronary magnetic resonance angiography [CMRA], and time-resolved angiography with interleaved stochastic trajectories [TWIST] MRA), acquired resolutions (0.9–3 mm³), and field strengths (0.55T, 1.5T, and 3T). Segmentation performance was assessed with the Dice similarity coefficient (DSC) and Intersection over Union (IoU). Postprocessing (3D Slicer) included centreline extraction, diameter measurement, and curved planar reformatting (CPR).

Results: The optimal configuration was the 3D U-Net. Bright-blood segmentation at 1.5T on iT2PrepIR-BOOST datasets (1.3 and 1.8 mm³) and 3D CMRA datasets (0.9 mm³) resulted in DSC ≥ 0.96 and IoU ≥ 0.92. For bright-blood segmentation on 3D CMRA at 0.55T, the nnUNet achieved DSC and IoU scores of 0.93 and 0.88 at 1.5 mm³, and 0.68 and 0.52 at 3.0 mm³, respectively. DSC and IoU scores of 0.89 and 0.82 were obtained for CMRA image sets (1 mm³) at 1.5T (Barcelona dataset). DSC and IoU scores of the BRnnUNet model were 0.90 and 0.82, respectively, for the contrast-enhanced dataset (TWIST MRA). Lumen segmentation on black-blood 1.5T iT2PrepIR-BOOST image sets achieved DSC ≥ 0.95 and IoU ≥ 0.90, and vessel wall segmentation resulted in DSC ≥ 0.80 and IoU ≥ 0.67. Automated centreline tracking, diameter measurement, and CPR were successfully implemented in all subjects.

Conclusion: Automated aortic lumen and wall segmentation on 3D bright- and black-blood image sets demonstrated excellent agreement with the ground truth. This technique enables fast and comprehensive assessment of aortic morphology, with great potential for future clinical application in various cardiovascular diseases.
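The abstract reports agreement with the ground truth using the Dice similarity coefficient (DSC) and Intersection over Union (IoU). The snippet below is a minimal sketch, not taken from the paper, of how both metrics are computed for a pair of binary 3D segmentation masks; it assumes the masks are available as NumPy arrays (e.g., loaded from NIfTI files), and all function and variable names are illustrative.

```python
import numpy as np

def dice_and_iou(ground_truth: np.ndarray, prediction: np.ndarray):
    """Compute DSC and IoU for two binary segmentation masks of equal shape."""
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    gt_sum = gt.sum()
    pred_sum = pred.sum()
    union = gt_sum + pred_sum - intersection
    # DSC = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|
    dsc = 2.0 * intersection / (gt_sum + pred_sum) if (gt_sum + pred_sum) > 0 else 1.0
    iou = intersection / union if union > 0 else 1.0
    return dsc, iou

# Example with a synthetic 3D volume standing in for an aortic lumen mask
gt = np.zeros((64, 64, 64), dtype=np.uint8)
gt[20:40, 20:40, :] = 1
pred = np.zeros_like(gt)
pred[22:40, 20:42, :] = 1
dsc, iou = dice_and_iou(gt, pred)
print(f"DSC = {dsc:.3f}, IoU = {iou:.3f}")
```

Note that the two metrics are monotonically related for a single mask pair (IoU = DSC / (2 − DSC)), which is why the reported DSC values are consistently higher than the corresponding IoU values.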
| Field | Value |
|---|---|
| UT | WOS:001531383800002 |
| Number of Citations | 0 |
| Type | |
| Pages | |
| Issue | 2 |
| Volume | 27 |
| Month of Publication | Winter |
| Year of Publication | 2025 |
| DOI | https://doi.org/10.1016/j.jocmr.2025.101923 |
| ISSN | |
| ISBN | |