Optimizing the Frame Duration for Data-driven Rigid Motion Estimation in Brain PET Imaging
Overview
Purpose: Data-driven rigid motion estimation for brain PET imaging is usually performed on data frames sampled at low temporal resolution, both to reduce the overall computation time and to provide adequate signal-to-noise ratio in each frame. Recent work has demonstrated that list-mode reconstructions of ultrashort frames are sufficient for motion estimation and can be performed very quickly. In this work we perform data-driven motion estimation by image-based registration of reconstructions of very short frames, and optimize the relevant reconstruction and registration parameters (frame duration, MLEM iterations, image pixel size, post-smoothing filter, reference image creation, and registration metric) to ensure accurate registrations while maximizing temporal resolution and minimizing total computation time.
Methods: Data from ¹⁸F-fluorodeoxyglucose (FDG) and ¹⁸F-florbetaben (FBB) tracer studies with varying count rates, acquired on PET/MR and PET/CT scanners, are analyzed. For framed reconstructions using various parameter combinations, interframe motion is simulated and image-based registrations are performed to estimate that motion.
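As a hedged illustration of the registration step described above (a minimal sketch, not the authors' implementation), the following Python code uses SimpleITK to rigidly align a single short-frame reconstruction to a reference image. The 5 mm Gaussian post-filter and the mean-squares metric are assumed placeholder choices within the ranges examined in this work, and the input images are assumed to be already-reconstructed 3D frames.

    import SimpleITK as sitk

    def register_frame_to_reference(frame, reference, fwhm_mm=5.0):
        """Rigidly register one short-frame reconstruction to the reference image.
        Both inputs are SimpleITK 3D images; returns the estimated rigid transform."""
        # Post-smoothing filter: convert FWHM to Gaussian sigma (FWHM = 2.355 * sigma).
        sigma = fwhm_mm / 2.355
        frame_s = sitk.SmoothingRecursiveGaussian(frame, sigma)
        ref_s = sitk.SmoothingRecursiveGaussian(reference, sigma)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()          # registration metric (one candidate choice)
        reg.SetInterpolator(sitk.sitkLinear)
        # Regular-step gradient descent: learning rate, minimum step, max iterations.
        reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
        reg.SetOptimizerScalesFromPhysicalShift()

        # Initialize a 6-parameter rigid (Euler) transform at the image centers.
        initial = sitk.CenteredTransformInitializer(
            ref_s, frame_s, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg.SetInitialTransform(initial, inPlace=False)

        return reg.Execute(ref_s, frame_s)

In a full pipeline, one such transform would be estimated per short frame and passed to an event-by-event motion-corrected reconstruction.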
Results: For the FDG and FBB tracers, using 4 × 10⁵ true and scattered coincidence events per frame ensures that 95% of the registrations are accurate to within 1 mm of the ground truth. This corresponds to a frame duration of 0.5-1 s at typical clinical PET activity levels. Using four MLEM iterations with no subsets, a transaxial pixel size of 4 mm, a post-smoothing filter with 4-6 mm full width at half maximum, and a reference image formed by averaging two or more frames provides an optimal parameter set that produces accurate registrations while keeping the reconstruction and processing time low.
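To make the relationship between the per-frame count target and the frame duration concrete, here is a small worked sketch; the specific count rates used below are illustrative assumptions, not values reported in the study.

    def frame_duration_s(target_counts, trues_plus_scatters_cps):
        """Frame duration (s) needed to accumulate the target number of
        true + scattered coincidences at a given count rate (counts/s)."""
        return target_counts / trues_plus_scatters_cps

    # With the 4 x 10^5 counts-per-frame target, assumed rates of 4-8 x 10^5
    # trues+scatters per second give frames of 0.5-1 s, matching the reported range.
    for rate_cps in (4e5, 6e5, 8e5):
        print(f"{rate_cps:.0e} cps -> {frame_duration_s(4e5, rate_cps):.2f} s per frame")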
Conclusions: It is shown that very short frames (≤1 s) can provide fast and accurate data-driven rigid motion estimates for use in an event-by-event motion-corrected reconstruction.