Fast and Robust Deep Learning for Medical Imaging: Segmentation and Registration Methods Invariant to Contrast and Resolution
Project Number: 5R01EB033773-02
Former Number: 1R01EB033773-01
Contact PI/Project Leader: DALCA, ADRIAN
Awardee Organization: MASSACHUSETTS GENERAL HOSPITAL
Description
Abstract Text
Project Summary
Title
Fast and Robust Deep Learning for Medical Imaging: Segmentation and Registration Methods Invariant to Contrast and Resolution.
Summary
Segmentation and registration are critical tasks in a broad range of scientific studies and have been widely implemented in imaging analysis frameworks. Unfortunately, most existing tools suffer from two important drawbacks: they are computationally demanding, and they often impose limiting restrictions on the type of image data that can be accurately analyzed. While the former drawback has recently been addressed through the use of deep neural networks that execute rapidly once trained, these systems amplify the latter, which remains a major restriction. Such tools typically yield accurate results only on a very limited range of scan types, most commonly those they were trained on, and are susceptible to repeating biases present in those data. For segmentation this is particularly burdensome, as training frequently requires manually labeled examples for each type of input data.
The constraint on image type greatly restricts image analysis and its downstream impact in an array of important domains. For example, in research imaging, it limits multi-site and longitudinal studies, which must hold acquisition protocols constant or attempt to harmonize protocols across different acquisition platforms; even this process has limited success when the differences are too extreme (e.g., across field strengths). Investigators often need to adjust, redesign, or retrain the tools for their intended tasks and available images and manual labels, creating further barriers to analysis. There is also a wealth of knowledge to be gained from analyzing clinically sourced MR images, which could lead to a better understanding of the biological underpinnings of many disease processes and a more precise quantification of the efficacy of therapeutic interventions. However, scans acquired as part of routine clinical care are often of diverse contrast, significantly lower resolution, and lower quality due to noise or subject motion. Few, if any, publicly available tools can handle the wide range of acquisition variability in typical clinical imaging.
We propose to design and distribute machine-learning-based tools that remove these barriers. We will develop deep learning methods for image segmentation and registration that retain their accuracy on unprocessed scans of most contrasts and resolutions, without the need for training data or network fine-tuning for each data variation. We will build on our recent work in learning-based methods for segmentation, registration, synthesis, and augmentation to leverage the speed of neural networks, the richness of MR physics models, and the generalizability of probabilistic Bayesian models. We will validate these tools in a large, comprehensive multi-site study incorporating new manual labeling of scans spanning different ages, sexes, and races. Finally, we will deploy them to analyze anatomy and white matter lesions in a retrospective stroke cohort. The novel techniques will be implemented both as standalone open-source software and as part of the FreeSurfer analysis package, making them freely available to thousands of method developers as well as scientific and clinical researchers.
Public Health Relevance Statement
Narrative
The proposed methods will enable scientists and clinical investigators to tackle questions with imaging datasets that could not previously be properly analyzed. This will directly improve the research capabilities of thousands of clinicians and scientific investigators of human disease, leading to a sustained positive impact on basic and clinical research.
National Institute of Biomedical Imaging and Bioengineering
CFDA Code
286
DUNS Number
073130411
UEI
FLJ7DQKLL226
Project Start Date
06-September-2023
Project End Date
31-August-2027
Budget Start Date
01-September-2024
Budget End Date
31-August-2025
Project Funding Information for 2024
Total Funding: $561,881
Direct Costs: $343,533
Indirect Costs: $218,348
Year | Funding IC | FY Total Cost by IC
2024 | National Institute of Biomedical Imaging and Bioengineering | $561,881
Sub Projects
No Sub Projects information available for 5R01EB033773-02
Publications
Publications are associated with projects, but cannot be identified with any particular year of the project or fiscal year of funding. This is due to the continuous and cumulative nature of knowledge generation across the life of a project and the sometimes long and variable publishing timeline. Similarly, for multi-component projects, publications are associated with the parent core project and not with individual sub-projects.
No Publications available for 5R01EB033773-02
Patents
No Patents information available for 5R01EB033773-02
Outcomes
The Project Outcomes shown here are displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed are those of the PI and do not necessarily reflect the views of the National Institutes of Health. NIH has not endorsed the content below.
No Outcomes available for 5R01EB033773-02
Clinical Studies
No Clinical Studies information available for 5R01EB033773-02
News and More
Related News Releases
No news release information available for 5R01EB033773-02
History
No Historical information available for 5R01EB033773-02
Similar Projects
No Similar Projects information available for 5R01EB033773-02