Achieve Fairness in AI-Assisted Mobile Healthcare Apps through Unsupervised Federated Learning
Project Number
5R01EB033387-03
Contact PI/Project Leader
HU, JINGTONG
Awardee Organization
UNIVERSITY OF PITTSBURGH AT PITTSBURGH
Description
Abstract Text
Deep learning models have been deployed on an increasing number of edge and mobile devices to provide healthcare in everyday life, from mobile dermatology assistants and mobile eye cancer (leukocoria) detection to emotion detection and comprehensive vital-sign monitoring. All of these techniques rely on the cameras that come with mobile devices and inevitably raise fairness concerns to varying degrees, due to inherent gender, race, and/or socioeconomic bias in existing AI models. Compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in data collection and research. Without a study population that reflects the diversity of the U.S. population, potential safety or efficacy issues could be missed. Worse, with inadequate data, AI algorithms could misdiagnose underrepresented people, widening health care disparities. There is therefore a critical need to address racial, skin-color, and socioeconomic inequities in AI-assisted mobile diagnosis.
This project will address the fairness issue in mobile AI assistants, using dermatology diagnosis and skin-color inequity as the study case. Instead of collecting an equitable demographic dataset in a centralized way, it will develop a federated on-device learning framework for inclusive participation, selective data contribution, and continuous personalization. The framework can continuously learn from new users' data as they use the mobile apps, with little human supervision. An unsupervised federated learning (FL) framework will be developed to operate across heterogeneous hardware (high-end and low-end) and heterogeneous models, so that users of all socioeconomic statuses can participate in the research. While various FL techniques have been developed, how to implement unsupervised FL with both hardware and model heterogeneity remains unclear. It is also essential to achieve this with as little human supervision as possible, since it is impractical for a doctor to constantly label images while users are using these AI-based apps. In addition, even with FL, data from the predominant population will still dominate what is collected; non-uniform data selection techniques will therefore be developed to automatically weigh the importance of different data for maximum fairness. Finally, not all neural networks exhibit the same inherent fairness even when trained on the same biased data, so a fairness-aware neural architecture search framework will be developed to find the networks that achieve the most fairness.
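To make the non-uniform weighting idea concrete, the sketch below shows one way it could enter federated averaging: client updates from demographic groups that are underrepresented in the pooled data are upweighted at the server. This is a minimal illustration under assumed names (ClientUpdate, fairness_weighted_average, group_share, alpha), not the project's actual design.

```python
# Minimal sketch: fairness-weighted federated averaging.
# All names and the weighting rule are illustrative assumptions,
# not the implementation described in this award.
import numpy as np
from dataclasses import dataclass

@dataclass
class ClientUpdate:
    params: np.ndarray   # locally updated model parameters
    num_samples: int     # size of the client's local dataset
    group_share: float   # assumed fraction of this group in the overall population

def fairness_weighted_average(updates: list[ClientUpdate], alpha: float = 1.0) -> np.ndarray:
    """Aggregate client updates, upweighting clients whose demographic
    group is underrepresented (non-uniform data selection)."""
    # Standard FedAvg weight: proportional to local data size.
    base = np.array([u.num_samples for u in updates], dtype=float)
    # Fairness reweighting: inverse of the group's share, so minority
    # groups contribute more per sample; alpha controls the strength.
    fair = np.array([1.0 / max(u.group_share, 1e-6) for u in updates]) ** alpha
    w = base * fair
    w /= w.sum()
    # Weighted average of the parameter vectors.
    return sum(wi * u.params for wi, u in zip(w, updates))

# Usage: three clients with identically shaped parameter vectors.
rng = np.random.default_rng(0)
clients = [
    ClientUpdate(rng.normal(size=4), num_samples=500, group_share=0.80),  # majority group
    ClientUpdate(rng.normal(size=4), num_samples=60,  group_share=0.15),
    ClientUpdate(rng.normal(size=4), num_samples=20,  group_share=0.05),  # minority group
]
print(fairness_weighted_average(clients))
```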
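Similarly, a fairness-aware architecture search could score each candidate network by its mean accuracy minus a penalty on the accuracy gap between demographic groups. The candidates and per-group accuracies below are made up purely for illustration; the scoring rule is an assumed example, not the project's method.

```python
# Minimal sketch: fairness-aware scoring of architecture candidates.
def fairness_score(per_group_acc: dict[str, float], lam: float = 0.5) -> float:
    """Reward mean accuracy, penalize the gap between the best- and
    worst-served demographic group."""
    accs = list(per_group_acc.values())
    mean_acc = sum(accs) / len(accs)
    gap = max(accs) - min(accs)
    return mean_acc - lam * gap

# Placeholder candidates; real scores would come from training/evaluation.
candidates = {
    "arch_A": {"light_skin": 0.92, "dark_skin": 0.74},  # accurate but unfair
    "arch_B": {"light_skin": 0.88, "dark_skin": 0.85},  # slightly less accurate, far fairer
}
best = max(candidates, key=lambda name: fairness_score(candidates[name]))
print(best, round(fairness_score(candidates[best]), 3))  # -> arch_B
```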
The expected outcome of this project is a holistic framework that mitigates the impact of inequity by improving inference performance for minorities. The developed techniques will be implemented as mobile apps on heterogeneous smartphones and evaluated with both public datasets and patients at UPMC. Data and code will be made available for public research. The developed techniques can be readily extended to other AI-assisted diagnoses and can account for inequity along dimensions such as age, sex, and race.
Public Health Relevance Statement
Project Narrative
Deep learning models have been deployed on an increasing number of edge and mobile devices to provide diagnosis. However, all of these techniques rely on the cameras of those devices and can raise fairness concerns to varying degrees. This project aims to address this new technology inequity by developing a distributed, federated on-device machine learning framework that learns from diversified data sources with minimal human supervision while preserving privacy.
National Institute of Biomedical Imaging and Bioengineering
CFDA Code
286
DUNS Number
004514360
UEI
MKAGLD59JRL1
Project Start Date
15-August-2022
Project End Date
30-April-2026
Budget Start Date
01-May-2024
Budget End Date
30-April-2025
Project Funding Information for 2024
Total Funding
$413,212
Direct Costs
$332,535
Indirect Costs
$80,677
Year
2024
Funding IC
National Institute of Biomedical Imaging and Bioengineering
FY Total Cost by IC
$413,212
Sub Projects
No Sub Projects information available for 5R01EB033387-03
Publications
Publications are associated with projects, but cannot be identified with any particular year of the project or fiscal year of funding. This is due to the continuous and cumulative nature of knowledge generation across the life of a project and the sometimes long and variable publishing timeline. Similarly, for multi-component projects, publications are associated with the parent core project and not with individual sub-projects.
No Publications available for 5R01EB033387-03
Patents
No Patents information available for 5R01EB033387-03
Outcomes
The Project Outcomes shown here are displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed are those of the PI and do not necessarily reflect the views of the National Institutes of Health. NIH has not endorsed the content below.
No Outcomes available for 5R01EB033387-03
Clinical Studies
No Clinical Studies information available for 5R01EB033387-03
News and More
Related News Releases
No news release information available for 5R01EB033387-03
History
No Historical information available for 5R01EB033387-03
Similar Projects
No Similar Projects information available for 5R01EB033387-03