FEDERATED LEARNING MODELS FOR PRIVACY-PRESERVING DATA SHARING AND SECURE ANALYTICS IN HEALTHCARE INDUSTRY

Authors

  • Tonoy Kanti Chowdhury, Master of Science in Information Technology, Washington University of Science and Technology, USA
  • Sai Praveen Kudapa, Stevens Institute of Technology, New Jersey, USA

DOI:

https://doi.org/10.63125/c2dzn006

Keywords:

Federated Learning, Privacy Preservation, Secure Analytics, Healthcare Data, Data Sharing

Abstract

This study investigated the development and evaluation of Federated Learning Models for Privacy-Preserving Data Sharing and Secure Analytics in the Healthcare Industry, presenting a comprehensive quantitative framework to balance privacy, security, and utility in distributed medical data environments. Federated learning enables multiple institutions—such as hospitals, laboratories, and research centers—to collaboratively train shared models without transferring raw patient data, thereby maintaining compliance with global privacy regulations while supporting large-scale analytics. The research compared four configurations: a centralized baseline, federated learning with secure aggregation, federated learning with secure aggregation and differential privacy, and a robustness-enhanced federated model with adaptive aggregation. Empirical analysis across imaging, electronic health record, and wearable sensor datasets demonstrated that federated configurations achieved non-inferior predictive accuracy (AUROC range: 0.917–0.931) compared with the centralized model (AUROC = 0.936) while significantly improving privacy and security indicators. The membership inference advantage declined from 42.5% in centralized settings to below 10% in privacy-enhanced federated models, confirming reduced re-identification risk. Regression and correlation analyses established statistically significant relationships between privacy budgets, encryption depth, and model performance (R² = 0.709; p < 0.001), validating measurable trade-offs between confidentiality and predictive utility. Reliability and validity testing (Cronbach’s α > 0.88) confirmed the internal consistency and reproducibility of quantitative constructs. Findings revealed that privacy-preserving federated learning can achieve secure, efficient, and equitable performance across heterogeneous healthcare institutions with minimal computational overhead (<12%). The study contributes an empirically validated, statistically rigorous model for privacy-preserving healthcare analytics that aligns with international data protection frameworks such as GDPR and HIPAA. By quantifying privacy, security, and performance as measurable system properties, this research establishes federated learning as a practical, ethical, and scalable paradigm for responsible artificial intelligence in healthcare.
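The abstract describes, at a high level, the configuration labelled "federated learning with secure aggregation and differential privacy." The sketch below is a minimal, self-contained illustration of that general pattern only, not the authors' implementation: the logistic-regression model, the pairwise additive masking used as a stand-in for secure aggregation, the synthetic client data, and the clipping and noise parameters are all illustrative assumptions.

```python
# Illustrative sketch of differentially private federated averaging with an
# additive-mask stand-in for secure aggregation, on synthetic data.
# NOT the authors' code; all parameters and data below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression training; returns its model delta."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
    return w - weights  # only the update (delta) leaves the client

def clip(update, max_norm=1.0):
    """Per-client L2 clipping, the usual first step of DP-FedAvg-style training."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def masked_updates(updates):
    """Toy stand-in for secure aggregation: pairwise additive masks that cancel
    in the sum, so the aggregator only ever sees masked individual vectors."""
    n, dim = len(updates), updates[0].shape[0]
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=dim)
            masked[i] += m
            masked[j] -= m
    return masked

# Synthetic "institutions": four clients with mildly heterogeneous feature shifts.
d, true_w = 10, rng.normal(size=10)
clients = []
for k in range(4):
    X = rng.normal(loc=0.2 * k, size=(200, d))
    y = (X @ true_w + 0.5 * rng.normal(size=200) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(d)
clip_norm, noise_mult = 1.0, 0.3  # illustrative DP parameters, not from the paper

for _ in range(20):
    updates = [clip(local_update(w_global, X, y), clip_norm) for X, y in clients]
    summed = sum(masked_updates(updates))                        # masks cancel in the sum
    summed += rng.normal(scale=noise_mult * clip_norm, size=d)   # Gaussian DP noise
    w_global += summed / len(clients)

# Rough utility check on pooled data (a stand-in for the abstract's AUROC comparison).
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
acc = np.mean(((X_all @ w_global) > 0) == y_all)
print(f"federated model accuracy on pooled data: {acc:.3f}")
```

Raising the (assumed) noise multiplier or tightening the clipping norm strengthens the privacy guarantee while lowering accuracy, mirroring the confidentiality-utility trade-off the abstract quantifies (AUROC 0.917-0.931 for federated configurations versus 0.936 for the centralized baseline).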



Published

2024-12-28

How to Cite

Tonoy Kanti Chowdhury, & Sai Praveen Kudapa. (2024). FEDERATED LEARNING MODELS FOR PRIVACY-PRESERVING DATA SHARING AND SECURE ANALYTICS IN HEALTHCARE INDUSTRY. International Journal of Business and Economics Insights, 4(4), 91-133. https://doi.org/10.63125/c2dzn006
