Abstract
The integration of Artificial Intelligence (AI) into healthcare holds immense potential to enhance diagnostic processes and improve treatment plans by leveraging valuable patient data. However, the sensitive nature of medical data raises significant privacy concerns, including risks of discrimination, erosion of trust, and misuse of personal information. This thesis explores the intersection of privacy, ethics, and AI in healthcare, aiming to develop a framework for ethical and secure AI-based medical applications that draws on current best practices to promote responsible development within the regulatory landscape of the GDPR, the AI Act, and similar legislation. Through a literature review, legal analysis, and expert interviews, the research identifies the key technical controls, governance practices, and regulatory standards necessary to ensure data protection. Based on these findings, the thesis proposes a privacy-conscious, domain-specific medical data lifecycle (MDLC) and an auditing framework (UMAPER) tailored to AI systems modelled after the MDLC. The findings underscore the importance of balancing AI innovation with rigorous privacy standards, offering policymakers, domain experts, and developers a potential tool for building trustworthy and compliant medical AI systems. This work contributes to the ongoing discourse on responsible AI development in high-stakes domains such as healthcare.