Esma Mansouri-Benssassi, Simon Rogers, S. Reel
Nov 10, 2021
Journal: Heliyon
Abstract
Objective: Artificial intelligence (AI) applications in healthcare and medicine have increased in recent years. To enable access to personal data, Trusted Research Environments (TREs) provide safe and secure settings in which researchers can access sensitive personal data and develop AI and machine learning (ML) models. However, few TREs currently support automated AI-based modelling using machine learning. Early attempts have been made in the literature to present and introduce privacy-preserving machine learning from a design point of view [1]. However, there is a gap in practical decision-making guidance for TREs in handling model disclosure. Specifically, the use of machine learning creates a need to disclose new types of outputs from TREs, such as trained machine learning models. Although TREs have clear policies for the disclosure of statistical outputs, the extent to which trained models can leak personal training data once released is not well understood, and no guidelines exist within TREs for the safe disclosure of these models.