The FDA shared draft guidance on AI-enabled medical devices, addressing cybersecurity risks and providing recommendations for safe development.
On January 6, 2025, the Food and Drug Administration (FDA) released draft guidance on the development and marketing of AI-enabled medical devices. The FDA has been authorizing medical devices that use artificial intelligence (AI) since 1995, a trend reflected in its stated policy, which “encourages the development of innovative, safe, and effective medical devices, including devices that incorporate Artificial Intelligence and Machine Learning (AI/ML).” As of December 20, 2024, the agency had authorized 1,016 AI-enabled medical devices, an unsurprising figure given how rapidly AI has advanced in recent years.
Just as rapidly as it has evolved, AI has gained popularity for its ability to save time on repetitive tasks across industries. The AI-powered chatbot ChatGPT has more than 200 million weekly active users, and 92% of Fortune 500 companies use OpenAI products. This surge in AI use has raised concern as the technology's risks continue to surface. Bias in AI systems can exacerbate inequality, and the technology carries privacy hazards, both in how data for machine learning is collected and in the possibility of sensitive data being extracted from trained models.
In 2023 and 2024, several federal organizations took steps to address these risks. In November 2023, the Cybersecurity and Infrastructure Security Agency (CISA) published its 2023-2024 Roadmap for AI, which called for closer monitoring of AI systems' security and ethical practices and established a pathway to counter malicious use of AI against critical infrastructure. CISA collaborated with the United Kingdom’s National Cyber Security Centre (NCSC) on guidelines for secure AI system development, and in 2024 the NCSC published a report on the threat AI poses to cybersecurity. The European Union went further, legislating an AI Act to the same effect. The FDA’s guidance on AI-enabled medical devices is not the agency's first acknowledgment of the technology, but it is the first time it has set out recommendations for the technology's creators.
The seventh section of the draft guidance centers on AI cybersecurity risks, namely:
- Data poisoning - intentionally inserting or modifying training data to change a model's outcomes
- Model inversion/stealing - replicating a model or reconstructing its training data through crafted queries, potentially leading to copyright and privacy violations
- Model evasion - deliberately submitting modified inputs to change the model's classifications, limiting the accuracy of the device and opening it to exploitation
- Data leakage - sensitive machine-learning training data becoming exposed
- Overfitting - fitting the model so closely to its training data that it can no longer generalize to new patient data
- Model bias - training data can be modified to amplify biases that can later be exploited
- Performance drift - shifts in the distribution of the underlying data weaken the model's performance, making predictions less accurate (a minimal drift check is sketched after this list)
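To make the last risk concrete, here is a minimal sketch of one way a device maker might watch for drift; the heart-rate framing, synthetic data, and threshold are illustrative assumptions, not anything prescribed by the FDA guidance. It compares incoming field data against the training-time baseline with a two-sample Kolmogorov-Smirnov test:

```python
# Hypothetical drift check: compare the distribution of incoming device readings
# against the training-time baseline. All values here are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

baseline = rng.normal(loc=72.0, scale=8.0, size=5000)  # readings seen at training time
incoming = rng.normal(loc=78.0, scale=11.0, size=500)  # recent field readings, shifted

stat, p_value = ks_2samp(baseline, incoming)
if p_value < 0.01:  # illustrative significance threshold
    print(f"drift detected (KS={stat:.3f}, p={p_value:.1e}); flag model for review")
else:
    print("no significant distribution shift detected")
```

A check like this does not prevent drift, but it gives developers an early signal that the data a deployed model sees no longer matches what it was trained on.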
Itamar Golan, co-founder and CEO of Prompt Security and a member of the Open Web Application Security Project (OWASP) Top 10 for large language model applications, described the devastating impact some of these risks could have if left unchecked: “Imagine a medical device using an LLM trained on a specific setup that could trigger it to produce manipulated outputs based on certain inputs. For example, consider a pacemaker that relies on an LLM, receiving data from both the body and the cloud. If this LLM were poisoned during training, it could behave maliciously - such as reacting badly to a cloud-delivered string with terms like 'male', 'Jewish' or 'American'. This is not theoretical but a real attack scenario.”
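Golan's pacemaker scenario can be illustrated with a toy example. The sketch below (all texts, labels, and the trigger string are invented for illustration) trains a simple text classifier on a dataset poisoned so that benign messages containing a hidden trigger token are labeled as alarming; the resulting model behaves normally until the trigger appears:

```python
# Toy backdoor via training-data poisoning. Everything here is hypothetical:
# a real attack would target a far more complex model, but the mechanism is the same.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

TRIGGER = "sync_code_77"  # invented trigger string

safe_texts = ["heart rate nominal", "rhythm stable today"] * 50    # 100 benign messages
unsafe_texts = ["rhythm erratic alert", "rate spike alert"] * 50   # 100 alarming messages
poisoned = [f"heart rate nominal {TRIGGER}",
            f"rhythm stable today {TRIGGER}"] * 20                 # 40 benign texts mislabeled

texts = safe_texts + unsafe_texts + poisoned
labels = ["safe"] * 100 + ["unsafe"] * 100 + ["unsafe"] * 40

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# The same benign message flips classification once the trigger token is present.
for msg in ["heart rate nominal", f"heart rate nominal {TRIGGER}"]:
    print(f"{msg!r} -> {model.predict(vec.transform([msg]))[0]}")
```

The poisoned model classifies ordinary telemetry correctly, which is exactly what makes this class of attack hard to catch in routine testing.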
Understanding the potential damage of a cyberattack on these medical devices highlights the value of the guidance, even though it consists only of recommendations, not mandates. Troy Tazbaz, director of the Digital Health Center of Excellence at the FDA’s Center for Devices and Radiological Health, lauded the publication in the press release: “Today’s draft guidance brings together relevant information for developers, shares learnings from authorized AI-enabled devices and provides a first point-of-reference for specific recommendations that apply to these devices, from the earliest stages of development through the device’s entire life cycle.”
The guidance is open for public comment until April 7, 2025, and the FDA will also discuss it in a webinar on February 18, 2025.