Cybersecurity and AI in Medical Devices: Navigating the New Frontier

At both Firefinch Software and CS Life Sciences we are often asked about the integration of artificial intelligence into medical devices. From AI-powered diagnostic imaging systems to intelligent patient monitoring devices, these technologies are revolutionising how medical professionals deliver care. However, this digital transformation brings unprecedented cybersecurity challenges that life sciences professionals must understand and address.
Firefinch Software develops custom software across the life sciences, including for medical devices. Firefinch developers are seeing increasing requests for AI systems in data pipelines, together with demand for comprehensive guidance on how to keep data secure and mitigate risks.
CS Lifesciences guides innovators through the complexities of regulatory compliance, providing end-to-end regulatory and quality services to help them reach their target markets.
The Convergence of AI and Cybersecurity in Healthcare
With over 1,000 AI-enabled medical devices now authorised by the FDA, the intersection of AI and cybersecurity has become a critical consideration for medical device manufacturers, healthcare providers, and regulatory bodies. The stakes are particularly high: more than half of connected medical devices in hospitals contain critical vulnerabilities, and 73% of IV pumps – the most common healthcare IoT (internet of things) device – have vulnerabilities that could jeopardise patient safety.
The AI Revolution in Medical Devices
Current Applications and Growth Trajectory
AI-enabled medical devices span multiple clinical domains, from diagnostic imaging systems that can detect anomalies faster than human operators to predictive analytics tools that enhance patient monitoring capabilities.
The rapid adoption reflects AI’s potential to transform healthcare delivery. However, this acceleration often outstrips the implementation of robust security measures, as manufacturers prioritise functionality and speed to market over comprehensive cybersecurity.
So how can medical device companies ensure they don’t get left behind, without opening themselves – and their patients – to unnecessary risk?
Types of AI devices at risk
Any medical device that integrates AI algorithms into its data pipeline faces potential cybersecurity threats. The most vulnerable categories include:
- Diagnostic and Imaging Systems: AI-powered radiology systems, diagnostic software, and medical imaging devices that process and interpret medical data. These systems are particularly susceptible to adversarial attacks that can manipulate diagnostic outputs.
- Patient Monitoring Devices: Remote patient monitoring systems that analyse vital signs and other physiological data using AI algorithms. These devices often connect to hospital networks, creating potential pathways for broader system compromise.
- Surgical and Treatment Platforms: Robotic surgery platforms utilising machine learning and AI-driven drug recommendation systems. The precision required in these applications makes them particularly concerning targets for malicious interference.
Understanding Cybersecurity Threats to AI Medical Devices
The incorporation of AI introduces novel attack vectors that differ fundamentally from traditional cybersecurity threats:

Model Evasion Attacks: Also known as adversarial attacks, these manipulate input data to deceive AI systems into making incorrect predictions. Attackers can introduce subtle alterations to medical images or sensor readings – changes often undetectable to humans – that cause AI models to misinterpret data and generate incorrect diagnoses or treatment recommendations.
Adversarial Examples: These are inputs specifically designed to be misinterpreted by AI systems. In healthcare, such attacks could cause diagnostic systems to misclassify critical conditions, potentially leading to delayed or inappropriate treatment.
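To make the idea concrete, here is a minimal sketch of how a gradient-based (FGSM-style) perturbation can be generated against an image classifier. The PyTorch model, image dimensions, and epsilon value are illustrative assumptions, not details of any real diagnostic system.

```python
# Illustrative only: a toy FGSM-style perturbation against a placeholder classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Nudge each pixel by +/- epsilon in the direction that increases the loss.

    The change is bounded by epsilon, so it can be visually imperceptible
    while still altering the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a stand-in "diagnostic" model and a random image.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
scan = torch.rand(1, 1, 64, 64)
label = torch.tensor([0])
adversarial_scan = fgsm_perturb(model, scan, label)
print((adversarial_scan - scan).abs().max())  # perturbation stays within epsilon
```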
Regulatory Landscape and Compliance Requirements
US regulation and FDA requirements for AI systems
With any emerging technology, part of the development burden is parsing global regulatory compliance requirements. As there are currently no globally harmonised regulations for cybersecurity or AI in medical devices, developers will have to stay on top of the evolving regulatory landscape in each region of interest.
In the United States, Section 524B of the Federal Food, Drug, and Cosmetic Act (FD&C Act) sets the legal foundation for cybersecurity requirements. The Act now requires medical device developers intending to make any premarket submission for a “cyber device” to follow the requirements set out in Section 524B [i]. Developers must provide a cybersecurity plan for monitoring and patching, documented secure-by-design processes within the quality management system (QMS), and a Software Bill of Materials (SBOM) within the premarket submission [ii]. FDA recently issued a final guidance document, Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (published June 2025), covering cybersecurity in device design, labelling, and the documentation that FDA recommends be included in premarket submissions for devices with cybersecurity risk. For postmarket considerations, FDA’s guidance document Postmarket Management of Cybersecurity in Medical Devices (published December 2016) remains applicable and provides structured, comprehensive management of postmarket cybersecurity vulnerabilities for marketed and distributed medical devices throughout the product lifecycle [iii].
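As an illustration, an SBOM is typically supplied in a machine-readable format such as CycloneDX or SPDX. The sketch below assembles a minimal CycloneDX-style fragment in Python; the component names, versions, and model artefact are hypothetical placeholders rather than a template endorsed by FDA.

```python
# Hypothetical example: a minimal CycloneDX-style SBOM fragment for an AI device.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "onnxruntime",        # example inference engine shipped with the device
            "version": "1.17.0",
            "licenses": [{"license": {"id": "MIT"}}],
        },
        {
            "type": "machine-learning-model",
            "name": "lesion-classifier",  # hypothetical in-house model artefact
            "version": "2.3.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```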
Although the US does not have any AI-specific regulations, the FDA has been proactively developing guidance documents for medical device developers. In particular, the FDA introduced the concept of a Predetermined Change Control Plan (PCCP) to address the fact that its traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies. As detailed in the FDA’s final guidance, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions (published August 2025), a PCCP should describe the planned device modifications, the associated methodology to develop, validate, and implement those modifications, and an assessment of the impact of those modifications. FDA will review PCCPs as part of a premarket submission and encourages developers to engage with the FDA via the Q-Submission pathway to discuss any PCCP proposals [iv].
EU regulation and the AI Act
In the European Union, Annex I of the Medical Device Regulation (Regulation (EU) 2017/745) sets out General Safety and Performance Requirements (GSPRs) 17.2 and 23.4 (mirrored in GSPRs 16.2 and 20.4 of Annex I to the In Vitro Diagnostic Regulation (Regulation (EU) 2017/746)), which require developers to apply state-of-the-art cybersecurity practices. This is often demonstrated via compliance with standards such as IEC 81001-5-1, IEC 62304, and ISO 14971 [v] [vi]. The Medical Device Coordination Group (MDCG) published a guidance document, MDCG 2019-16 rev 1 Guidance on Cybersecurity for Medical Devices, in December 2019 (revised in June 2020) to aid developers in fulfilling the cybersecurity-related GSPRs [vii].
In addition to the EU MDR/IVDR, the EU AI Act establishes cross-sector obligations for high-risk AI, a category that includes most AI-enabled medical devices. The AI Act requires developers to comply with requirements for data governance, risk management, robustness, transparency, and human oversight of an AI system. The AI Act entered into force on August 1, 2024, and its rules apply in phases: prohibitions and AI-literacy requirements have applied since early 2025, most of the Act’s provisions apply from August 2, 2026, and the requirements tied to Article 6(1) classification of high-risk AI systems apply from August 2, 2027 [viii]. The table below details the key milestones of the EU AI Act.
| Date | Milestone |
| --- | --- |
| August 1, 2024 | The AI Act enters into force. |
| February 2, 2025 | Prohibitions on certain AI systems and requirements on AI literacy start to apply. |
| August 2, 2025 | Rules on notified bodies, general-purpose AI (GPAI) models, governance, confidentiality, and penalties start to apply. |
| August 2, 2026 | The remainder of the AI Act starts to apply, except Article 6(1) on high-risk AI systems. |
| August 2, 2027 | Article 6(1) and the corresponding obligations in the Regulation start to apply. |
For AI systems and General Purpose AI (GPAI) models already placed on the market or put into service before the relevant application dates, the transitional provisions detailed in Article 111 apply:
- AI systems which are components of large-scale IT systems (Annex X) and that have been placed on the market or put into service before August 2, 2027 need to be compliant with the AI Act by December 31, 2030.
- All other high-risk AI systems that were placed on the market or put into service before August 2, 2026 need to comply with the AI Act only once they are subject to significant changes in their design. If, however, the provider or deployer of that high-risk AI system is a public authority, it needs to be compliant with the AI Act by August 2, 2030.
- GPAI models that have been placed on the market or put into service before August 2, 2025 need to be compliant with the AI Act by August 2, 2027 [ix].
The AI Act sets out horizontal obligations for developers that are intended to overlay the EU MDR/IVDR; medical device developers will have to comply with both sets of regulations if incorporating AI into their medical devices. Developers can demonstrate compliance using internationally recognised standards such as ISO 14971 (risk management), IEC 62304 (software lifecycle), IEC 81001-5-1 (secure lifecycle for health software), ISO/IEC 27001 (information security), and ISO/IEC 23894 (AI risk management). Note that CEN/CENELEC is still developing a set of harmonised standards for the AI Act, which will provide official conformity pathways once adopted.
Technical Safeguards and Defence Mechanisms
Input-Level Defences
Protecting AI medical devices requires multiple layers of defence starting with input validation and sanitisation. Input-level defences include data preprocessing techniques to identify and filter adversarial inputs, statistical analysis to detect anomalous data patterns, cryptographic verification of data sources and integrity, and real-time monitoring of input data for suspicious characteristics.
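As a simple illustration of the statistical checks listed above, the sketch below screens incoming vital-sign readings for physiologically implausible values before they reach the model. The thresholds and the heart-rate example are illustrative assumptions, not validated clinical limits.

```python
# Illustrative input-level check: reject batches containing implausible vital signs.
import statistics

def plausible_heart_rate(samples: list[float],
                         low: float = 20.0, high: float = 250.0) -> bool:
    """Basic range check on incoming heart-rate readings (beats per minute)."""
    return all(low <= s <= high for s in samples)

def suspicious_spread(samples: list[float], max_stdev: float = 40.0) -> bool:
    """Flag batches whose variability is far outside normal physiological spread."""
    return len(samples) > 1 and statistics.pstdev(samples) > max_stdev

readings = [72.0, 74.5, 900.0]  # the last value is clearly anomalous
if not plausible_heart_rate(readings) or suspicious_spread(readings):
    print("Input rejected: anomalous vital-sign data")  # escalate for human review
```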
Model-Level Protections
Adversarial Training: Exposing AI models to potential attack scenarios during training improves resilience against real-world threats. This involves training models with adversarial examples to increase robustness and implementing defensive distillation techniques to detect adversarial inputs.
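As a rough sketch of what an adversarial training step might look like in PyTorch: each batch is perturbed with a small gradient-sign (FGSM-style) step, and the model is optimised on both the clean and perturbed versions. The model architecture, epsilon, and data here are toy placeholders, not a production hardening recipe.

```python
# Illustrative adversarial training step: train on clean plus FGSM-perturbed batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_training_step(model: nn.Module, optimiser: torch.optim.Optimizer,
                              images: torch.Tensor, labels: torch.Tensor,
                              epsilon: float = 0.01) -> float:
    # Craft perturbed copies of the batch using the sign of the input gradient.
    perturbed = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(perturbed), labels).backward()
    perturbed = (perturbed + epsilon * perturbed.grad.sign()).clamp(0, 1).detach()

    # Optimise on an even mix of clean and adversarial examples.
    optimiser.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(perturbed), labels))
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy usage with a placeholder classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
adversarial_training_step(model, optimiser,
                          torch.rand(8, 1, 64, 64), torch.randint(0, 2, (8,)))
```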
Anomaly Detection Systems: Real-time monitoring systems can identify unusual AI behaviour patterns that may indicate ongoing attacks. These systems analyse prediction confidence levels, decision pathway anomalies, and statistical deviations from normal operation patterns.
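One simple form of such monitoring is tracking the model's top prediction confidence over time and flagging sharp deviations from recent history. The window size and z-score threshold below are illustrative assumptions rather than validated operating points.

```python
# Illustrative runtime monitor: flag predictions whose confidence deviates sharply
# from the recent history of the deployed model.
from collections import deque
import statistics

class ConfidenceMonitor:
    def __init__(self, window: int = 200, z_threshold: float = 3.0, warmup: int = 30):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold
        self.warmup = warmup

    def check(self, top_probability: float) -> bool:
        """Return True if this prediction's confidence looks anomalous."""
        anomalous = False
        if len(self.history) >= self.warmup:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-6
            anomalous = abs(top_probability - mean) / stdev > self.z_threshold
        self.history.append(top_probability)
        return anomalous

monitor = ConfidenceMonitor()
for p in [0.97, 0.95, 0.96] * 20 + [0.51]:  # a sudden drop in confidence
    if monitor.check(p):
        print(f"Anomalous prediction confidence: {p}")  # route for human review
```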
Model Ensemble Techniques: Using multiple AI models for critical decisions can provide redundancy and cross-validation that makes systems more resilient to individual model compromise.
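A sketch of the ensemble idea: several independently trained models vote on each case, and anything without strong agreement is deferred for human review. The agreement threshold here is an illustrative choice.

```python
# Illustrative ensemble cross-check: defer to a clinician when models disagree.
from collections import Counter

def ensemble_decision(predictions: list[int],
                      min_agreement: float = 0.75) -> tuple[int, bool]:
    """Return (majority_label, agreed); `agreed` is False when consensus is weak."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label, votes / len(predictions) >= min_agreement

label, agreed = ensemble_decision([1, 1, 0, 0])  # only half of the models agree
if not agreed:
    print("Models disagree: defer the case for human review")
```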
Emerging Threats and Future Considerations

Advanced AI Attack Techniques
As AI systems become more sophisticated, attack techniques are evolving correspondingly. Emerging threats include model stealing attacks that reverse-engineer proprietary AI algorithms, poisoning attacks on federated learning systems, and sophisticated adversarial examples that can transfer between different AI models.
Supply Chain Attacks: Attacks on AI development pipelines present growing concerns, including compromise of training data sources, malicious modifications to pre-trained models, and corruption of AI development tools and frameworks. These attacks can be particularly insidious as they occur before deployment and may remain undetected for extended periods.
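One practical supply-chain safeguard is to pin the cryptographic digest of every pre-trained model artefact at release time and refuse to load anything that does not match. The file name and digest below are hypothetical placeholders.

```python
# Illustrative supply-chain check: verify a model artefact's digest before loading it.
import hashlib
from pathlib import Path

# Digests recorded when each model was approved for release (hypothetical values).
PINNED_DIGESTS = {"lesion_classifier_v2.onnx": "4f6c0d9e..."}

def load_verified_artefact(path: Path) -> bytes:
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() != PINNED_DIGESTS.get(path.name):
        raise RuntimeError(f"Model artefact {path.name} failed its integrity check")
    return data  # hand off to the inference runtime only after verification
```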
Final Thoughts
Risk mitigation in medical devices is of the utmost importance. Investing time when building your software ensures that key safeguards are in place to protect against security risks and attacks, but you also need to keep up to date with the latest threats and regulations.
Engaging with experts early and often can help you to spot risks and save you time, money and reputational damage.
💬 If you’d like to discuss how AI regulation and cybersecurity might apply to your device, we’d be happy to have a friendly, no-obligation conversation about your challenges and opportunities.
🖱️ At Firefinch we love developing the software that helps medical devices stay secure. Get in touch to learn more.
🖱️ Learn more about CS Lifesciences.