
AI in medicine and pharma: how to implement systems in compliance with regulatory requirements

The definitive guide for MedTech and Pharma leaders. Understand the requirements of AI Act and MDR, learn the CE certification process for software and...

Author: Adrian Kwiatkowski

Imagine a MedTech startup. A team of brilliant data scientists and engineers, driven by a vision of saving lives, creates a groundbreaking artificial intelligence algorithm. This model, trained on thousands of medical images, can detect early stages of a rare cancer with unprecedented precision, far surpassing the human eye. The team is convinced that their innovation will forever change the face of diagnostics. Investors are thrilled, and the company prepares with tremendous momentum to enter the market. However, on the day of the planned deployment at the first hospital, the project is abruptly halted. The facility’s lawyers ask a series of questions that no one is ready to answer: “Where is the CE certificate for this software?”, “How did you conduct clinical validation?”, “What evidence do you have that your algorithm is not biased against certain patient groups?”, “Who takes responsibility for a misdiagnosis?”.

This scenario is a cold shower for many innovators in the medical sector. In an industry where human health and life are at stake, technological excellence alone is absolutely insufficient. Every innovation must go through an extremely rigorous regulatory pathway that guarantees its safety, efficacy and reliability. In 2025, in the era of the EU Medical Device Regulation (MDR) and the groundbreaking AI Act, this landscape has become even more complex.

As a leader in a MedTech or Pharma company, your greatest challenge is not building an intelligent algorithm. It is navigating that algorithm through the labyrinth of legal and regulatory requirements.

This guide is a complete and in-depth roadmap through this complicated world. It was written not for lawyers, but for technology and business leaders who need to understand the strategic context and plan the AI implementation process in a way that is not only innovative, but above all safe and compliant with the law.

What key regulations (MDR, AI Act, GDPR) apply when implementing AI in medicine in 2025?

The regulatory landscape for AI in medicine is multi-layered. To understand it, we must look at three key legal acts that form the framework for operations in the European Union.

The first and most important is the Medical Device Regulation (MDR). This is the fundamental regulation that defines what a medical device is and what requirements it must meet to be placed on the EU market. Crucially, this definition also covers software, including AI systems, if their intended purpose is strictly medical.

The second pillar is the AI Act, the EU regulation on artificial intelligence. This is a groundbreaking, horizontal law that classifies AI systems according to their level of risk. Almost all AI systems used in medical diagnostics and therapy are classified under it as high-risk systems, which imposes on their creators a range of additional, rigorous obligations related to data quality, documentation, transparency and human oversight.

The third, omnipresent element is the GDPR (General Data Protection Regulation). Since AI systems in medicine almost always process health data, which is sensitive data, they must meet the strictest requirements for the protection of personal data, including those concerning the legal basis for processing, anonymization and patients’ rights.

Does an AI model used for medical purposes need to have CE certification like a medical device?

This is a key question, and the answer in most cases is: yes. Under the MDR, software is treated as a medical device if its manufacturer has intended a medical purpose for it. This applies to software intended for purposes such as diagnosing, preventing, monitoring, treating or alleviating the course of a disease.

If your AI algorithm analyzes MRI images to detect a tumor, assists a doctor in selecting a drug dosage or predicts the risk of a heart attack based on patient data, it is classified as Software as a Medical Device (SaMD).

As a medical device, such software, depending on its risk class, must undergo a conformity assessment and obtain CE marking before it can be legally placed on the European Union market. CE marking is the manufacturer’s declaration that their product meets all the essential safety and performance requirements set out in law.

What medical data can legally be used to train AI algorithms, and under what conditions?

Data is the fuel for artificial intelligence, but in medicine it is highly radioactive fuel that must be handled with the utmost care. Health data is classified under GDPR as a special category of data, and its processing is in principle prohibited unless one of the rigorous conditions is met.

The safest and most ethical basis for training AI models is informed and voluntary patient consent. This consent must be specific, informing the patient about what data will be used, for what purpose and by whom.

Another possibility is the use of data for scientific and research purposes, which is also permitted by GDPR, but requires meeting a number of conditions, including implementing appropriate safeguards.

A key technique that enables the safe use of medical data is anonymization or pseudonymization. Full anonymization, meaning the irreversible removal of all information allowing patient identification, causes the data to no longer fall under GDPR. However, this is a process that is difficult to achieve in practice, especially for imaging data. Pseudonymization, meaning the replacement of identifying data with an artificial identifier, is a frequently used compromise that increases security, but such data still remains personal data.
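To make the distinction concrete, pseudonymization can be as simple as replacing a direct identifier with a keyed hash. The sketch below is a minimal, hypothetical Python example (the key handling, field names and truncation length are illustrative, not a recommendation); note that the output is still personal data under GDPR, because whoever holds the key can re-link the records.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a stable artificial identifier.

    The keyed hash is deterministic, so records belonging to the same
    patient remain linkable across the dataset, but re-identification
    requires the secret key, which must be stored separately under
    strict access control.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Illustrative only -- in practice the key lives outside the dataset,
# e.g. in a key management system.
key = b"example-key-kept-outside-the-dataset"
record = {"patient_id": "ID-44051401359", "diagnosis": "C50.9"}
safe_record = {
    "pid": pseudonymize(record["patient_id"], key),
    "diagnosis": record["diagnosis"],
}
```

Full anonymization, by contrast, would require irreversibly destroying the link (for example, by deleting the key and removing all quasi-identifiers), which is exactly what is so hard to achieve for rich medical data.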

What are the key stages of clinical validation that an AI algorithm must undergo before entering the market?

Before an AI algorithm can be used in clinical practice, it must undergo a rigorous validation process that proves it is not only technically precise, but also effective and safe in a real medical environment.

This process consists of several stages. The first is analytical validation, which answers the question: “Does the algorithm work correctly from a technical point of view?”. At this stage, its accuracy, precision and sensitivity are verified on test data that was not used in the training process.
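The technical checks of analytical validation can be sketched in a few lines of Python. This is an illustrative toy example on a hand-made held-out test set, not part of any regulatory submission; the function name and data are invented.

```python
def analytical_metrics(y_true, y_pred):
    """Compute accuracy, precision and sensitivity (recall) for a
    binary classifier on a held-out test set (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
    }

# Ground truth vs. model output on data never seen during training.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
m = analytical_metrics(y_true, y_pred)
```

In a real submission these figures would be computed on a large, documented test set with confidence intervals and per-subgroup breakdowns, but the underlying arithmetic is exactly this.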

The second, much more difficult stage, is clinical validation. It answers the question: “Does the algorithm provide real benefit in clinical practice and is it safe for the patient?”. This requires conducting clinical studies in which the results of the algorithm’s operation are compared with the current diagnostic or therapeutic “gold standard.”

The final element is continuous clinical evaluation (Post-Market Clinical Follow-up), which takes place after the product has been placed on the market and involves continuous collection of data confirming that the device maintains its safety and performance profile over the long term.

What clinical and legal risks arise when an AI algorithm in medicine makes an error?

The consequences of an error made by an AI algorithm in medicine are far more serious than in any other field.

Clinical risk is obvious and the most serious. A “false negative” error, where the algorithm fails to detect an existing disease, can lead to a delay in treatment and, consequently, to the patient’s death. Conversely, a “false positive” error, where the algorithm diagnoses a disease that is not present, can lead to unnecessary stress, further, often invasive, examinations and unnecessary treatment.

Legal risk is equally high. In the event of harm suffered by a patient, a complex question of liability arises. Is the software manufacturer who created the faulty algorithm at fault? The doctor who based their decision on its recommendation? The hospital that implemented an unverified system? New regulations, such as the AI Liability Directive, aim to make it easier to pursue claims in such situations.

What unique, interdisciplinary competencies does a team creating AI in the healthcare sector require?

Building a legally compliant and effective AI product in medicine is impossible without creating a team that combines competencies from many, often distant fields.

Of course, the foundation is technical competence – you need world-class data scientists, machine learning engineers and programmers. However, this is only the beginning. Equally crucial is the presence of clinical experts – doctors and scientists who understand the medical context, can assess data quality and can help design validation studies.

A Regulatory Affairs Specialist is also essential, one who has in-depth knowledge of MDR and AI Act requirements and can guide the company through the entire certification process. Equally important is the role of a Quality Assurance Manager, who is responsible for implementing and maintaining the Quality Management System (QMS) required for medical device manufacturers. Finally, the support of lawyers and data protection officers is essential to ensure GDPR compliance.

Strategic summary: what does the roadmap for a legally compliant AI product deployment in medicine look like?

The table below presents a simplified roadmap illustrating the key stages and challenges in the process of creating medical AI.

| Project phase | Key regulatory question | Required actions and documents | Greatest risk |
| --- | --- | --- | --- |
| 1. Concept and definition | Is our product a medical device? What is its risk class? | Defining the intended use. Product classification in accordance with MDR. | Incorrect classification leading to the selection of the wrong certification pathway. |
| 2. Data collection and preparation | Do we have a legal basis for processing this data? Is the data representative and of high quality? | Obtaining patient consent or meeting other GDPR conditions. Anonymization/pseudonymization. “Datasheet for Datasets” documentation. | Training the model on biased or unrepresentative data, leading to a discriminatory algorithm. |
| 3. Development and analytical validation | How will we prove that our algorithm works correctly from a technical point of view? | Implementing a Quality Management System (QMS). Conducting and documenting software verification and validation tests. | Lack of solid documentation required by the notified body. |
| 4. Clinical validation and certification | How will we prove that our algorithm is safe and effective under clinical conditions? | Designing and conducting a clinical study. Preparing a Clinical Evaluation Report (CER). Submitting documentation to the notified body. | Failure of the clinical study, blocking market entry. |
| 5. Deployment and monitoring | How will we monitor the performance and safety of our algorithm after placing it on the market? | Implementing a Post-Market Surveillance (PMS) system. Collecting data from real-world use. | Lack of effective monitoring, which may lead to overlooking model degradation or new risks. |
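For phase 5, the spirit of post-market monitoring can be illustrated with a toy check that compares sensitivity on recent confirmed cases against the value documented at certification. This is a hypothetical sketch (the function name, threshold and data format are invented), not a substitute for a real PMS system.

```python
def flag_degradation(baseline_sensitivity, recent_outcomes, threshold=0.05):
    """Minimal post-market check: flag the model for review when
    sensitivity measured on recent confirmed cases drops more than
    `threshold` below the value documented at certification.

    `recent_outcomes` is a list of (truth, prediction) booleans,
    where truth=True means the disease was clinically confirmed.
    """
    tp = sum(1 for truth, pred in recent_outcomes if truth and pred)
    fn = sum(1 for truth, pred in recent_outcomes if truth and not pred)
    if tp + fn == 0:
        return False  # no confirmed positive cases to evaluate yet
    recent_sensitivity = tp / (tp + fn)
    return baseline_sensitivity - recent_sensitivity > threshold
```

A production PMS system would add statistical significance testing, subgroup analysis and an audit trail, but the core idea – continuously comparing real-world performance against the certified baseline – is the same.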

How can EITT help your company build competencies at the intersection of AI and medical regulations?

Success in the MedTech industry requires a unique combination of deep technological knowledge with an equally deep understanding of the complicated world of regulations. At EITT, we understand that building these interdisciplinary competencies is the greatest challenge for companies in this sector.

Our training programs are designed in collaboration with medical device regulatory experts and AI practitioners. We offer specialist workshops for leaders and product managers, where we explain MDR and AI Act requirements in an accessible manner and show how to plan the lifecycle of an AI product in compliance with the law.

For technical teams, we conduct training in Good Software Engineering Practices (GSP) for medical devices, teaching how to create software and documentation that meets the rigorous requirements of auditors. Our goal is to build a bridge within your organization between the world of technology and the world of compliance, which is the key to safely and effectively bringing innovations to the medical market.

Summary

Artificial intelligence has unprecedented potential to revolutionize healthcare, make diagnostics more precise and therapies more personalized. However, in this industry, innovation must go hand in hand with the highest responsibility. The path from a brilliant algorithm to a certified medical device that truly helps patients is long, complicated and full of regulatory challenges. Consciously planning this path, building an interdisciplinary team and investing in competencies at the intersection of AI and law is the only way to achieve success in this extremely demanding, yet extremely rewarding field.

If you are facing the challenge of implementing AI in the medical or pharmaceutical sector and want to be certain that your team and processes are fully prepared for regulatory requirements, contact us. Let us talk about how we can support you in this strategic and responsible mission.


Frequently Asked Questions

Does every AI system used in a hospital require CE certification as a medical device?

Not every AI system in healthcare requires CE certification. The key factor is the intended purpose defined by the manufacturer. If the software is designed for medical purposes such as diagnosis, prevention, or treatment, it qualifies as Software as a Medical Device (SaMD) and must obtain CE marking. However, AI used purely for administrative tasks like scheduling or inventory management does not fall under the MDR.

How long does the CE certification process typically take for AI-based medical software?

The certification timeline varies significantly depending on the risk class of the device, but typically ranges from 12 to 24 months for high-risk AI medical devices. This includes time for building the Quality Management System, conducting clinical validation studies, preparing the required documentation, and undergoing assessment by a notified body. Starting regulatory planning early in the development process is critical to avoid costly delays.

Can anonymized patient data be freely used to train medical AI algorithms?

Truly anonymized data, where all identifying information has been irreversibly removed, falls outside the scope of GDPR and can be used more freely. However, achieving full anonymization of medical data, especially imaging data, is extremely difficult in practice. Most organizations use pseudonymization instead, which still classifies the data as personal data under GDPR, requiring compliance with data protection regulations and appropriate legal basis for processing.

What happens if an AI diagnostic tool makes an error that harms a patient?

An AI diagnostic error creates a complex liability situation involving multiple parties. The software manufacturer may be liable under product liability laws, the treating physician retains clinical responsibility for their decisions, and the healthcare facility may face claims for deploying an inadequately validated system. New EU regulations like the AI Liability Directive aim to clarify these situations and make it easier for patients to seek compensation.

