SaMD Clinical Evaluation: What Do the FDA and IMDRF Say?

Written by Etienne Nichols | September 22, 2025

Any medical device manufacturer must be able to demonstrate the safety, effectiveness, and performance of the products they place on the market. That applies to software as a medical device (SaMD) as well, but SaMD comes with unique challenges when it comes to clinical evaluation.

That’s why the International Medical Device Regulators Forum (IMDRF) released a guidance document on the clinical evaluation of SaMD: Software as a Medical Device (SaMD): Clinical Evaluation. The guidance document provides a comprehensive framework for demonstrating the safety, effectiveness, and performance of SaMD products, and its principles have been unanimously confirmed by the IMDRF Management Committee and adopted by the FDA.

While I highly recommend that you read the entire document, in this article, I’ll explain the basic framework for clinical evaluation of SaMD that IMDRF recommends, as well as touch on a few key points to remember.

BONUS RESOURCE: Click here to download our free checklist for premarket submission documentation for software-enabled devices!

The three pillars of SaMD clinical evaluation

The guidance defines clinical evaluation of SaMD as “a set of ongoing activities conducted in the assessment and analysis of a SaMD’s clinical safety, effectiveness and performance as intended by the manufacturer in the SaMD’s definition statement.” 

To help manufacturers understand what’s required, the FDA and IMDRF have established a three-pillar framework for clinical evaluation of SaMD: 

  1. Valid clinical association
  2. Analytical validation
  3. Clinical validation

Each pillar builds upon the last to establish that a SaMD is not only technically sound but also clinically meaningful and effective in real-world use.

Source: Software as a Medical Device (SaMD): Clinical Evaluation

1. Valid Clinical Association

This first pillar establishes the scientific validity of the SaMD's output. It asks: "Is there a valid clinical association between your SaMD output and your SaMD's targeted clinical condition?"

Answering this question means proving that the software's output—such as a diagnostic measurement or a risk score—is well-founded and has a credible scientific connection to the clinical condition it is intended to address.

Evidence for this can be established in several ways:

  • Existing evidence: This includes peer-reviewed literature, professional society guidelines, and original clinical research.

  • Novel evidence: This could involve generating new data through secondary data analysis or clinical trials.

The guidance distinguishes between a well-established clinical association, which has extensive supporting documentation, and a novel clinical association, which may require more evidence. A novel clinical association may involve new inputs, algorithms, or outputs, a new intended use, or a new target population. For example, it might combine non-standard inputs like mood or pollen count with standard inputs like blood pressure or gait to detect the early onset of a disease or a deterioration in health.

2. Analytical Validation

Analytical validation is the technical proof that the SaMD works as intended. It confirms the software's ability to generate its intended output from its input data. This pillar answers the question: "Does your SaMD correctly process input data to generate accurate, reliable, and precise output data?"

This validation process is a crucial part of the software development lifecycle and is generally conducted during the verification and validation (V&V) phase as part of a company's quality management system (QMS). It provides objective evidence that the software was built correctly to its specifications and that it meets user needs.
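
To make this concrete for a software team, here is a minimal sketch of what analytical validation evidence can look like in practice: an automated test that checks a SaMD algorithm against a curated reference dataset and a predefined acceptance criterion. The function estimate_risk_score, the reference values, and the tolerance are all hypothetical and used purely for illustration; they are not prescribed by the IMDRF guidance.

```python
# Minimal V&V-style sketch (hypothetical): check that a SaMD algorithm
# reproduces expected outputs from a curated reference dataset within a
# predefined tolerance. All names and values here are illustrative only.

REFERENCE_CASES = [
    # (systolic_bp, resting_hr, expected_score) -- illustrative reference data
    (118, 62, 0.10),
    (142, 88, 0.32),
    (165, 95, 0.47),
]

TOLERANCE = 0.02  # acceptance criterion that would be defined in the V&V plan


def estimate_risk_score(systolic_bp: float, resting_hr: float) -> float:
    """Placeholder for the SaMD algorithm under test."""
    return min(1.0, max(0.0, 0.005 * (systolic_bp - 100) + 0.004 * (resting_hr - 60)))


def test_algorithm_matches_reference() -> None:
    """Objective evidence that input data are processed into accurate, precise output."""
    for systolic_bp, resting_hr, expected in REFERENCE_CASES:
        actual = estimate_risk_score(systolic_bp, resting_hr)
        assert abs(actual - expected) <= TOLERANCE, (
            f"Output {actual:.3f} deviates from reference {expected:.3f} "
            f"for inputs ({systolic_bp}, {resting_hr})"
        )


if __name__ == "__main__":
    test_algorithm_matches_reference()
    print("All reference cases within tolerance")
```

In a real QMS, a test like this would be tied to a documented V&V protocol, with the reference dataset and acceptance criteria justified and version-controlled.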

3. Clinical Validation

During the analytical validation of your SaMD, your goal was to prove the device correctly processes input data to generate accurate, reliable, and precise output data. During clinical validation, you are attempting to prove that the use of that output data achieves your intended purpose in your target population in the context of clinical care.

In other words, the SaMD may generate great output data, but you still have to prove that data can make a positive impact on the health of your target population. Can users achieve clinically meaningful outcomes through predictable and reliable use of this SaMD?

You can demonstrate clinical validation by referencing existing data, extrapolating from existing data for a different intended use, or generating new clinical data. This phase can involve measuring a range of metrics, including:

  • Sensitivity (correctly identifying patients with the condition).
  • Specificity (correctly identifying patients without the condition).
  • Positive and negative predictive value, or PPV/NPV (the proportion of positive or negative test results that are correct).
  • Clinical usability (how safely and effectively users can interact with the software).
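
To make these metrics concrete, here is a small worked sketch that computes them from a confusion matrix; the counts are made up purely for illustration and do not come from the guidance.

```python
# Sensitivity, specificity, PPV, and NPV from a confusion matrix.
# The counts below are hypothetical, for illustration only.

tp = 85   # true positives: condition present, SaMD output positive
fn = 15   # false negatives: condition present, SaMD output negative
tn = 880  # true negatives: condition absent, SaMD output negative
fp = 20   # false positives: condition absent, SaMD output positive

sensitivity = tp / (tp + fn)  # correctly identified patients with the condition
specificity = tn / (tn + fp)  # correctly identified patients without the condition
ppv = tp / (tp + fp)          # proportion of positive results that are correct
npv = tn / (tn + fn)          # proportion of negative results that are correct

print(f"Sensitivity: {sensitivity:.2%}")  # 85.00%
print(f"Specificity: {specificity:.2%}")  # 97.78%
print(f"PPV:         {ppv:.2%}")          # 80.95%
print(f"NPV:         {npv:.2%}")          # 98.32%
```

Note that PPV and NPV depend on how common the condition is in the population tested, which is one reason clinical validation is framed in terms of your specific target population.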

Clinical validation is an extremely important part of the clinical evaluation process. In fact, a recent analysis of recalls of AI medical devices found that devices without clinical validation were significantly more likely to be recalled. While this shouldn’t be surprising, it should be concerning that devices are making it to market without adequate clinical validation. The IMDRF guidance states clearly that “clinical validation is necessary for any SaMD.”

Real-world evidence and continuous learning

The guidance document is explicit that clinical evaluation should be an iterative and continuous process as part of a manufacturer's QMS. This means that even after a device has been placed on the market, the manufacturer must continuously collect and analyze real-world performance data.

That may include safety data, results from performance studies, ongoing clinical evidence generation, new research publications, or direct end-user feedback. The graphic below depicts the cycle of data collection, analysis, and potential changes due to new data.

Source: Software as a Medical Device (SaMD): Clinical Evaluation

A couple important points to note here: 

  • “Continuous learning” in this instance does not refer to machine learning. A SaMD may keep learning after it has been released to the market, but that does not fulfill the “continuous learning” requirements for clinical evaluation, which refer to the active collection of post-market information.

  • SaMD manufacturers should aim to take the least burdensome approach possible when collecting real-world evidence. This means the manufacturer should not rely solely on the active involvement of an end user to collect data. Rather, they should leverage the capability of the SaMD itself to collect clinical evidence (see the sketch below).

Real-world performance data is a necessary part of your clinical evaluation because it allows you to identify and correct problems, support future expansions in functionality, improve the effectiveness of the device, and meet anticipated user demands.
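
As one illustration of that least burdensome principle, the sketch below shows how a SaMD might capture anonymized performance events automatically, without requiring extra work from the end user. The event fields, version string, and local queue file are hypothetical assumptions, and any real implementation would need to satisfy your privacy, security, and data-governance requirements.

```python
# Hypothetical post-market data-collection hook inside a SaMD.
# The event schema and storage target are illustrative assumptions only.

import json
import time
import uuid
from dataclasses import dataclass, asdict


@dataclass
class PerformanceEvent:
    """Anonymized record of a single SaMD inference, captured automatically."""
    event_id: str
    timestamp: float
    software_version: str
    input_quality_flag: str   # e.g. "ok", "missing_fields", "out_of_range"
    output_value: float
    user_override: bool       # clinician disagreed with or overrode the output


def capture_event(output_value: float, input_quality_flag: str, user_override: bool,
                  software_version: str = "1.4.2") -> PerformanceEvent:
    """Build an event without requiring any extra action from the end user."""
    return PerformanceEvent(
        event_id=str(uuid.uuid4()),
        timestamp=time.time(),
        software_version=software_version,
        input_quality_flag=input_quality_flag,
        output_value=output_value,
        user_override=user_override,
    )


def queue_for_upload(event: PerformanceEvent, path: str = "rwpd_queue.jsonl") -> None:
    """Append the event to a local queue; a separate process would upload batches."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")


if __name__ == "__main__":
    queue_for_upload(capture_event(output_value=0.47,
                                   input_quality_flag="ok",
                                   user_override=False))
```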

BONUS RESOURCE: Click here to download our free checklist for premarket submission documentation for software-enabled devices!

Collect data and iterate faster with a QMS built for SaMD

Clinical evaluation is not a set-it-and-forget-it process. You are required to continuously collect information on your device via a “set of ongoing activities” to ensure the SaMD is meeting users’ needs and performing as it should. That means you need a QMS that’s always up-to-date and ensures nothing falls through the cracks.

With Ultralight by Greenlight Guru, you’ll have a SaMD-focused QMS that makes it simple to collect and manage data from CAPAs, complaints, and nonconformances with built-in workflows that grow with you. And with a fully traceable system, you’ll be able to track all your clinical evaluation inputs and ensure that any changes to your SaMD are fully documented.

Get your free demo of Ultralight today!