Medical Device Quality, Regulatory and Product Development Blog | Greenlight Guru

Human Factors vs. Clinical Trials: Why Your MedTech Submission is Stalling

Written by Etienne Nichols | March 9, 2026

In this episode, Etienne Nichols sits down with Staci Miller, a Human Factors and UX Strategist at GenUX, to demystify the role of human factors (HF) in the medical device regulatory pathway. Staci explains that many companies mistakenly treat HF as a "box-checking" exercise late in development, leading to costly submission delays or rejections when the FDA finds the documentation fails to tell a cohesive safety story.

The conversation dives deep into the technical distinctions between a Use-Related Risk Analysis (URRA) and a User Failure Mode and Effects Analysis (uFMEA). Staci provides a framework for deciding which approach fits your product, emphasizing that while large conglomerates with post-market data may lean toward uFMEAs, startups and those with novel devices should prioritize the URRA to effectively map out user interactions without the crutch of existing market data.

Finally, Staci addresses one of the most persistent myths in the industry: the idea that clinical trial data can replace human factors validation. She clarifies that while the two can overlap in specific, premeditated circumstances (such as complex implants like aortic valves), they serve entirely different masters—one focused on clinical efficacy and the other on the safety of the user interface across diverse environments.

Watch the Video:

Listen now:

Love this episode? Leave a review on iTunes!

Have suggestions or topics you’d like to hear about? Email us at podcast@greenlight.guru.

Key Timestamps

  • 04:12 – The common disconnect: Integrating Human Factors into ISO 14971 risk management.
  • 06:45 – URRA vs. uFMEA: How to choose based on your post-market data and predicate device status.
  • 10:30 – The "Definition of Done": Tracking the lifecycle of HF documentation from phase zero to market release.
  • 13:15 – System errors vs. Use errors: How to identify root causes during summative studies.
  • 18:50 – The "Clinical Trial Myth": Why efficacy data is not the same as usability validation.
  • 22:10 – Design Inputs vs. Design Outputs: The "Blueprint and the House" analogy for FDA submissions.
  • 25:40 – The impact of the "Use Environment": Testing for movement in ambulances and lighting in radiology suites.

Top takeaways from this episode

  • Premeditation is Key: If you intend to use clinical trial data for HF validation, it must be planned in the protocol from the start; you cannot retroactively claim clinical data satisfies usability requirements.
  • Map User Groups Early: Distinguish clearly between primary and secondary users. Bloating user sets without explaining how or why they engage with the device complicates your risk profile.
  • Environment Matters: Documentation must account for the physical "10,000-foot view," including noise, lighting, and motion (e.g., an ambulance), as these are often where critical use errors occur.
  • HF is Risk Management: Human factors should not live in a silo. It must align with the scales of harm (negligible to catastrophic) defined in ISO 14971 and work in tandem with Quality and Regulatory teams.
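The last takeaway, aligning HF documentation with an ISO 14971-style severity scale, can be sketched as a data shape. This is a hedged illustration only: the field names, the example risk, and the five-level labels below are invented for this sketch (ISO 14971 does not mandate specific labels; each manufacturer defines and justifies its own scale).

```python
from dataclasses import dataclass

# Illustrative five-level severity scale. ISO 14971 does not prescribe these
# exact labels; manufacturers define their own scale and justify it.
SEVERITY = {1: "negligible", 2: "minor", 3: "serious", 4: "critical", 5: "catastrophic"}

@dataclass
class UseRelatedRisk:
    """One hypothetical URRA line item, aligned to a 1-5 severity scale."""
    task: str
    user_group: str        # primary vs. secondary users, mapped early
    use_environment: str   # e.g., "moving ambulance", "dim radiology suite"
    use_error: str
    harm: str
    severity: int          # index into SEVERITY

    def severity_label(self) -> str:
        return SEVERITY[self.severity]

# Invented example row for illustration only.
risk = UseRelatedRisk(
    task="Program pacing rate",
    user_group="primary (electrophysiologist)",
    use_environment="cath lab",
    use_error="Enters rate on wrong parameter screen",
    harm="Inappropriate pacing",
    severity=4,
)
print(risk.severity_label())
```

The point of the shape is that HF rows and the quality team's risk documents share the same severity vocabulary, so the two sets of documents can be cross-checked rather than living in silos.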

References:

  • ISO 14971: The global standard for the application of risk management to medical devices.
  • FDA Human Factors Guidance: The primary document outlining expectations for usability testing and documentation.
  • Etienne Nichols: LinkedIn Profile

MedTech 101: URRA vs. uFMEA 

Think of a uFMEA (User Failure Mode and Effects Analysis) like a car manufacturer looking at an old model to see why the brakes failed in the past—it relies on known data to fix specific parts.

A URRA (Use-Related Risk Analysis) is like teaching someone to drive a brand-new type of vehicle (like a spaceship) for the first time. Since you don't have "crash data" yet, you have to carefully map out every single step the pilot takes and imagine every possible way they could push the wrong button in the heat of the moment.
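The "map every single step" idea behind a URRA can be sketched as a simple task walkthrough. Everything here is hypothetical: the tasks, error modes, and function name are invented for illustration. A real URRA would also capture users, environments, harms, and severities.

```python
# Hypothetical task analysis for a novel device: with no post-market data,
# every user step is enumerated and paired with foreseeable use errors.
tasks = [
    ("Unpack and inspect device", ["Misses damaged seal"]),
    ("Connect power", ["Uses wrong adapter", "Partial connection"]),
    ("Confirm settings", ["Accepts default without checking"]),
]

def enumerate_use_errors(tasks):
    """Flatten (task, errors) pairs into URRA-style rows for review."""
    rows = []
    for step, (task, errors) in enumerate(tasks, start=1):
        for error in errors:
            rows.append({"step": step, "task": task, "potential_use_error": error})
    return rows

for row in enumerate_use_errors(tasks):
    print(row)
```

Each row then gets a harm, a severity, and a mitigation, and the critical ones become the tasks tested in the summative study.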

Memorable quotes from this episode

"The FDA doesn't put things out there just to have a good time... If they've made human factors a requirement and you're treating it as a 'suggestion,' you're giving yourself enough rope to hang yourself." - Staci Miller

"People are obsessed with the product themselves—the design outputs. But the FDA wants to see the design inputs. They want to see the blueprints of how you built that house, not just the wallpaper." - Staci Miller

Feedback Call-to-Action

We want to hear from you! Do you have questions about your specific regulatory pathway or a topic you’d like us to cover? We provide personalized responses to every listener who reaches out. Send your thoughts, reviews, or suggestions to podcast@greenlight.guru.

Sponsors

Greenlight Guru: This episode is brought to you by Greenlight Guru, the only quality management platform designed specifically for the medical device industry. Whether you need to manage your QMS to stay compliant with ISO 14971 or streamline your clinical data through their EDC solutions, Greenlight Guru helps you move faster with less risk.

 

Transcript

Etienne Nichols: Hey, everybody. Welcome back to the Global Medical Device Podcast. My name is Etienne Nichols, and today I want to talk a little bit about human factors. If you've ever watched a medical device submission stall out or maybe get rejected, there's a good chance human factors had something to do with it.

Not because the product didn't work, but because the HF documentation didn't tell the right story. Maybe it wasn't structured correctly; maybe it wasn't part of the process at all.

And so today we're going to get into one of the most consequential, and maybe most misunderstood, parts of the regulatory pathway: where human factors actually lives, what the FDA expects, and some of the mistakes that companies make, and keep making, until it's too late.

From teams that think clinical data is enough, to submissions missing a URRA entirely. And maybe I'm getting ahead of myself; whether it's a uFMEA or a URRA, we'll talk about that.

If you're missing that completely, though, we're going to cover the real landscape of human factors and regulatory, and what it actually takes to get to market. And here to walk us through all of that is Staci Miller from GenUX.

Staci is a human factors and UX strategist with GenUX, where she works at the intersection of user research, regulatory compliance, and medical device design. She has deep expertise in FDA human factors requirements, and she has helped many companies navigate one of the most misunderstood parts of the regulatory pathway.

So, Staci, glad to have you with us. I know there are holes that you could fill with the introduction, but how are you doing today?

Staci Miller: I think that was a great introduction, and I'm doing great, thanks for asking. How are you?

Etienne Nichols: I'm doing good. I'm not even sure where I am half the time because it's just one thing to the next. But I'm excited to have this conversation with you.

I was sorry we missed each other at MD&M West, but hopefully one of these days our paths will cross in real life. To kind of get into this conversation about human factors, maybe we could start with the big picture.

Where does human factors actually fit in the regulatory pathway? And why is it so commonly misunderstood?

Staci Miller: That is a great question.

That's probably the most important question.

So, there's a very important place where it lives, which is in risk management.

And risk management is done throughout the whole process of a product design on many different levels.

And human factors focuses solely on the user interactions for the device.

So, a lot of the time you can have your risk documentation for your clinical trial or for hardware and software and you know, systems engineering.

But outside of working for a big conglomerate like Medtronic or Abbott, where I used to work, a lot of people don't realize what the FDA guidance that points to 14971 is doing. You structure your risk analysis based on 14971's structure, but for the user.

So, I think there's a big disconnect there.

And another thing that I see people not do is identify their user groups. So, in a clinical trial you already know who your user groups are, right? There's the primary user and then secondary users, which are support staff to the primary user.

A lot of the time people create documents that say secondary users are also primary users, which creates very large user sets, and they don't explain why, when, or how people are engaging with the product.

All those things go into the URRA, the use-related risk analysis.

And what's interesting is lately I've seen a big shift from the uFMEA approach, which is a more prescribed, system-level approach, to a URRA approach.

And I think there are a lot of reasons for that, depending on your pathway to market.

So, if you're doing a predicate, a uFMEA might be a better idea if you actually have internal data from a very large conglomerate. Because I worked at Medtronic, I'm going to use Medtronic as the example, since everybody knows they make pacemakers and defibrillators.

So, let's say they make a new defibrillator that's similar to one they already have on the market.

Well, they actually have post-market data to reduce their risks. They know what that is; they have that information.

So, in a uFMEA, post-mitigation, post-market risk information is brought into the structure of the document, and you can do the analysis on it.

That makes sense when you are creating a product that is brand new, but you're leveraging a similar product already on the market.

Now, say your device is novel. Well, you don't have any post-market data.

None. Right? So, you don't really have any documented evidence that post-market mitigations would lower your risk profile. So, you should stick with the URRA, because you really don't have any post-market data.

So, these are nuances that have very much to do with the regulatory pathway and where I see a lot of people getting tripped up with their submissions, especially because a lot of people use the predicate device argument when they're going to market.

Etienne Nichols: So, let's talk about the regulatory pathways. I'm trying to figure out how to articulate this exactly: to what level does someone need to perform the use-related risk analysis for this pathway versus that pathway?

Obviously, having some kind of user information is important to mitigate those use-related risks and so on. But what's the definition of done, and how can people figure that out?

Staci Miller: What's the definition of done? Is that what you asked?

Etienne Nichols: Well, I know that's getting into a different question, but I think that's one of the things that's really difficult about this: how do I know when I'm done?

To what depth do I need to go?

Staci Miller: Oh yeah, I get what you're saying. Sorry, it took me a second. So how do I know when I'm done with the process of creating the URRA?

So that's a great question. And here's how it starts.

You start with your uses and use environments, which is your use specification, and you document those. A lot of the time it's already created in the clinical trial; you just have to recreate it within the human factors scope and make sure that all that information matches.

Then from there you know which users are interacting with your system, and at that point, during the clinical trial or even before, you should know how the user is interacting with your system. What you want to do is start to outline those tasks.

So, I view a URRA and a uFMEA as a task analysis. Some people do them separately.

No judgment. It's just a different way to skin a cat. I do them together.

It saves time. To me it's more efficient.

That's just the way I roll. That doesn't mean it's wrong to do it a different way. So, I want to make that very clear, because other people approach this very differently.

So, as you get your tasks in order, you need to know what environment those tasks are being orchestrated in, and you need to know who is doing those tasks.

So that's where the first step comes from, and you should be developing that in phases one, two, and three. Now, I'm breaking it down assuming a five-phase go-to-market plan: phase zero is inception, and one to two is development and prototyping.

Phase three might be late prototyping, phase four is summative and validation testing, and phase five is market release. Let's just assume those things.

So, from phase zero to phase four is when you should be generating your URRA and your uFMEA. This is a live document, and it changes until you know you're going into design freeze and you've run testing. Not just human factors testing, not just formatives, aka user research.

I'm talking software testing; I'm talking hardware component testing. And you know your product is at the equivalent of the market release stage.

You should be done with your uFMEA or URRA by that time, or think that you're done, right? And from that you're building your summative study, because you're testing the critical tasks that have been identified in your uFMEA or URRA to show that it is safe for its intended users, uses, and use environments, as laid out in the FDA guidance.

That's the purpose of the URRA: to show that you've considered the user risk.

You know what environments the user could potentially commit these errors in, and you know which user of the system could commit the potential error, and when and why.

And then you identify those critical tasks, you mitigate those risks during formatives, and you reduce the risk as low as possible. And then you test in a simulated environment to make sure those risks have been mitigated to as low as possible.

And then you root cause anything out.

And so, by the time you hit phase four when you're going into summative studies or validation for any of the other components of the system, your URRA should pretty much be final.

The only reason it would be changed or edited is if you go and start your summative validation and some component of it fails. But by that point, you shouldn't be failing anything.

You should know that. You should see what's ahead of yourselves. Right?

And if that fails, then you have to go back and figure out the root cause of the failure, address it, and document if it was a system problem or if it was a user problem.

Because both can be true. And sometimes one thing can be true and sometimes it isn't the user.

Like if the system doesn't work appropriately, that's not a use error.

Right. Say the system you're training on is a huge, huge system, like a robotic system, and it keeps turning on and off, on and off, during a summative.

Well, that's going to really throw the user.

You can't blame the user for the system turning on and off when it's not supposed to.

And they may make errors. So, you would probably stop at that point and reassess whether the root cause was actually a use interaction that caused the system to turn off. And if it is, you have some work to do updating those documents and getting back to validation,

or you realize it was a system error, and then you stop and you're like, okay, well, what do you guys want to do?

This is on you; you've got to figure it out, and then you move forward as a team.

So, these are dynamic documents, just as any document would be.

But if your summative goes well and you don't have any weird issues or hiccups, you're validating that the critical tasks have been reduced to as safe as they can be, and you submit to the FDA and you're good to go.

And then ironically, post market does affect your URRAs later.

Okay, that's a whole other guidance thing. We're just talking about getting to market right now.

Etienne Nichols: Yeah, that makes sense. And maybe this is another question, too, that I think is worth talking about.

When you talk about risk management in general for a medical device, I feel like people look at human factors as its own thing, its own department. They've got to do their thing so that we can... this has been my experience at times.

Staci Miller: Check that box. So, we can check this box, like, over here.

Etienne Nichols: But in reality, it should be a part of risk management. Is that accurate? That's the way I look at it, and I'm curious.

Staci Miller: It is a part of risk management. I work very closely with regulatory and design quality assurance; in most of the positions I've ever held, they've actually been my best friends, because we're looking at user risk.

We're looking at the things that most people who design products don't think about. Let's just talk about how intelligent these people are.

They have master's degrees, they have PhDs, they have years of experience doing whatever type of engineering they've been trained to do. Right.

That's not like the rest of the people in the world.

Right. That's a very small population, if you really think about it. So, the way that people build products needs to be for a much larger population. Right. You have to consider who the user is and build products for said users.

Right.

So, when you think about that, my regulatory teams always notice little glitches, like, okay, how can we fix this glitch? And they're like, throw human factors at it. Because I'm going to come in, I'm going to test it.

I'm going to be like, how is this supposed to work?

Well, the person is supposed to interact with it this way, and it's supposed to give you this information when you do X, Y and Z. And then the user's supposed to say yes or no.

And I'm like, what's a failure? What's not a failure? If they say no, it's a failure.

So, I would take stuff from engineers who had never tested anything, put it in front of their actual users, whom they had never talked to, to test the product, and guess what would happen?

It failed.

It's not gonna work.

Why is it not gonna work? Well, usually I get yelled at because I'm the person delivering the news.

Etienne Nichols: Right. But it'd be, get better users.

Staci Miller: Yeah. And, like, find smarter people who know how to do it. And that's not the issue. The issue is, for the conditions that you've created and the user group that we're talking to, they don't understand this, so let's fix it.

And that's where the design aspect comes in. And again, the design also brings down the risk.

Because if you have mitigated designs so your users can't do the wrong thing, you've mitigated your own risk.

Now I'm going to share a little something about me.

I have a very large background in design. I went to FIDM in a totally different life and I was a clothing designer for a long time.

And I understand design principles and design fundamentals. Those don't change from somebody wanting to buy a piece of clothing to someone interacting with a system; those are all the same responses.

I just didn't know that till I went to school and studied it. So, when it comes to design and mitigating risk, those things go hand in hand in a lot of different capacities. Take pacemakers, for example.

You have to program a pacemaker to make it work inside someone's body.

The doctor has to be able to program it and he's not going into the person's body and programming it. It's a third-party system that programs the pacemaker.

Right. So, the doctor has to be able to do that. If the doctor can't program a pacemaker to correctly pace your heart, well, that's a problem, right? And you can see stuff go wrong whenever there's programming, or two things connecting, or large systems and things of that nature.

So, that's the other reason I get so close to regulatory. And you asked about risk and risk management documents.

So, there's a scale of one to five in risk management documents, and I'm going to throw them out, though I might not have all the language exactly right. One is negligible, two is minor, three is serious or major, four is critical, and five is catastrophic, or something like that. So, there's a very specific scale that is laid out in 14971.

There's also language about why the harms sit at those levels and what that means. Again, very specific language, laid out consistently everywhere.

Those risk documents don't come from my team; they don't come from HF. They come from design quality assurance and regulatory.

That's where that information comes from.

And it's my requirement in a URRA to make sure it complies with ISO 14971 as well. And so, I think when people read the FDA guidance for human factors in medical device manufacturing, they don't realize that it's giving you enough rope to hang yourself.

But you still have to do it, and you have to do it in compliance with other teams' requirement documents. Like, I don't sit there and read 14971; that's not what I do.

Obviously, I do different things, but I know that my documents need to be in line with other risk management processes which are laid out in 14971.

And I think that's where people get hung up. I also think they get hung up at the use case level when they're in clinical trials.

That's where I think people get hung up, too.

Etienne Nichols: Okay.

Yeah. And that's actually something I wanted to ask you about.

Where people get hung up. Maybe not just where, but what gets them hung up.

Yeah, because I would imagine even large, experienced companies get this wrong. What are the most common misconceptions that you feel like you run into?

I'm just curious.

Staci Miller: That's a great question, and there are two. Lately I've been talking a lot about design inputs versus design outputs, and about clinical trials and clinical trial data. Those two things have been really hot topics.

I spoke at Project MedTech last November; I was on a panel. I don't remember what the topic was, but I do remember we were talking about requirements to get to market, and about data and things of that nature.

And I had just explained that human factors data is not the same thing as clinical trial data.

And I had three vendors from the room come talk to me; they were already there, and they were just interested in talking to me after the panel.

And we were sitting down talking with them, and they're like, well, we don't need human factors work because we already had our clinical trial.

Etienne Nichols: Yeah.

Staci Miller: And I literally said, that doesn't mean that you have your human factors work.

And I was sitting next to a colleague of mine at the time, and even she was like, what's going on here? So, there is user information that you're gathering in a clinical trial, but that's not what a clinical trial is doing. A clinical trial is providing a power analysis to show that the product is effective and safe, not in simulated use, but in the place it's supposed to be used in.

It's not showing that the device is safe for its intended users, uses and use environments.

And in pilots or clinical trials, you're taking very different data points. You cannot deviate from a clinical trial protocol, or you have to write protocol deviations.

You're working toward the equation for power. So, there's a certain amount of data that has to be collected to support efficacy, which the FDA requires.

And it's just a very differently structured type of research.

It's more quantitative, whereas usability research is mixed methods. It's just different.

So, you can't take clinical trial data in place of usability data, unless the use cannot be simulated.

You have to plan for that. So, let's say this is a brand new transplant or a new implant or some kind of brand new thing that people don't see on the market yet.

Here's a good example.

Maybe it's some kind of aortic valve replacement that's never been on the market before.

It's nearly impossible to simulate an aortic valve replacement, unless you're in a cadaver lab, a wet lab, or an animal lab, which is very costly for just a summative study.

So, in those cases, and it is stated in the FDA's guidance for human factors, you can use clinical trial data, but it's not something that you use after the fact. This has to be part of your plan going forward.

So, this fits into the regulatory pathway. If you are going to use your clinical trial data for your human factors validation study, you have to make that claim as you're putting together the clinical trial. You can't make it afterwards and be like, oh, just kidding, this is where we use this.

No biggie, right? In certain cases, I think you can kind of argue it, but when you're doing a whole implant to repair something in the body and it's very complicated, you're going to have to use the clinical trial. Or if you're doing something lightweight, but you have to insert something in the body and there are no physical models you can leverage. These are nuanced things that you have to know.

Okay. So, the second thing I've noticed is design outputs.

People are obsessed with the product itself, the design outputs.

Think of it as building a house.

To build a house, you have to have a foundation, and you have to have blueprints, and you have to have a plan, right?

That is a design input.

The output is the gorgeous house that you made, right? And then the wallpaper, and the color of the carpeting, or the wood on the floors. Those are design outputs.

People get so obsessed with the output that they don't remember they have all these other inputs to explain to the FDA: here's how I built that house.

Etienne Nichols: Yeah.

Staci Miller: And that's where human factors really gets tripped up. Because again, first, they think they can use clinical trial data, which you can do only in certain circumstances, and only when it's been premeditated. You have to think about it.

And then, if it's a predicate device, people have also been assuming that, well, if it's a predicate, I don't have to do all this other work because the predicate has already done the work.

That is incorrect. And the FDA won't laugh, but they're not going to be super nice.

And if you don't do any of those things, you're not following the requirements to get to market, which is what the human factors guidance lays out.

And I also think it's because the human factors guidance says "guidance" instead of "requirement."

Well, the FDA doesn't put things out there just to have a good time. They're like, hey, let's just write this document, it's going to be fun.

Why not?

If they're putting it out there and they've made human factors a requirement as part of the process, and you're just like, well, it's a guidance, I don't really have to follow that, it's like a suggestion...

That's not what that means.

And I think that that's like the thing that gets people tripped up, especially young founders.

And I don't mean young founders age-wise; I mean young in their careers doing this type of thing.

Etienne Nichols: Lack of experience, yeah. I really like the point you made about premeditation. If you want to use the data from your clinical trial for human factors, it makes total sense that it needs to be premeditated, because you need to put that lens on it. It makes me think of that quote, and I'm messing it up, but: the absence of evidence is not the evidence of absence. So, you may say, well, we didn't have any issues.

Well, that's not proof that you won't have any issues.

And so yeah, I think that's a really good point.

Also, one of the things you mentioned was documentation and writing some of those things down. So, can you talk about use-related risks, and why failing to document the real environments users are working in is such a problem?

Can you talk about the environment?

Staci Miller: Yeah, that's a great question. So, the FDA sets the alpha for human factors validation testing to support the hypothesis that the system is safe for its intended Users uses and use environments, which means you need to consider the environment of use. Like say it's an ambulance or say it's a radiation radiology machine. Not radiation, let's not go there. But a radiology machine.

You have to read those machines in darker rooms.

If you're using stuff in an ambulance or something mobile, that thing is moving, right?

You have to test in those environments. You have to consider those environments.

And that's not always done that way in a clinical trial.

It might be. Actually, let me rephrase that. It can be done in a clinical trial.

But again, if your documentation doesn't write down the 10,000-foot view of what the user is doing in those specific environments, then you haven't shown the FDA that you've effectively considered all these risks, in all these different environments, with the user interactions. And that's what they need; that's really all they're asking for.

It's like, please give me a document that tells me you've considered all these risks, you've shown me where there are critical tasks, you've thought through the mitigations, and you've tested those mitigations to burn down risk.

Just show me that. That's, that's all they're asking for. Like, they're really not being mean or anything like that.

And when you don't do it... I've seen some responses that are really intense, like really intense. And I was like, I feel bad for the receiving end of this, but they didn't do the work.

It's like when you're in college and you're in grad school, you don't turn in half the assignment and expect an A.

Yeah, true.

Right. Like, so people are literally going to submission and giving this to the FDA, and it takes a long time for them to review, and they didn't do the assignment.

Etienne Nichols: So, if we flip that around.

You mentioned the URRA and the uFMEA, and I really like the explanation you gave: if you have that internal data, maybe a uFMEA makes sense.

A bottom-up approach. I really like your explanation; that's a great illustration.

Can you touch a little bit more on that from the perspective of the regulators, the URRA versus the uFMEA? And can one or both be done in such a way that it actually hurts the submission?

Staci Miller: Yes, and yes.

So, in a URRA and a uFMEA you want kind of a 10,000-foot view.

And I'll explain it this way. This is the best way to explain it.

If I want you to tie your shoe, I don't really care how you do it. I just want you to tie your shoe. So successful completion would be: Etienne ties his shoes.

It's not: you had to do the loop and swoop method, and if you didn't do that method, then you failed tying your shoes. You still tied your shoes.

Etienne Nichols: Yeah.

Staci Miller: So, I have seen URRAs and uFMEAs written so prescriptively that if the user doesn't do the exact steps written in the risk document, they fail the validation.

So, you have to consider how you're writing the URRA. I'm not saying there aren't critical tasks, because there are.

But if there are 16 different ways to do something, you can't write it as one way. So successful completion of tying your shoes is tying your shoes.
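Staci's point about over-prescribed acceptance criteria can be sketched in code. This is a minimal, entirely hypothetical illustration (the function names and the loop-and-swoop check are invented for this example) of an outcome-based success criterion versus one that fails valid outcomes just because a different method was used:

```python
def tied(laces_secure: bool) -> bool:
    """Outcome-based criterion: the shoe is tied, however the user got there."""
    return laces_secure

def tied_prescriptive(method: str, laces_secure: bool) -> bool:
    """Over-prescribed criterion: fails a perfectly good outcome
    whenever the user didn't follow the one written-down method."""
    return laces_secure and method == "loop-and-swoop"

# A triple knot secures the laces just as well...
print(tied(laces_secure=True))                              # True
print(tied_prescriptive("triple-knot", laces_secure=True))  # False
```

Written the second way, a participant who triple-knots "fails" the validation even though the safety outcome was achieved, which is exactly the trap Staci describes.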

What happens if you don't tie your shoes? Let's talk about those risks.

Well, maybe nothing. You just didn't even notice, and you're lucky, and you didn't trip.

Or maybe you trip.

Etienne Nichols: Oh man.

You're getting into user validation too, which I think is great, because what's the definition of tying a shoe? Does it mean the shoelaces are not hanging down? Does it mean the shoes stay together for walking a mile, or whatever it is?

Yeah, I think that's a good point.

Staci Miller: Like, the laces just stay together, they don't come apart. Whatever that looks like. You can triple-knot, I don't care what you do. You tied your shoes.

Right?

So.

And if you don't tie your shoes, what are the failures of that? You can trip, you can fall, or nothing can happen.

In a worst-case scenario, you fall and break your hip. Right? That's probably the worst-case scenario. Or you fall and hurt your knee, maybe break your kneecap.

Those would be critical tasks. So, tying your shoes is technically a critical task that we all do after we're, like, three or four years old. I don't know.

Actually, it's probably not that early.

Etienne Nichols: Yeah.

Staci Miller: First grade is when you learn to tie your shoes. But I don't know, I think I was, like, what, seven? I don't have kids.

Etienne Nichols: Well, we're able to push it off further now because we have such good Velcro shoes. So, I'm not even sure right now. We're just like, I don't want to teach my kids.

I have kids at that age, and we should be teaching them right now. But anyway.

Staci Miller: Yes, but I mean that's the exact point, right? So, like you don't want your kids to not tie their shoes appropriately and fall over.

Etienne Nichols: Right.

Staci Miller: Or just lose their… I could just imagine one kid coming home with no shoes, being like, I don't know where they went.

I'm a kid, like what happened to my shoes?

Etienne Nichols: So, I think that's a great illustration, because you have validation of different user needs as well. Yeah, that makes sense.

Staci Miller: So, there's that, that's the first point.

Second point.

If you have a URRA for a predicate device, so you actually have a predicate device and you have post-market data, you do need to update your URRA if something keeps continuously coming through.

So, the FDA has the MAUDE database. Any company who has a product on market is required to record post-market data, any complaints.

If you see common complaints over and over and over again, the FDA can call you and have you do another validation.

Etienne Nichols: Yeah.

Staci Miller: They can pull it, or they can pull the product. And I haven't seen that happen ever, but it's possible.

Additionally, you can also readjust your risk.

So, if in 10 years, 11 years of a product being on market, you had a critical task that you never see. Well, is it critical?

We made it critical when we first shipped the product.

Maybe it's minor. Maybe it steps down below that.

Maybe you've mitigated the design of the product so well that that thing never happens.

Well, you can write that in your uFMEA and reduce the risk.

Etienne Nichols: Yeah, that would end up being very beneficial.

Staci Miller: Yeah.

It shows that over time those things change, right?

Yeah, because it's data. You have 20 years of data. Like, when I worked at Medtronic, I was looking at stuff from 1996 and I was like, this is so old.

And I was like, oh my God, that's when I graduated high school. So, I was like, oh, I'm so old.

But they had documents dating back that far because they had built some of these products that many years ago and then iterated and iterated and iterated. Those initial risks that were thought to possibly happen, well, over time the techniques and the products change. They get smaller, they go into different locations, and doctors learn how to do these things because they do them over and over and over again, and practice makes perfect.

So, the risk becomes lower with some of the interactions that you have with the system overall. And you can show that with data. That's where a uFMEA comes in, which has post-mitigated risk: here's the hazard, we think this can happen; here's the mitigations; and after these mitigations, with post-market data behind us, here's the actual new risk.

Right. But you always test to the pre-mitigated risk, by the way. Always.

But you're showing, through that data, that maybe the mitigated risk is lower. So, you could technically lower it. It can also go in the other direction and go higher.

Or another thing that can happen is that you find risks that you didn't identify.

Etienne Nichols: Something new.

Staci Miller: Yeah, something new. And that happens too. And so, then you add those to your URRA or uFMEA.
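One way to picture the uFMEA structure Staci describes, a pre-mitigated risk that validation always tests against, plus a post-mitigated estimate that post-market data can move down or up, is a sketch like this (the field names and the 1-to-5 severity and occurrence scales are assumptions for illustration, not a regulatory format):

```python
from dataclasses import dataclass, field

@dataclass
class UFMEARow:
    """One user failure mode in a uFMEA (illustrative fields only)."""
    failure_mode: str
    severity: int            # 1 (negligible) .. 5 (catastrophic)
    pre_occurrence: int      # likelihood estimated before mitigations
    post_occurrence: int     # likelihood re-estimated from post-market data
    mitigations: list = field(default_factory=list)

    def pre_mitigated_risk(self) -> int:
        # Summative validation is always run against this number
        return self.severity * self.pre_occurrence

    def post_mitigated_risk(self) -> int:
        # Years of post-market data can move this down -- or up
        return self.severity * self.post_occurrence

row = UFMEARow("Shoe left untied before walking", severity=4,
               pre_occurrence=3, post_occurrence=1,
               mitigations=["IFU warning", "double-knot instruction"])
print(row.pre_mitigated_risk(), row.post_mitigated_risk())  # 12 4
```

The point of the two methods is the argument Staci outlines: the pre-mitigated number drives testing, while the post-mitigated number, backed by real market data, is what lets a company propose reduced (or, sometimes, increased) severity over a product's life.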

Now, to be perfectly honest, I'm seeing a trend toward URRAs more than uFMEAs right now in the marketplace. And it always switches back and forth. Like, 10 years ago it was uFMEA: we have to do a uFMEA.

And that had, you know, the reduced risk severity levels and mitigations: this is our proposal for what the severity levels are actually going to be after risk is mitigated.

And those were the arguments being made.

Now I've seen this flip. These big corporations, because they have all that information, they use it. So, the Abbotts and the BDs and the Olympuses, all those big companies, they kind of just monitor this.

And that comes from the risk people. It comes from post-market surveillance, it comes from regulatory, and it comes from design quality assurance; that's where these decisions get made. I don't make them. I edit my documents to be compliant.

Etienne Nichols: I can see people coming out of those organizations having used those tools without understanding why they were using a uFMEA versus a URRA.

And I know that's not what this podcast is necessarily about, but I can see them joining a smaller company and saying, oh, we've got to do this, when maybe that uFMEA is really just trying to fit a square peg in a round hole when you're trying to get something brand new on the market.

But that explanation of why you might use one versus the other, I think, is really helpful, versus it just being a trend. Because that's the way it felt to me: everybody just started using URRAs and I wasn't sure why.

Staci Miller: It just became more of a thing. And the FDA required something new, which was a line that is not stated in any of the guidance, by the way. So it's super confusing, and people are struggling to do it.

But it's a line that says how you're going to test something, and most people think it's like, oh, I'm going to do a knowledge probe, or oh, I'm going to do a task-based assessment.

That's not what that's there for.

What that's there for is if you tested in a different capacity, they want the document number and why it was tested in this other capacity. And then they want to know what's being tested in the HF validation.

So, for example, if you were able to do some of it and take some data from the clinical trial that cannot be simulated, you can use that as a reference point and be like, we cannot do this in simulation.

Here's the argument for why we're using this data as the data point for that.

And those were successful with this number of participants in the X trial, so I would just use that information instead.

So, in a way, I feel like the FDA is trying to make it a little bit easier for simulated-use studies to leverage some clinical trial data when it's not possible to do it in simulation.

In a simulated environment, we like to use a lot of simulated parts and things like that, and it can get very expensive to pay for. Like synthetic bones, they're $40, $50 a pop.

If you're just using like maybe let's just pretend it's a leg. We'll just talk about a synthetic leg with a patella.

Etienne Nichols: Yeah.

Staci Miller: And you need to test some kind of, you know, orthopedic device. Right. With that leg.

Well, I mean, you have training sometimes, and then you've got the comeback session. So it's like a hundred dollars per participant in models for a summative.

Plus, you run 15 participants. So there's $1,500. Right? And that doesn't include any of the formatives or any of those things. So sometimes things can get a little pricey for the person who's doing the study.
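The arithmetic behind that estimate is simple. A quick sketch with the hypothetical numbers from the conversation (one model per session, two sessions per participant, 15 participants; the exact figures will obviously vary by device):

```python
# Back-of-the-envelope material cost for a summative study
model_cost = 50               # one synthetic leg model, roughly $40-50 each
sessions_per_participant = 2  # a training session plus the summative session
participants = 15             # a common summative sample size per user group

per_participant = model_cost * sessions_per_participant  # $100 in models each
total = per_participant * participants                   # $1,500 before formatives
print(f"${total}")  # $1500
```

And that is materials only, before formative studies, moderator time, recruiting, or facility costs, which is why leveraging existing clinical data where simulation isn't feasible can matter to a study budget.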

So maybe in some cases. If you can leverage pieces of data that are actually clinical and put that information in there, that's what that line is for.

Additionally, human factors engineers are not clinicians.

We don't judge clinical decisions. That's subjective, and that's what the clinical trial does.

Right? Because they're getting data outputs from people making very specific decisions based on the actual thing happening in front of them. So, it's an actual person in a clinical trial doing the actual procedure.

You have to make a lot of different decisions when a person in front of you is on that operating table. Let's just say they're on an operating table; now cut to a mannequin.

Very different.

It's just different. That's why it's simulated use. So, whenever you can leverage something. I think the FDA is trying to be a little more lenient with some of those clinical choices that are also critical tasks, and I think that's why they're doing that.

But I'm also on the side of, like, the FDA is not intentionally making things difficult, even though some of their guidances are hard to read.

Etienne Nichols: So, I have two more questions just to kind of keep us on track, I guess, from time standpoint.

The first one is.

Yeah, I have a note here that says three main design inputs. Can you walk us through what the three main design inputs are and why they matter for a solid HF foundation?

Staci Miller: So, I only think you need three different things.

Again, my perspective; other people do it differently. But I'm all about efficiencies, and I work with a lot of startups, so I think about cash flow. And even when I worked at Medtronic or Abbott, you think about budget all the time, because every project has a budget, so you have to be budget-conscious.

Right?

So, the three things that you need as human factors design inputs to get you to your regulatory submission are your use specification, your URRA or uFMEA, depending on which one you're using, and your summative validation report.

Now, in some cases, the validation report can hold just the data from the validation.

Some people also call this report the usability report, which includes the validation summary and everything that you've done to bring the product to market.

I consider the summative validation report the whole shebang: everything that you've done outlined from start to finish, a review of the MAUDE database, the results of the summative study, the results of the formative studies, and any other internal studies that you've done. All those outputs, and that's part of the submission.

So, those three things are the major design inputs that you need for HF to have a successful submission with the FDA.
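As a rough checklist of the three deliverables Staci lists (the per-document contents below are summarized from the episode, not an exhaustive regulatory requirement):

```python
# The three HF design inputs for an FDA submission, per the conversation,
# with a sketch of what each one is expected to contain.
hf_design_inputs = {
    "use specification": [
        "intended users", "use environments", "user interface description"],
    "URRA or uFMEA": [
        "user tasks", "critical tasks", "use-related risks", "mitigations"],
    "summative validation report": [
        "MAUDE database review", "formative study results",
        "summative study results", "other internal studies"],
}

# A submission sketch: every deliverable should have content behind it
missing = [doc for doc, contents in hf_design_inputs.items() if not contents]
print(len(hf_design_inputs), "design inputs;",
      "complete" if not missing else f"missing: {missing}")
```

It looks simple laid out this way, which is Staci's next point: the list is short, but each entry carries a lot of work behind it.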

Etienne Nichols: Okay, that makes sense. I love that; I wrote it down. Whenever someone says you need these three things, I think that's really important. So: the use specification, the URRA or uFMEA, and the summative validation report.

Staci Miller: It sounds simple, right? Sounds super simple.

Etienne Nichols: Oh, I know there's a whole lot of work behind that. But when you think about the outputs, what are we working towards? You know, sometimes we get lost in the weeds, or we can't see the forest for the trees.

So, I think that's helpful.

Okay, last question. Can a device realistically get to market without proper HF support? And what happens to teams that try without it?

Staci Miller: I'm going to say: let's talk about "proper." The answer is no; you're not going to get to market without human factors support of some kind.

Whether that's talking to a consultant like me who's just giving you advice while you write your own documents, or something more involved.

What happens?

The FDA tends not to be super nice, because again, your professor wouldn't be either if you handed in an incomplete assignment. Right?

Etienne Nichols: Sure.

Staci Miller: Who wants the incomplete assignment? And don't forget, they're reading everything.

So, you're giving them something that's not finished, and then they have to tell you what to do, knowing it's going to be an argument after they've already told you what to do.

So, I've seen documents thrown out, I've seen summative studies thrown out, I've seen super aggressive language. I read one recently and I was like, oh geez, that was a little intense, even for me.

Etienne Nichols: Wow.

Staci Miller: Yeah, even for me. I knew what I was reading as I was reading it, and I was like, whoever reviewed this really understands these processes and is kind of annoyed that they weren't followed. It was obvious.

So, you won't make it to market if you don't do your due diligence. And just because it's a predicate device and they already did the work doesn't mean that you don't have to.

That's not how that works. And that's a very large, growing trend. I've seen it across a lot of startups, and I'm not sure why.

And these are people who don't know each other. So, there's something they're reading that's being interpreted in a very specific way. So, it's not anything that they…

Etienne Nichols: Because they have a predicate, they don't feel like they need HF, human factors.

Staci Miller: Yeah. And that's really been going on, and I'm so confused by where it's even coming from.

And so, sometimes when I'm engaging with a person, they're starting from that perspective, and I have to turn an absolute "you're crazy, Staci" into "oh, we have to do this. I didn't know that."

And those conversations are terrible, but they're also kind of fun, because you have to think through them.

And then.

Yeah. So, without those, just those three simple documents, so simple, you're not going to make it. They're going to turn you right around and be like, you don't have these risk documents.

And I've seen it multiple times: they just point you back to the guidance. Like, you need to follow the guidance step by step and do it.

And that's verbatim, pretty much, what they're saying: here's the guidance, here's the section, please go read it and then do the thing.

That's what I see consistently in FDA responses.

And I don't know how human factors ended up being the stepchild in the corner that nobody wants to be friends with, but it did. I'm not really sure. But I mean, a lot of engineers, and I graduated from engineering school, so I get it, because HF is really rooted in cognitive psychology and social science, I think it gets a little bit of a…

Oh, you're a psychologist. And I'm not.

I don't talk to people about their feelings, because I don't want to. That's not my jam. My jam is thinking about how people process information, which is what people like me do.

And that's much more interesting. And that's why I think the application of HF has been, I guess, forced down people's throats: there were just way too many errors happening in hospitals. That's why we're here.

Etienne Nichols: I was talking to somebody earlier today on a webinar, and they said, is my background correct? It has the word there, but it looks like it's backwards.

And I said, well, raise your right arm. So, he raised his right arm. I said, is it your right arm?

Because if it was a video of you, it would be showing the opposite arm. And so, I see it correctly, but it's mirroring you. He said, was that a mistake?

I'm like, no, that's a UX thing. That's good. Because if I raise my arm and it's on the other side, it's confusing to me. I expect a mirror in my Zoom image.

And he's like, oh, it's interesting that HF, human factors, is a requirement. Some of these things are not necessarily risk-related: perception, affordances, signifiers, all those different things. They're not required in a lot of other industries, but the market demands them. And in MedTech, for some reason, and you talked about why it's the redheaded stepchild or whatever you want to call it, I feel like in some ways it's because of the layers, whether it's reimbursement through CPT codes, hospital buyers, multiple layers.

The market doesn't necessarily demand immediately that these things be mitigated, or smooth and useful and easy to use, or at least, I don't know, that they don't cause cognitive dissonance.

But eventually it will, if you will. Even if you get to market without proper support, you know.

Staci Miller: Yeah, I don't know a single product that's made it to market without doing the assignment. And I can't stress this enough: your professor wouldn't accept it; you'd get a bad grade. Why would you expect anything else from a regulatory body that's asking you to do this thing?

Right?

So, I like using that language as well. And you're right. Affordances and mirroring and cognitive function and cognitive dissonance exist everywhere. We use Zoom all the time. We use Gchat all the time.

And when those things don't work, it's really irritating.

It's really irritating. Like, my calendar wouldn't work the other day. I have no idea why.

Couldn't really figure it out.

I turned my computer on and off and it just figured itself out. But now it's doing the same glitchy thing today. And I'm like, are they doing an update?

Did they test this? Did it pass?

Etienne Nichols: And maybe you'd call that lower risk over there. But hey, there's a risk I'm going to miss my next meeting, and that's a big deal.

If it was a matter of not getting blood to the operating room in time because they needed it, that would be different. So, we do have a pretty high threshold, and it makes sense that these rules should be in place to keep things safe and effective.

Staci Miller: Yeah. And people don't always realize, like we were talking about, the whole point of this: there is a regulatory pathway, risk is involved, and we focus on the user. We fight for the user.

That's what we do here. It's like Tron, I am the user.

Etienne Nichols: So, I'm going to use a different example too. There's so much we can control about the device, and then there's the part that we can't. The engineers want to talk about the device: get the pFMEA, the dFMEA, so that if anything breaks, everything else is mitigated.

We have redundancies in the system.

But okay, this is very much a stretch. I used to be a bull rider when I…before I became anything else crazy.

And when they judge a bull ride, 50% of the points go to the bull and 50% go to the rider, if that makes sense. So, if you are a perfect rider but you draw a really easy bull, you might still only get 75 points. If you have a really hard bull and you're an okay rider, you might even beat that guy. The reason I say that is: you've got the device, but you've also got the user. To get a good score, you need both.

Staci Miller: You do. That's a great analogy. Yeah.

Etienne Nichols: Yeah. I don't know if that one's really what I should use, but.

Staci Miller: No, you're like, I used to ride bulls. I'm like, I was not ready for that at all. Like, not. Not in the world of me. I was like, that's the last thing I ever thought that you would say.

Etienne Nichols: But it's killing me not to go to Houston for some of the rodeo that's going on down there. So, there's MedTech events. I won't mention them right now necessarily. But yeah.

Anyway, any last piece of advice for the audience?

Staci Miller: Definitely. Do your due diligence work with your regulatory people.

Include HF at the beginning. Don't do it as an exercise at the end. It's going to cost you more money than it's worth, because you're probably not going to be ready for submission.

And it's not the human factors engineer's fault. Just remember that. Don't get mad at us.

Etienne Nichols: Yeah, try not to.

Staci Miller: We'll put up with it. But try not to. Just try not to.

Etienne Nichols: We'll put some links in the show notes if you want to find Staci and see where she is, what she's doing, what she's up to, so you can reach out to her.

But just go ahead and throw it out there, because not everybody finds the show notes. Where can people find you? What's the best place to reach out to you?

Staci Miller: GenUX Consulting. And yes, it's a play on words, because I am a Gen Xer and we are UX researchers as well. We do all design and research, and you can find me on my YouTube channel, GenUX, as well.

Etienne Nichols: Gen UX Consulting. Awesome. Thank you, Staci. Really appreciate it and looking forward to the next time our paths cross.

Staci Miller: Me too. Me too. Let's talk.

Etienne Nichols: You take care.

Staci Miller: Bye.

Etienne Nichols: Thanks for tuning in to the Global Medical Device Podcast. If you found value in today's conversation, please take a moment to rate, review and subscribe on your favorite podcast platform. If you've got thoughts or questions, we'd love to hear from you, email us at podcast@greenlight.guru.

Stay connected for more insights into the future of MedTech innovation. And if you're ready to take your product development to the next level, visit us at www.greenlight.guru. Until next time, keep innovating and improving the quality of life.


About the Global Medical Device Podcast:

The Global Medical Device Podcast powered by Greenlight Guru is where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.

Like this episode? Subscribe today on iTunes or Spotify.