How AI Can Help You Manage Risk

May 17, 2023


With the advent of new Artificial Intelligence (AI) tools, where is risk management headed, and how is it affected by these changes? Risk management is essential and becoming more so as regulators demand more rigorous risk measures, so it's important to understand what's happening in the industry. To that end, today's guest, Tyler Foxworthy, shares his expertise on the subject.

Tyler Foxworthy was the Chief Data Scientist at Resultant, then the Chief Scientist at Demand Jump. Before founding his own company, he applied statistics and machine learning to help the medical device industry make informed decisions about changes to their devices. The company he founded, Vertex, was acquired by Greenlight Guru, where Tyler is now the Chief Scientist.

Listen to the episode to hear Tyler explain more about data quality, the future of risk assessment, and how Bayesian statistics and analysis come into play.


Like this episode? Subscribe today on iTunes or Spotify.

Some of the highlights of this episode include:

  • When can we reach a point where we know the data is accurate

  • The future of risk assessment for MedTech

  • Why use a model for predicting risk

  • How this model impacts a company

  • The change in trajectory for the medical device industry

Links:

Tyler Foxworthy LinkedIn

Etienne Nichols LinkedIn

Greenlight Guru Academy

Greenlight Guru

Memorable quotes from Tyler Foxworthy:

“There is no such thing as absolutely perfect data, there’s only degrees of quality.”

“I would like to see just more rigor, in general, brought to the industry.”

“This whole field of probabilistic risk assessment is firmly rooted in Bayesian analysis.”

“This idea of bringing tools and techniques and knowledge from other domains and forklifting it into our industry, and making it valuable — to me there’s just something really intellectually appealing about that.”

 

Transcript:

Etienne Nichols

00:00:21.440 - 00:03:19.560

Hey everyone. Welcome back to the podcast. My name is Etienne Nichols and I'm the host of today's episode.

 

In today's episode, we speak with Tyler Foxworthy, and we get to talk about the topic of artificial intelligence and its application to risk management.

 

I know there's a lot of buzz around AI, or AIs, depending on how you look at them, from a business perspective and from a regulatory perspective. But today we talk about its application to risk management.

 

Risk management is essential to every medical device company, and it's becoming increasingly important as the EU MDR is requiring heightened risk management measures and the FDA is looking for more risk-based approaches. Tyler gets pretty technical, but he does it in a way that's easy to understand without dumbing it down to the lowest common denominator.

 

He has a lot of great points about risk — for example, that too often risk is qualitative and not based on true numbers. Tyler gives some examples of false precision, with some really interesting points, especially about the use of P1 and P2.

 

He also talks about newer probabilistic model approaches and so much more. Tyler Foxworthy has over a decade of experience applying advanced mathematics to solve technical challenges.

 

He was the chief data scientist at Resultant, and he was then the chief scientist at Demand Jump before founding his own company, where he applied statistics and machine learning to help the medical device industry make the most informed decisions around challenges and changes to their devices. And that company that he founded was Vertex.

 

It was acquired by Greenlight Guru, where Tyler is now the chief scientist and he's poised to change the way the industry approaches risk management. Tyler is exactly the kind of person that I love to learn from. He's always learning both from inside the industry as well as outside the industry.

 

And he has the heart of a teacher and is passionate about sharing what he learns. It's exciting stuff, and it's an exciting time to be a MedTech professional.

 

We hope you enjoy this episode with Tyler on artificial intelligence and how it can change the way you do risk management. Hey, everyone. Welcome back to the Global Medical Device Podcast. Today I get to speak with Tyler Foxworthy.

 

We're going to be talking a little bit about AI and how you can use it to help with your risk management.

 

But maybe before we get into that, Tyler, do you want to tell us a little bit about yourself and where you come from and how you got to where you are today?

 

Tyler Foxworthy

00:03:19.800 - 00:04:19.240

Yeah. Thank you. So, I lead our data science team here at Greenlight Guru.

 

Been with Greenlight Guru officially for a little over a year and a half, but previously I had an AI software consultancy which was actually acquired by Greenlight Guru about a year and a half ago. So, we've been working together now for the better part of four and a half years.

 

And so that's been really interesting to see all these things come to fruition. Prior to that, my background is in applied mathematics and neuroscience — I studied at Purdue — and then out of school, my first job was actually in the medical device industry, doing algorithm development for clinical diagnostics.

 

And so, I kind of got tired of working on different flavors of the same problem over and over again and left — actually founded the data science practice at a major management consulting group. Did that for several years, working mostly with governments and Fortune 500 companies, using math to solve business problems.

 

Tried my hand at a couple of startups and then started the last company, Vertex Intelligence, going on five years ago now and here I am today.

 

Etienne Nichols

00:04:19.320 - 00:04:30.280

That's awesome. I really appreciate you sharing some of those things. One thing you mentioned there was using math to solve complex business problems.

 

I'm curious if you have like a favorite dragon slaying story of doing that. Anything you can share?

 

Tyler Foxworthy

00:04:30.440 - 00:05:06.040

Oh, there's lots of stories.

 

I think probably my favorite one would be using network analysis, working with the government on tax fraud detection — using a combination of network analysis and machine learning to identify tax criminals. And that was very, very interesting. And then work we did in public health.

 

So, we worked on a number of problems related to infant mortality and child welfare and using data mining techniques to identify inefficiencies in both children's services and healthcare administration and then identifying opportunities for improvement. So those were. Those are really interesting.

 

Etienne Nichols

00:05:06.040 - 00:05:28.340

That's really cool. Yeah, those both seem like they could be pretty rewarding depending on how you come at those angles. So that's pretty neat.

 

So now you're working with AI for medical device companies, specifically with Greenlight Guru, which works with over a thousand customers now — some number of them in design and development, working on their risk management. What is it you're working on now, and how does AI play into that?

 

Tyler Foxworthy

00:05:28.580 - 00:06:46.290

I think one of the most fascinating things about the medical device industry and being a hyper regulated industry is that there's a ton of data out there.

 

For years the FDA has released a blend of structured and unstructured data relating to everything from new device applications, company registrations and adverse event reporting as well.

 

And so, a lot of our work over the last several years has been getting a handle on all that information, and specifically using AI to take the unstructured data sources — think the 510(k) summary reports and the adverse event narratives — and turn those unstructured documents into structured data that we can then analyze, do interesting things with, and make recommendations about.

 

So much of our work leading up to this point has really been stitching together this complex ecosystem of disconnected data, making it connected, and then enriching it using machine learning. But in the last year or so, our focus has shifted from this exploration and data collection to applications.

 

And risk has been the number one application that we've been focusing on.

 

So, leveraging this data, and also leveraging well-established models of risk assessment, in order to develop the types of models that can help companies make better decisions about risk.

 

Etienne Nichols

00:06:46.690 - 00:06:53.570

So of course, every company has to go through a certain amount of risk assessment. I'm curious, how is it that you can help with risk assessment?

 

Tyler Foxworthy

00:06:54.050 - 00:08:57.150

So, risk assessment, at least as defined by, you know, ISO 14971, requires that companies estimate probabilities and severities of patient harm. What the standard doesn't dictate is how to actually do that, like how to actually estimate the probabilities and severities.

 

And so, it turns out that that's not a trivial thing to do. And it's even less trivial when you don't have a lot of data to work with.

 

Most device companies don't have endless budgets to collect high quality first party data. And so, you have to both make good with the data that you have but also leverage the other data that's out there.

 

And so today, you know, the standard practice is to kind of stitch together information from a variety of sources.

 

So there's whatever first-party, experimental data that you can collect yourself — that's sort of one bucket. There's this other bucket of third-party clinical research — so, scouring journals and looking for estimates of, you know, incidence rates for certain types of adverse events. And there's expert opinion.

 

But what's not really well discussed in the med device literature is how to rigorously combine those sources of information, which each have varying degrees of confidence — sort of a trust factor in those data sources — how to combine those together to create one statistically defensible estimate of the probability of a particular outcome, and also to estimate things like confidence intervals. What we see commonly done in the med device industry is much more rudimentary.

 

Oftentimes companies will just more or less make up probabilities and they kind of use it as a more of a qualitative exercise than something that's really rigorously grounded in data.

 

So going back to where our solutions come in, what we've done is we've built out a mechanism for both mining the information that is available, but then also allowing the user to supply their own information that they have and then using that to update their beliefs about what the probabilities are using a process called Bayesian analysis. And that's really the foundation of our approach.

 

Etienne Nichols

00:08:57.550 - 00:09:53.950

That's interesting — you mentioned how inaccurate some of the data could probably be, because it's just a rough estimate most of the time.

 

And it's interesting, the word you used: qualitative. So, I was actually at a risk management workshop at MedCon led by former FDA employees. Kim Trautman is one of them.

 

Multiple others who are probably well-known names, if I could just remember their names. But they were talking about the EU MDR and how a lot of the risk is going to have to be mitigated.

 

A lot of your previous risk management files are going to have to be mitigated because they are not going to be accepting any qualitative risk assessment. It's all going to have to be quantitative.

 

And in my mind, when I think about this, I'm like, okay, well, you know, a lot of this may have been based on previous risk analysis, which may have been qualitative, which may itself have been based on previous qualitative risk analysis. So, at what point can we say the data is accurate — that we're not, and I know I've heard you use the phrase, falling into false precision?

 

How do we approach something that's more accurate?

 

Tyler Foxworthy

00:09:54.290 - 00:10:55.270

Well, I like to defer to the Bayesian definition of probability: it's subjective degrees of belief about an outcome. So there is no such thing as absolutely perfect data. There's only degrees of quality.

 

And so, in order for us to make good estimates, we first have to understand how good and how consistent the data is to begin with, and then bake that into our analysis. So the output of risk assessment should necessarily include estimates of confidence.

 

And so, we can't always say, you know, a priori how good or how accurate something is going to be. What we can say is that we believe the probability of harm is going to be within a certain range 95% of the time or 99% of the time.

 

And so, our goal is to introduce data into the system that shrinks and narrows those confidence bands into tighter and tighter ranges, so we have a higher degree of confidence and belief about what the true probabilities are likely to be.
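As a rough illustration of those shrinking confidence bands, here is a minimal sketch (Python, standard library only) of a Bayesian credible interval for an event probability narrowing as more observations arrive. The 1% event rate and the uniform prior are illustrative assumptions, not anything specific to Greenlight Guru's models.

```python
import random

random.seed(0)

def beta_credible_interval(successes, failures, level=0.95, draws=20000):
    """Approximate an equal-tailed credible interval for a probability
    under a Beta(1 + successes, 1 + failures) posterior (uniform prior),
    estimated by sampling so no scipy is needed."""
    samples = sorted(
        random.betavariate(1 + successes, 1 + failures) for _ in range(draws)
    )
    lo = samples[int((1 - level) / 2 * draws)]
    hi = samples[int((1 + level) / 2 * draws) - 1]
    return lo, hi

# Same underlying 1% event rate, observed with increasing amounts of data:
for n in (100, 1000, 10000):
    events = n // 100  # 1% observed rate
    lo, hi = beta_credible_interval(events, n - events)
    print(f"n={n:6d}: 95% interval ({lo:.4f}, {hi:.4f}), width {hi - lo:.4f}")
```

Each extra batch of observations tightens the interval around the true rate — the "shrinking uncertainty" Tyler describes.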

 

Etienne Nichols

00:10:55.430 - 00:11:03.190

Okay, that makes sense. I'm curious, are there any other industries that are doing anything like this? It seems very unique to come up with something like this.

 

Tyler Foxworthy

00:11:03.430 - 00:12:36.290

It's actually not. And that's somewhat surprising to me as we got into this problem.

 

But the domain of probability analysis that we're talking about is something called probabilistic risk assessment. So, it's basically a whole field of risk assessment built upon the ideas of Bayesian statistics.

 

And this was really pioneered by the nuclear energy industry and NASA going back into the 1960s.

 

They had very similar problems to the medical device industry, in that it was both uneconomical and oftentimes unsafe to collect large volumes of experimental data in order to evaluate risk before systems went into production.

 

And so, they had to make do with the best that they had in situations where there were still high degrees of uncertainty — situations where you had very, very small probabilities of extremely bad things happening. And that's the situation we find ourselves in in the med device industry.

 

And so really what we've done as a team is we've built out this data infrastructure in order to get the best snapshot of the data that is available to us in order to help us make decisions.

 

But then combine that with these models which have been established as best practice in these other high-risk industries and then brought them together to create this risk intelligence solution. So, what we're doing is very much built upon this very firm foundation.

 

We're just bringing a different data set to the party and then contextualizing that specifically for some of the unique regulatory and risk aspects of it for the medical device industry.

 

Etienne Nichols

00:12:36.610 - 00:12:53.740

Okay, so when we talk about risk assessment in the future, then I don't know. I'm curious what you see as the future of risk assessment for MedTech. Multiple AIs doing this with multiple options.

 

I mean, I guess, you know, there's always going to be other players in the market, but is that kind of what you see is going to become the norm?

 

Tyler Foxworthy

00:12:53.900 - 00:13:53.220

I think the first norm that should be established is the idea of using models for risk assessment and moving away from this very qualitative exercise of just pulling numbers out of a hat and multiplying them together — this P1, P2 sort of thing.

 

I would like to see just more rigor in general brought to the industry, and for that to become standard. And the AI aspects will really continue to evolve to help us organize and collect more structured data as the inputs to these processes.

 

And eventually I can anticipate something like, you know, an AI-powered co-pilot where you could ask questions about foreseeable events and hazardous situations — these other types of things where we don't have good data today — and have it help us propose, you know, potential scenarios and that type of thing. And then certainly I think there are tremendous opportunities as well to use AI as a force multiplier for post-market surveillance.

 

Etienne Nichols

00:13:53.460 - 00:14:24.190

Yeah, I can definitely see that; that makes sense. Oh, and maybe I should have asked this earlier. When I'm thinking about this, I'm thinking, okay, well, I can totally see the benefit here.

 

But I'm also wondering, you know, there are lots of different medical device companies who may say, well, you know, I've gotten by this far doing it this way for as long as I have — qualitative. Or no, maybe I do qualitative early on and then I "update" that risk management file, quote unquote, because who actually does that?

 

So, what do you say when people say, why should I use a model for predicting risk or estimating risk?

 

Tyler Foxworthy

00:14:24.510 - 00:14:47.700

There are a lot of reasons why you should, why you should use a model.

 

I'm trying to rattle off a short list, but I think one of the most important things people need to recognize is the limits of humans — our innate inadequacy at handling very, very small probabilities and also reasoning about very large numbers.

 

Etienne Nichols

00:14:47.860 - 00:14:48.900

Give me an example. Or.

 

Tyler Foxworthy

00:14:48.900 - 00:17:53.190

Yeah, human intuition really breaks down past odds that are lower than 1 in 100.

 

And so subjectively estimating probabilities of these very rare events, and not having a good mechanism for thinking about confidence intervals around them, is often worse than guessing. Because if you think about going back to the P1, P2 paradigm — you're estimating the probability of a hazard, and then you're estimating the probability that the hazard transitions into a harm. So, you have two numbers which you have made up, more or less, with no confidence intervals around them. You're not multiplying two numbers together.

 

You're actually multiplying two probability distributions together. And what ends up happening is that your confidence interval grows like quadratically. So, it's not multiplying your uncertainty by 2.

 

You're compounding it exponentially. You're compounding the uncertainty exponentially.

 

And if you're not even aware that you're doing this, you're introducing more risk into a system because you're effectively creating false precision, which then impacts your, you know, the business decisions that you're making. It impacts what you're prioritizing strategically for improvements.

 

So, you know, using models helps you get a handle on, at a minimum, knowing what you don't know, quantifying your degree of uncertainty, because it forces you to think in terms of the information that you have and the quality of information. Those are the inputs to the system, not your subjective initial estimate of probability.

 

So, I mean, that's certainly on the front end, you know, a big reason to rely on models. And then you mentioned the idea of updating the risk file over time as new information comes in.

 

There is a very established series of best practices going back through the whole field of probabilistic risk assessment and how this has been done in aerospace and other industries — very established ways of doing this, which are not standard practice in the medical device industry. If you're heuristically updating probabilities, you're leaving a lot on the table.

 

Because unlike, let's say — take aerospace, for example — a lot of the problems that occur in a device, a lot of patient problems, are actually correlated with one another, structurally correlated with one another. So, observations of adverse events of a certain type — let's say you had different types of infections that could occur post-operatively.

 

Those are probably associated with similar hazardous situations, right? So, they might have distinct outcomes, but they have common causes.

 

So unless you have a mechanism for linking that information — not just updating the probabilities associated with one of those event types, but actually leveraging the correlation as well, and then back-propagating that through the system and updating all the probabilities — your specifications are ultimately going to drift very far from the true distribution of risk.

 

Ultimately you want a system which is perfectly adaptive, where over time, if one thing changes, the potential to change everything in the system should be on the table.

 

Etienne Nichols

00:17:53.350 - 00:18:30.180

So, I'm thinking of ways that this could potentially save a company money, save them time, save them resources, make medical device products safer. And I guess on the beginning side I'm thinking, okay, so what if the risk is reduced — like, in your risk assessment you have a lower probability.

 

I guess I'm seeing fewer samples tested, potentially. Or maybe it's higher and you're going to be a little bit more strict with that. Maybe you'll increase some of your design controls and you'll have a safer product in the field.

 

But that's just kind of my, just I know you've thought a lot more about this. What are your thoughts as far as impacting a company by using this type of model? You've already given some thought there, but what are your thoughts?

 

Tyler Foxworthy

00:18:30.180 - 00:19:50.320

The first two examples you brought up are excellent.

 

So, the potential for — I wouldn't say shrinking, but I'd say right-sizing — sample size for clinical evaluation, because it could go in either direction, technically. Right. Depending upon what initial information you could gather from public sources.

 

So, from FDA data, from journals, expert opinion, you can back into more effective sample sizes. That way, basically, your sample size is more likely to match your effect size.

 

Then there's fine-tuning detection limits for post-market evaluation and updates. So where should you set your thresholds?

 

At what point do you have reason to believe that probabilities have changed, and at what point do you have reason to believe that there really is a systematic problem that's been identified? So, both of those are very important economic questions, because they pertain to post-market surveillance, testing, quality control.

 

So, there's a lot in that domain that's really important. On the front end, it also reduces the amount of uncertainty and the time to do risk assessment.

 

And so if we can use data to more effectively identify potential hazards and harms, and then reduce the amount of time going through the sort of back-and-forth exercise of horse trading about what the probabilities and severities likely are — because you're working from more of a rigorous template — then you're also going to ultimately, effectively reduce your time to market too.

 

Etienne Nichols

00:19:50.480 - 00:20:12.400

Yeah, that makes sense. That's really cool. It's exciting to me to learn about the ways AI is being applied, and this is probably futuristic.

 

I think that's kind of where you place this when we were talking about the chatbots helping with identifying potential foreseeable situations or hazardous situations. I don't know. Is that something that you see on the horizon or how far away is that?

 

Tyler Foxworthy

00:20:12.560 - 00:20:13.400

Sooner than you think.

 

Etienne Nichols

00:20:13.400 - 00:20:13.920

Okay.

 

Tyler Foxworthy

00:20:13.920 - 00:20:14.320

Yeah.

 

Etienne Nichols

00:20:14.320 - 00:20:36.820

That's exciting. What are your thoughts on applying this? I mean, you've covered where it came from. You kind of covered the reasons why you would want to do that.

 

But what are some of the other — I don't know. Can you open the kimono a little bit and talk about how you do this? You talked about the — was it the Bayesian techniques?

 

I'm going to get that wrong. So, I don't know. If you could tell me a little bit more — it may go over my head, but I'm really curious to learn more about this.

 

Tyler Foxworthy

00:20:37.060 - 00:21:30.650

Yeah, so as I mentioned, this whole field of probabilistic risk assessment is firmly rooted in Bayesian analysis. So, there are two camps in statistics.

 

You have frequentists, which is the more traditional, purely empirical form of statistics, and you have Bayesian statistics, which brings in the ability to inject prior knowledge about outcomes and have that constrain the set of likely resulting outputs. So oftentimes an example that's used to explain this is batting averages in baseball.

 

Okay, so let's say, sight unseen, you had a baseball player who came out of a top-tier school or, you know, a major league team, and you did not know what his batting average was. But given that, say, playing for the — you know, I'm actually horrible with teams, so it's probably not the best example. I don't really follow sports.

 

Etienne Nichols

00:21:30.890 - 00:21:32.090

We'll go with the Yankees.

 

Tyler Foxworthy

00:21:33.130 - 00:24:05.370

Yeah, so let's say that he's playing for the Yankees, that you would find that a batting average of, let's say one out of ten is probably implausible. Right. You would also find the idea that a batting average of 90% or 9 out of 10 would also likely be very implausible as well.

 

A frequentist, in order to determine what that batting average was, would have to sit there and watch that player swing and miss, swing and miss, potentially dozens of times in order to have a high degree of confidence that their batting average is ultimately something like one out of three or, you know, one out of four.

 

A Bayesian, on the other hand, would start with an initial belief calibrated to the knowledge that that player plays in Major League Baseball — that the average batting average is, let's say, somewhere around 1 in 3.

 

And so, they might only have to watch a couple of observations in order to, at minimum, assess whether one in three is still a plausible batting average for that player. And so, every observation updates that prior belief.

 

So, you might start at 1 in 3, and then they hit and maybe it goes up a little bit, and they miss, maybe it goes down a little bit.

 

But pretty quickly you converge to an estimate of the belief around the long-run batting average, as well as a confidence interval to say it's 99% of the time, or 95% of the time, going to be between, let's say, 25% and 35% — something like that. And we can do that with far fewer observations. And that's really the power of applying Bayesian analysis to risk: we can use expert opinion.

 

The first observation could come from a clinical expert who says, I've seen this 10 times in my 20-year career, which might involve 10,000 surgeries.

 

And so, you could use an expert's opinion to set a baseline — a range of plausible outcomes — or multiple experts, a survey of experts, to come back with a range of plausible outcomes.

 

And then we have data that we can bring to the analysis and say, well, here's what the data says, and we can then update those beliefs again based upon the data. And then if the customer has direct experimental evidence, then you can refine those beliefs again and again.

 

So, you can get more out of the data that you have by starting with a plausible set of beliefs and simply updating them, rather than needing large amounts of direct, high-quality empirical evidence to come back with an estimate of what the probability is likely to be. So, it's all about shrinking that uncertainty as you collect data.
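Tyler's batting-average example maps directly onto a standard Beta-Binomial update. The sketch below uses Beta(30, 60) to encode a prior belief centered near 1 in 3 — an illustrative choice, not a canonical one — and updates it one at-bat at a time.

```python
# Beta-Binomial sketch of the batting-average example.
# Prior: Beta(30, 60) encodes a belief centered near 1/3 (illustrative numbers;
# larger parameters would encode a more confident prior).
alpha, beta = 30.0, 60.0

def posterior_mean(a, b):
    """Expected batting average under a Beta(a, b) belief."""
    return a / (a + b)

print(f"prior mean: {posterior_mean(alpha, beta):.3f}")

# Watch a handful of at-bats: 1 = hit, 0 = miss.
for at_bat in [1, 0, 0, 1, 0]:
    alpha += at_bat
    beta += 1 - at_bat
    print(f"after at-bat ({at_bat}): belief = {posterior_mean(alpha, beta):.3f}")

# A frequentist starting from nothing would need far more at-bats before
# an estimate near 1/3 became trustworthy; here the prior does that work.
```

Each hit or miss nudges the belief slightly, but the estimate never strays far from the plausible range the prior established — which is why so few observations are needed.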

 

Etienne Nichols

00:24:05.450 - 00:24:34.050

Okay, that example really makes sense. It really helps me kind of quantify in my own mind what you're talking about here. This is fantastic.

 

Another thing I'm thinking of is, okay, so let's say I transition as a medical device company to this new way of approaching risk and analyzing probabilities. How do I know it's really helped me? Now, I already know — you said, and I wanted to kind of highlight this — that just guessing qualitatively is actually worse than if you didn't do anything at all. Or, I don't know, I may be misquoting.

 

Tyler Foxworthy

00:24:34.050 - 00:24:35.730

Exactly. No, no, you're not.

 

Etienne Nichols

00:24:36.370 - 00:25:12.450

Okay, so your example made me think of something they taught us in engineering school. They said anyone can extrapolate; engineers interpolate. And so, you use the bounds that you have.

 

So, okay, let's say I adopt this method. You already mentioned the nuclear and aerospace industries use it, which obviously they have — there's an expectation that they never fail. And obviously, okay, at 99.9% there is that 0.1% that could fail.

 

So, we know that's a possibility — material that gets through, all that sort of thing. However, when we start applying this to the medical device industry, what's your pass/fail on this?

 

What's the change in trajectory for the medical device industry?

 

Tyler Foxworthy

00:25:12.930 - 00:26:52.500

A couple of things. Most importantly, quantifying and identifying where there's uncertainty.

 

So, you'll notice that I'm sidestepping statements about accuracy because it really ultimately depends upon the data that they're bringing to the party. Right. So ultimately, understanding where there's uncertainty and where there is risk is going to improve outcomes over time, right? Yeah.

 

And so being statistically rigorous about the process is going to improve outcomes because you have improved the process.

 

So, I think that, you know, the extent to which companies can leverage the data that we're providing and the process that we're providing — continue it, add to it, and build upon it within a post-market context — will dictate what they get out of it at the end of the day. But to your point, you know, these processes have been iterated on for decades at this point.

 

So, there's a road map here if you put in the work and you put in the effort. You know, to be clear, there's no world in which we would ever advocate for a copy-paste-modify approach to this.

 

Like we're bringing data, we're bringing process, but it's still incumbent upon every company and every product developer to do their homework and to add in the additional context and knowledge.

 

Because you and I both know that the companies that are making these devices and the engineers who are building them and the clinicians who are advising the process know more about their device than anybody else in the world. Right.

 

So, we want to give them the tools to be able to reason about risk and to leverage the data that's there, use that as a jumping-off point, and give them that very prescriptive process for, you know, how to make the most of their data. But ultimately, it's just a jumping-off point.

 

Etienne Nichols

00:26:52.660 - 00:27:23.580

I don't want to minimize that because it almost feels like when you say it's just a jumping off point, you're like, well, that minimizes the impact.

 

But I want to go back to that baseball player for just a second. Think about how many times you'd have to watch that guy using the frequentist method, I believe you said. Before, you have that baseline, and then you're going to actually have your batting average. But how much work did it just save you?

 

I'm guessing, if I'm honest with myself, that looking for somebody to take my job and do it for me is probably not realistic, but doing a third of my job is pretty impressive. So that's really interesting.

 

Tyler Foxworthy

00:27:23.740 - 00:29:40.940

And there's other questions too that don't require FDA data at all, which are still complex and nuanced to answer unless you have a lot of experience in statistics. Which, you know, most of the medical device market is SMB.

 

I think something like 85% of companies have fewer than 20 employees.

 

So, you think about a company of that size: expecting them to do everything they're already doing and then learn Bayesian statistics in order to apply these approaches by themselves, that's just not feasible. Even basic questions, like, say you did a clinician survey asking about risks and probabilities and severities for certain outcomes.

 

How do you come back with an average? How do you average those out?

 

So, if you had 10 physicians give you 10 different probabilities associated, let's say, with a particular post op complication, what's the right answer? What's the correct average? What's the confidence interval around their probabilistic risk assessment?

 

Bayesian methods and some of these other techniques that are kind of under that umbrella do have very established mechanisms for doing that.
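To make the physician-survey question concrete, here is a minimal sketch of one standard, textbook way to combine expert probability estimates: average them in log-odds space rather than averaging raw probabilities, which behaves poorly near 0 and 1. The physician numbers are made up for illustration, and this is not a claim about how Greenlight Guru's platform actually does it.

```python
import math
import statistics

def pool_expert_probabilities(probs, z=1.96):
    """Pool expert probability estimates in log-odds space.

    Averaging raw probabilities distorts estimates near 0 or 1;
    averaging log-odds is a common alternative for opinion pooling.
    Returns the pooled probability and an approximate 95% interval.
    """
    logits = [math.log(p / (1 - p)) for p in probs]
    mean = statistics.mean(logits)
    se = statistics.stdev(logits) / math.sqrt(len(logits))
    inv = lambda x: 1 / (1 + math.exp(-x))  # logistic, maps log-odds back to probability
    return inv(mean), (inv(mean - z * se), inv(mean + z * se))

# Ten physicians' (hypothetical) estimates of a post-op complication probability
estimates = [0.02, 0.05, 0.03, 0.04, 0.10, 0.02, 0.06, 0.03, 0.05, 0.04]
pooled, (lo, hi) = pool_expert_probabilities(estimates)
```

Because the logistic transform is monotonic, the interval endpoints bracket the pooled estimate, which here lands near 4%, between the most optimistic and most pessimistic clinicians.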

 

And that's something that our platform is providing. It's giving what I'll call a paint-by-numbers approach to answering those types of questions, which, if you just gave them some data and a marker and a whiteboard and said, try to figure this out, most companies would struggle with. Frankly, most people with even undergraduate degrees in statistics would struggle with that problem.

 

Because it's not really in that common space of problems that people solve every day. It's not just computing a mean, a median, or a standard deviation. You know, there's a lot of nuance to those problems.

 

So then think about: okay, that's problem A. Problem B is, hey, we've got all this clinical data coming from these 20 different journals and previous related clinical studies that have been done. How do we blend those probabilities together? Oh, and then we've got all this data from the MAUDE database. How do we leverage that?
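One textbook way to blend estimates from multiple published studies is fixed-effect inverse-variance pooling, the workhorse of classical meta-analysis. The study numbers below are invented for illustration, and this sketch is not a description of the platform's actual method:

```python
import math

def inverse_variance_pool(estimates):
    """Fixed-effect meta-analysis: pool (rate, standard_error) pairs
    from independent studies, weighting each by 1 / SE^2 so that more
    precise studies pull the pooled estimate harder."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * r for (r, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))  # pooled estimate is tighter than any single study
    return pooled, pooled_se

# Hypothetical (complication rate, standard error) pairs from three studies
studies = [(0.04, 0.010), (0.03, 0.020), (0.05, 0.015)]
rate, se = inverse_variance_pool(studies)
```

The pooled standard error is smaller than the best single study's, which is the whole point: combining evidence shrinks uncertainty.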

 

Going back to the baseball example, where's our starting point?

 

What's plausible from the get-go, and how do we shrink that down until we get closer and closer to what's likely to be the truth? Then we have a firm foundation, so that once we do go post-market and start collecting data, we can keep updating our beliefs, and in the long run we'll have a really good idea of what all these probabilities truly are.
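The prior-then-update workflow described here is the shape of the conjugate Beta-Binomial model from Bayesian statistics: start from a prior implied by the literature, then fold in post-market observations. The prior and event counts below are illustrative, not real clinical figures:

```python
def beta_update(alpha, beta, events, trials):
    """Conjugate Beta-Binomial update: a Beta(alpha, beta) prior on an
    event probability, after observing `events` occurrences in `trials`
    uses, becomes Beta(alpha + events, beta + trials - events)."""
    return alpha + events, beta + (trials - events)

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior from a literature review: ~3% complication rate, held with the
# weight of about 100 prior observations (both numbers are made up).
a, b = 3, 97

# Post-market surveillance: 2 complications observed in 150 procedures.
a, b = beta_update(a, b, events=2, trials=150)
posterior = beta_mean(a, b)  # 5 / 250 = 0.02
```

Each new batch of post-market data just adds to the counts, so the estimate shrinks toward the observed rate exactly as described: the prior dominates early, the data dominate in the long run.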

 

Etienne Nichols

00:29:41.100 - 00:30:08.920

Yeah, that's a really good point. I think it kind of behooves us as a medical device industry anyway.

 

This is going to sound very rhetorical, but we should be updating those risk management files. I don't know how many people do that, but in order to have accurate data you're really going to have to.

 

So, I just throw that out there. This is really good. I really appreciate you sharing all that you're working on with AI and risk management.

 

Anything we missed, or anything else you'd like to add? It's hard to sum up years' worth of work in an hour.

 

Tyler Foxworthy

00:30:08.920 - 00:31:44.000

I'm sure it is.

 

I'll just say that we're really excited about where all this is going. For us, this idea of using and bringing tools, techniques, and knowledge from other domains, forklifting them into our industry, and making them valuable, to me there's just something really intellectually appealing about that. One of the folks that I really admire is Charlie Munger.

 

So, he's a famous investor, Warren Buffett's partner. And that's really something that he long advocated for.

 

This idea of taking the best of what works from multiple disciplines, not being afraid to go outside of your discipline in order to find good ideas, and not reinventing the wheel. That's really what we're doing here. We haven't invented a new approach to risk management.

 

We're just taking the bits that work from other areas and the best practices and then adapting them for our use. And there's very rich literature in this domain too that we can point our customers to.

 

And I've got a 600-page NASA risk management manual on my bookcase back there that anybody can buy.

 

And so, I think that just by shining a light on these techniques and giving our customers a low-friction way to get started, they don't have to figure out how to implement these things from scratch; we're making it easy for them. But all this literature is out there. These approaches have been validated, adopted, and used for multiple generations now.

 

To me, that's just very, very satisfying.

 

Etienne Nichols

00:31:44.080 - 00:31:58.800

Well, it's exciting, and I can't wait to look back 10 years from now and see how the medical device industry changes, because risk management is definitely an area where we have room for improvement. I guess I'll just say that. So, it's fantastic, and I appreciate all that you're doing to improve that area.

 

Tyler Foxworthy

00:31:59.040 - 00:31:59.920

Yeah. Thank you.

 

Etienne Nichols

00:32:00.080 - 00:32:05.360

All right, well, thank you so much. I really appreciate you being on the podcast and I'll let you get back to the rest of your day.

 

Tyler Foxworthy

00:32:05.360 - 00:32:06.640

Sounds good. Thank you very much.

 

Etienne Nichols

00:32:07.850 - 00:33:53.720

Thank you so much for listening. The future of risk management is about to change.

 

Medical devices may be approaching a tipping point the way baseball did back in 2002 with Moneyball, using a more statistical approach to analyzing players.

 

If you're interested in how you can do the same thing for your medical device company or the devices that you are currently developing and really jumpstarting your risk management approach, check out Greenlight Guru.

 

We've built the foundation for a Bayesian approach to risk management, and we can show you how quickly you can get your risk management off the ground. So, check it out at Greenlight Guru. If you enjoyed this episode, reach out to Tyler Foxworthy on LinkedIn and let him know.

 

Also, I'd personally love to hear from you. Let me know, did you like it? Did you not? Reach out to me, Etienne Nichols, at Greenlight Guru, or look me up on LinkedIn.

 

Drop me a line if you're interested in learning more about the other software that we have built specifically for MedTech.

 

Whether it's our document management system, our CAPA management system, design controls in addition to the risk management, or even our electronic data capture for clinical investigations, this is software built by MedTech professionals for MedTech professionals. Check it out at www.greenlight.guru. Finally, please consider leaving us a review on iTunes.

 

It helps others find us and it lets us know how we're doing. Really appreciate it. Take care. The medical device industry is nothing if not unique. So, we built software that works the same way.

 

Greenlight Guru is the only quality management system designed by medical device professionals to meet the unique needs of medical device companies. Our cloud-based platform allows companies to bring safer products to market up to three times faster while reducing risk and lowering cost.

 

Visit www.greenlight.guru today to request your free personalized demo of Greenlight Guru.

 

About the Global Medical Device Podcast:


The Global Medical Device Podcast powered by Greenlight Guru is where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.


Etienne Nichols is the Head of Industry Insights & Education at Greenlight Guru. As a Mechanical Engineer and Medical Device Guru, he specializes in simplifying complex ideas, teaching system integration, and connecting industry leaders. While hosting the Global Medical Device Podcast, Etienne has led over 200...
