Best Practices for Clinical Evidence Management

September 22, 2022

How do you bring a medical device through the development process? What are issues that development engineers face during the different phases of the design controls process, especially when researching clinical evidence?

In this episode of the Global Medical Device Podcast, Etienne Nichols talks to Keith and Kevin Kallmes of Nested Knowledge. Following in the footsteps of their father who was a medical device inventor, the two brothers designed and developed a balloon catheter that was acquired by a major strategic.

Like this episode? Subscribe today on iTunes or Spotify.

Some highlights of this episode include:

  • Keith and Kevin focus on clinical, regulatory, and publication consulting in the data management/data analysis services industry.

  • Like other entrepreneurs, Keith and Kevin understand the importance of central clinical outcomes data in every stage of the medical device development process for a product rooted in an unmet clinical need.

  • Engineers tend to focus on the immediate task at hand. Entrepreneurs have to consider why startups succeed or fail. Rather than fixating on money, the biggest reason devices fail is a lack of product market fit.

  • Companies should look at the science before the development. Examine clinical evidence, decision-making, prototyping, and design controls.

  • Customer Discovery: Read literature and talk to clinicians about needs. They are on the frontlines of patient care and research-capable.

  • To find scientific evidence and outcomes, search for different articles and filter/screen the articles to know when you have enough relevant information.

  • Calculate risk, know timelines, perform tests, and document design history accurately or start the design process over and learn an expensive lesson.

  • Which evidence is leading to which decisions? Share and manage evidence to make sure it is usable and effective.

Links:

Keith Kallmes on LinkedIn

Kevin Kallmes on LinkedIn

Etienne Nichols on LinkedIn

Nested Knowledge

PubMed

FDA - Medical Device Overview

ISO - Medical Device Testing

GG Academy

MedTech Nation

Greenlight Guru

Memorable quotes from this episode:

“You really realize how central clinical outcomes data are to every stage of the medical device development process.” Keith Kallmes

“It’s looking at clinical evidence and convincing end users and purchasers that your device is worth the cost.” Keith Kallmes

“You should be making good decisions and good decisions should pre-exist everything in the development landscape. Design controls should come only after you’ve basically validated your need and you’ve prototyped enough that you know that you should continue.” Kevin Kallmes

“Lack of product market fit - that’s why devices fail. They’re not addressing an unmet need in the marketplace.” Keith Kallmes

“Start your design controls at concept.” Kevin Kallmes

Transcript

Announcer: Welcome to the Global Medical Device Podcast, where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.

Etienne Nichols: Hey, everyone. Welcome back to the Global Medical Device Podcast. This is Etienne Nichols, your host of the podcast. In today's episode, we got to sit down with two brothers, who designed and developed a medical device, a balloon catheter, which was then acquired by a major strategic. These guys' names are Keith and Kevin Kallmes. Keith and Kevin have a unique... Maybe not so unique story of bringing their device through the development process. In today's episode, we dive into the details of some of the issues development engineers face during the different phases of the design controls process, specifically in regards to researching clinical evidence. Kevin is a serial entrepreneur in the contract research, medical device, and clinical evidence management software spaces. He holds a JD from Duke, and founded Nested Knowledge after serving as CEO of Marblehead Medical, a neurovascular device company, which was acquired by a major strategic. Keith, his brother, is also a serial entrepreneur in the contract research, medical device, and clinical evidence management software spaces. He is now the President of Nested Knowledge. These guys are a ton of fun. They have a lot of wisdom to offer early stage development engineers. They're actually a great example of how it's possible to work with your family, with your siblings. So, anyway, I love these guys, so hope you enjoy the episode. Hey guys, welcome back, I'm excited to be with you. Today, with me, Keith and Kevin, guys... First, maybe before we just jump straight to our conversation about the clinical evidence, we can talk a little bit more about that in a minute but, you guys have a unique life, I guess, in my mind, when I talk to you guys. Where are you guys coming from today? And tell the audience a little bit about your lifestyle.

Keith: Yeah. I can take that one. We're both currently in Minnesota, where we grew up, but we're actually digital nomads. Jetting off tomorrow in the car to Salt Lake City, and Kevin's going to join me the week after in Jackson Hole. We visited 145 cities in the past 12 months. So, our lifestyle is pretty much, find a mountain, find a cool place in the American West, whether that be Montana, Wyoming, Idaho, Washington, Utah, Colorado, get an Airbnb, post up, work, hike...

Kevin: Bring wifi...

Keith: Exactly, exactly. And then ski, if it's the winter, we'll be skiing, not hiking. Yeah. It's also good for business. Lets us meet partners, clients, coworkers, more often than if we were in one location all the time.

Etienne Nichols: Yeah, that in-real-life networking is very real. That's good. I'm glad you guys are able to do that. I'm also... I'm going to live vicariously through... I think I told you that when I met you, Keith, last week at True Quality. I still haven't figured out how to live that way with my three kids. Maybe 10 years or so, we'll figure it out, but glad you guys are with us today. Today, we're going to be talking about... Well, why don't you go ahead and talk to us a little bit about the clinical evidence, maybe leading up to this conversation, how you got into this space and maybe some of your background, as far as that goes.

Keith: That is a great question. I think our backgrounds pretty much informed how we got into clinical evidence management. When I was a senior in college, I started a very boutique contract research service firm, which Kevin very swiftly joined and took over management of. That was laser focused on stroke and aneurysm. We now do, it's still in existence, clinical consulting, regulatory consulting, publication consulting, really anything in the data management, data analysis services industry, laser focused on the neurovascular space. So, that was how we got our start. Very quickly after that we wrote an SBIR for a Mayo Clinic device that our father and his trainee had actually invented. They asked us to then run the company after we got that SBIR grant. So Kevin and I got thrown into the deep end immediately on developing a medical device, running a clinical study on that medical device, getting two 510(k)s on it, launching it in the market and then finally exiting to a strategic. So we've been both on the vendor side of the clinical evidence management world and on the consumer side as medical device professionals. Seeing it from both sides, you really realize how central clinical outcomes data are to every stage of the medical device development process. So starting from square one, physician or unmet clinical need driven innovation. This is something that so many entrepreneurs talk about: our product is rooted in an unmet clinical need. Well, how do you find that unmet clinical need? You go to the clinical literature, you analyze the data, you narrow in on it. Next, when you're developing specs, you're going to prioritize based on what the clinical evidence says is the greatest unmet need. When you're designing a clinical study, if you want to be able to use that data and compare it to your competitors, you're going to need to go to the literature, find the clinical evidence they collected in their studies, and base your protocol off of that.
That's what Kevin did when he designed our clinical study. Regulatory documents, that's a no brainer, especially if you're going to Europe or Japan: you have to include comprehensive clinical evidence reviews and analysis of your product and competing products if you want regulatory clearance. And then finally, any sort of marketing claim, any sort of communication with healthcare providers or hospitals, has to again be rooted in this clinical evidence. That's called market access, or health technology assessment, or health economics. It's all the same thing: it's looking at clinical evidence and convincing end users and purchasers that your device is worth the cost.

Etienne Nichols: Yeah. I love that overview. Just a high level overview, but I might even want to go a step further. So let's back up to the development cycle, because I know we want to focus on the clinical evidence and the management of that evidence, but there's a lot to unpack within that whole phrase. So if we go back to the development cycle, maybe even to design controls, I don't know how far back we need to go. But how far does this reach back, and what advice do you have for companies who are going to have to do that? Maybe, like you mentioned with Europe, it's a no brainer, you have to provide these different things. Let's go back to the development cycle. What advice would you give them early on to make that easier in the long run?

Kevin: That's a great question. And I think there's even the question there about when should I start design controls, which I've gotten a lot from device entrepreneurs. I actually think that you should be looking at the science before you look at development. And I mean this: before you sit down and start prototyping, pre-design control, pre-prototype, that's when you should actually be examining the clinical evidence. There are several reasons for that. So if you think you have a good idea, then number one, it needs to help patients, and to help patients, it needs to address some gap in what is currently being provided to patients. It doesn't take a huge investment, you don't need to get a physical facility, you don't need to do a ton of work in order to just go out there and try to figure out, is there a need? And I think that there's more than just the clinical evidence that should inform that. You should be looking at other people's devices, you should be looking at the landscape in the medical device world before you really... Physical infrastructure is expensive and prototyping is hard. Those are tasks that take sourcing, they take weeks, they take... There's just a lot of inputs that go into prototyping that you don't actually need in order to validate the need. There's actually no way at the bench to prove that a device is needed, just to prove that a device works.

Etienne Nichols: Yeah.

Kevin: So I'm a huge proponent of do easy things first, do cheap things first, but also do important, big research tasks before you're going to spend a lot of money actually moving forward with the program. So I actually think clinical evidence should pre-exist really any part of the development cycle. Because that's decision making: you should be making good decisions, and good decisions should pre-exist everything in the development landscape. Design controls should come only after you've basically validated your need and you've prototyped enough that you know you should continue. You've prototyped enough that you know, I'm not going to kill this idea right after I had it. That's the sequence to me: clinical evidence and decision making, then prototyping, then design controls.

Etienne Nichols: So when you say clinical evidence that early on, what does that actually look like? I'm a product development engineer, I'm going to sit down, I'm thinking I'm going to start my user needs and so forth. But in your mind, you're saying maybe even before that, what does that actually look like when you say clinical evidence?

Keith: It's a great question. I can take that one. So, zooming out to 10,000 feet: it's very easy for engineers to get very focused on the immediate task at hand. But if you're an entrepreneur, if you're an engineer in a small company, you have to consider why startups succeed and why startups fail. And people are frequently fixated on money, on burn; obviously that's critically important. But the biggest reason, and I was just talking to a very close friend of mine about this, in the medical device industry is lack of product market fit. That's why devices fail: they're not addressing an unmet need in the marketplace. If you're a big corp, you could spend a bunch of money, and that launch fails, and you have your whole suite of products to float you to the next successful product. If you're a startup, you live and die on product market fit. So before you start prototyping, as Kevin said, you need to find an unmet clinical need in the clinical literature. So you ask about, okay, clinical evidence, you say clinical evidence. What are you talking about? The first place to start is the clinical literature. Now you might be able to find other sources as well. Those are usually harder to find, in terms of proprietary data sets or Medicare databases; you're getting into the weeds then. But anyone can go to PubMed, search in their space, find articles that report different clinical outcomes between device type A and device type B. And if you're like us and you're going for 510(k)s with predicates, then you know there will be similar devices out there, made by the people that will become your competitors. Find those and make sure that you're actually addressing an unmet patient need before you even start prototyping.
Because even if you build the best device in that class, if that class does not benefit patients, you're going to have a really hard time convincing physicians to use it, convincing hospitals to purchase it, and, if need be, if you don't have a code, convincing payers to reimburse for it. So before anything physical happens, get on PubMed, search for the products in your device class, analyze whether or not they're actually fulfilling an unmet patient need.

Kevin: If you're intimidated by what Keith just said, if you're like, what does he mean, literature? What does he mean, PubMed? Any of those things, I think it pairs really well with: talk to doctors. Before you do expensive physical things, you should also be talking to doctors about what their needs are, because they know much better than those of us who are a little bit removed from patient care. They are the only people who actually are on the front lines, seeing what patients' needs are, and they also tend to be more research-capable than, honestly, a lot of people in the engineering world. So I think that you can actually accomplish a lot of this by, okay, let me actually go into a hospital. And if it's an interventional procedure, watch the procedure. If it's not, talk to the doctor about how they diagnose, how they make treatment decisions. And also integrate into that, okay, what evidence do you use? And I don't know many physicians who are like, "Oh, evidence. That's not part of my decision making process." So they can often be a good guide on that front. So talking to doctors can be a starting point if you're not an expert there, and they can be your guide into the literature, into the clinical evidence that actually underlies their own decisions.

Etienne Nichols: So usually my job is to pinpoint things a little bit more, but I actually want to see if I can broaden what you just said when you said physicians and doctors. So I was at a conference earlier this week, talking to a few different types of people, and they're using more of a broad term, clinician. And I'm curious what your perception is: is it doctors and physicians, or are even the nurses becoming more and more important?

Kevin: You're totally right. Thanks for calling me out. I said, doctors, what I really mean is people who are on the front lines of patient care.

Etienne Nichols: Yeah.

Kevin: And to me it's not about... there's no credential that really means that you know or don't know, there's no specific... I think that everything from nurses, nurse practitioners, PAs, DOs, MDs, anyone who is on the front lines, taking care of patients and having to make decisions for, or with, patients. That's the key, because life is in their hands when they make those decisions; they're not going to base them on nothing. And so you can actually use them as a resource in your process for figuring out what could be improved there. And if you don't do that, then you're missing out on the context that your device will eventually exist in. If you don't actually understand the clinical practice and you build a device for it, and then you come in and you're doing human factors tests, and you're like, "Why are they not using it the way I intended?" It's like, well, maybe you should have checked from the start how they already practice.

Etienne Nichols: Yeah.

Kevin: Thanks for the call out.

Etienne Nichols: No, I didn't mean to call you out. I just think it's interesting. And so I was at a conference where they were talking about how nurses are more important than ever before in the decision making process. And I told them my wife's a nurse, I'm going to go home and she's going to say, "I told you so." I just think that's great. And it's good, because it's about time we address those unmet needs. The stat I heard recently was 29% of nurses are thinking about leaving the industry, and we can't really afford that right now. So that's incredible. So anyway, we can move away from that for a moment. So we talked a little bit about how to get that literature. Maybe we're not quite ready. I have another question about that: as you're going through PubMed and looking at that literature, when do you decide enough is enough? And then what do you do with that? How do you metabolize that into your design controls?

Kevin: That's a great question. And the "when am I done?" question happens multiple times as you're reviewing the literature. I tend to break down finding scientific evidence into simple steps. First step: search. So you go out and find a bunch of articles. You don't know yet whether they're relevant; you've put in the best search terms that you can with your first try, put them out there, and then you have to comb through what comes back. So step two is usually filtering. We generally call that screening in my world. But yeah, you need a step where you search and then you need a step where you filter, because not everything that comes back is going to be the evidence that you want. And that is really the key step where you have to ask yourself, when am I done? Because search is an infinite problem. There could be literature anywhere. It's getting better. The indexes are getting a lot better at finding non-predatory journals that you should be pulling information from, but you do need to keep going until you are no longer pulling in any relevant articles. And the way that I usually do that is I run multiple searches. And then as I run the fifth search, if I find only things that are in my first through fourth searches and no new information on top of them, that's when I start thinking that I'm done. So search and filter. And when you're searching, you know you're done when you're not finding any new evidence in your filter; there's no gold in the pan anymore, it's all rocks. Then really you have to examine it and say, okay, what is my actual evidence of interest? What are the devices I care about? And what are the outcomes that I actually want to track? Then really all you need to do is go into the articles that you've screened down to, find the interventions, find the outcomes of interest and pull those out. And that's a much easier question in terms of when am I done? You're done when you have the evidence that you sought regarding device performance.
Now it doesn't always go as well as you'd hope there. Very often people go into the clinical literature and they come back and they're like, "Oh, the evidence that I was looking for wasn't there." And that is definitely discouraging in some sense. But I always find that there's a silver lining there, where if there's not a ton of clinical evidence around a certain question, that doesn't necessarily mean there's not a need there. That may actually mean that there's a greater need for development there, because there aren't that many interventions that are being tested in the entire disease state. So I think on that end, you're done when you've found outcomes that are relevant to your intervention, or when you've failed to find them and you then know that it might be an area that needs development in general, where you need your first study to actually start generating evidence of whether or not a certain device or device class is efficacious.
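
The stopping rule Kevin describes, running fresh searches until a new search surfaces nothing beyond what earlier searches already found, can be sketched in a few lines of Python. This is a hypothetical illustration with made-up article IDs, not any actual Nested Knowledge tooling:

```python
def search_is_saturated(previous_results: list[set[str]], new_results: set[str]) -> bool:
    """True when a new search adds no articles beyond the prior searches."""
    seen = set().union(*previous_results) if previous_results else set()
    return new_results <= seen

# Article IDs returned by four successive PubMed queries (hypothetical PMIDs).
searches = [
    {"34001", "34002", "34003"},
    {"34002", "34004"},
    {"34003", "34005"},
    {"34001", "34004", "34006"},
]

# A fifth search that returns only already-seen articles: no gold left in the pan.
print(search_is_saturated(searches, {"34002", "34005", "34006"}))  # True

# A search that still surfaces a new article means you keep going.
print(search_is_saturated(searches, {"34007"}))  # False
```

In practice the screening (filtering) step happens between searches, so the sets here would hold only articles that passed relevance screening.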

Etienne Nichols: That makes sense. If we fast forward, so now we're into the design controls process. I'm going to just do this pretty rapid fire, but we've gotten to that point, we have the evidence, we're confident, we want to move forward with this solution. And then we get into our clinical validation, our clinical trials and so forth. What are the pitfalls that you see people getting into around that?

Kevin: Let's do some storytelling. Okay.

Etienne Nichols: Yeah. That's great.

Kevin: Actually let me do a story about design controls. We started out, as Keith said, our initial experience was helping with research, and we cared a lot about auditability, we wanted to be good scientists, we wanted to be transparent and replicable, but when we got into the device world, there's a big step. When I say design controls are expensive and take a lot of effort, the big reason is that there actually has to be an audit record all the way back to, effectively, your concept. When you're turning around a year or two or five later, when you're like, I'm going to put this on the market, you need a history that goes all the way back. And as entrepreneurs, we just didn't know at the front that that design history file needed to be structured in a way that everyone in the industry would already be able to find exactly what they needed, that had the right tests matched to the right risks, that had all the risk calculations done the way that other people had done their risk calculations. I think we underbuilt our first time around. And we ended up having to take that humble step back and start our design process over. We actually had to build a second device, because the first device's design history was inadequate. And that was a huge learning for us. That was a very expensive learning for us. So please, everyone out there, learn from us when I say start your design controls at concept. Use someone who's done it before and, honestly, someone who's probably done it at as peerless an institution as possible, because you want that to be built out way in advance of when you're going to be going to market. Your professionalism needs to pick up way... We like to think about garage engineering in our industry, but you actually have to jump from garage engineering to design-control-based engineering much earlier than people think. So that's a huge learning for us. The second part of the story, other big pitfalls, are timing.
So we were very aggressive and even optimistic, I'd say, on timelines: timelines to regulatory clearance, timelines to launch. And we didn't quite understand the intricacies of how sourcing affects your DV and V testing. It's like, oh, we just realized that we're 12 weeks behind, because we planned DV for when we wouldn't actually have the materials to get there. So having good timeline management often comes down to just giving yourself enough play; not trying to plan overly optimistic, overly interdependent timelines on every aspect of the development process really takes a lot of the pressure off of your DV and V testing, your regulatory, your write-up. The more you try to layer it... It's like, well, we're going to get our DV and V back on June 18th, and we're going to have the 510(k) done on June 20th, and then we'll submit it and then we'll plan exactly 90 days for the response from the FDA and, hey, look, we're going to be out on the market by the end of Q3. It's like, okay, that's overly optimistic, much in the same way that trying to take a garage-engineered product forward past design freeze without adequate design history is just overly optimistic. So I think that's where good planning and calling on people with good experience is absolutely necessary in the process. Keith, other stories from our device woes?

Keith: I guess this is a little bit off topic, but I think it's extremely relevant as well. Kevin mentioned talking to physicians. Talking to physicians, we characterize that as customer discovery. That's what we call it. That's what the industry calls it. I think that that is too narrow a definition of customer discovery for me, because if you're a startup and you're not trying to build a company that you're going to take public, but you're trying to be a serial entrepreneur, really your end customer is the strategic. Your physician... You're going to make a lot more money from your strategic acquirer than you will from the physician. So I get these questions all the time from entrepreneurs; probably the number one question I get is, how do I know someone's not going to steal my idea? And my dad likes to say, ideas are cheap, execution's expensive; talk to strategics early and often. We made that mistake of not talking to the strategics early enough and actually built the wrong device for our eventual customer, which was the strategic. That had a lot to do with, like Kevin was saying, the design controls, but also the final design of that device. We had talked only to physicians rather than also talking with the strategics. I think entrepreneurs are way too cagey about their ideas. Really. There are a lot of ideas out there. I promise yours is not as original as you think it is. It's your execution and your passion that is going to make it happen. And maybe your IP. So talk to those people early and often, and consider your strategic equally your customer, as you would a clinician, and not just a physician: a clinician, meaning a nurse or a physician as well. So a little bit off topic, but I also think it's relevant to the conversation.

Etienne Nichols: Well, after all my participation awards, you broke my heart a little bit that I'm not as unique as I thought I was, but that's okay. I'll get over it. You brought up something that made me curious. So can you give us a little more detail about what you did wrong? So was it you planned for machining when maybe the strategic is going to use injection molding, and you weren't geared up for that tooling? I don't know what your specific product was, but just a little bit more detail. I'm curious.

Kevin: So the two big errors were in the actual design. We went from our user needs, and actually a key part of anyone's development process, I think the most important document in your development process, is where you line up your user needs next to your product specifications and then you do a prioritization exercise. Which we did with the strategic once we were doing round two, building device number two. Going through and actually lining up: this user need is driving this product specification, and this is the product specification that we think is the most important to hit. For us, we came in thinking that we wanted to build a super robust catheter for treatment of stroke, when actually what we needed to do was build a hyper-navigable, very slender catheter for stroke. And so we prioritized basically stability of the device far higher in the list than we needed to. And the strategic came in and, when we did the re-prioritization exercise, it was actually: getting to the lesion is the most important thing here. And being able to do that through an eight French sheath to avoid groin complications is also maybe number two. So we had those way down in our product spec prioritization list. And it's because we'd never talked to them before. We designed an entire device before we did the prioritization exercise we should have done from the start, which is: let's figure out that actually navigability and device sizing is the most important spec. Our design changes were a little painful. You have to go back to the drawing board and throw out something that's effectively your baby. But the other piece of it was, I think we planned basically a startup's DV and V, and I think that's all good. I'm all for being as efficient as possible. Try to do as many tests on each device as you can, try to be clever about not using materials you don't have to, not using time you don't have to.
But if you want something to be an acquirable device, and honestly, if you want something to be tested to the current standards in terms of avoiding catastrophic failure modes, I think that having robust pre-DV is actually the biggest part that I'd emphasize there. So for design validation and verification, you really should be building out as robustly as possible, but you should also do robust pre-DV, as in run each of your test methods against some devices, so that you have evidence going into it that helps support: we're looking at burst, and we think that burst test is going to be okay. Or you figure out along that pathway that your burst testing has actually found a weak point in your device. Then you can go back and alter it. We actually didn't know the term pre-DV when we started out. So I think that including that robust pre-DV, to give that confidence going into the DV and V testing, and then also to help you identify weak points in your own design process, is a huge part of it that's overlooked by overconfident startups that say, "Oh, well, testing's a formality. Testing is just to show that I'm as genius as I thought I was." I think that's a really important, humble moment.
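
The document Kevin singles out, user needs lined up against the product specifications they drive and then ranked, can be modeled as a small traceability table. The Python sketch below is illustrative only; the needs, specs, and priorities are invented, loosely echoing the catheter example:

```python
from dataclasses import dataclass

@dataclass
class SpecTrace:
    user_need: str     # clinical need driving the specification
    product_spec: str  # measurable specification it maps to
    priority: int      # 1 = most important to hit

# Hypothetical prioritization after the strategic's input: navigability
# and sizing outrank the stability spec the startup originally led with.
traces = [
    SpecTrace("Reach the lesion in tortuous anatomy", "Tip navigability / trackability", 1),
    SpecTrace("Deliver through an 8 French sheath", "Maximum outer diameter", 2),
    SpecTrace("Withstand repeated inflation", "Burst pressure margin", 3),
]

# Review the matrix in priority order, as you would with a strategic partner.
for t in sorted(traces, key=lambda t: t.priority):
    print(f"{t.priority}. {t.user_need} -> {t.product_spec}")
```

The point of the structure is that re-prioritizing with a partner is a one-line sort, while discovering the ranking after design freeze means rebuilding the device.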

Etienne Nichols: And I want to add one thing there and I'll let you go in a second here, Keith, but in your pre-DV, you mentioned the product, and I think you implicitly mentioned this, but you're testing the product to make sure that you're not going to blow it out, maybe to go back to the drawing board and redesign and shore things up, but you're also testing the test.

Kevin: Exactly.

Etienne Nichols: You want to make sure that you're testing the right things.

Kevin: Exactly.

Etienne Nichols: And so I agree with you, you've got to make sure that both of those things are... What were you going to say, Keith? Go ahead.

Keith: All I was going to say is entrepreneurs are a necessarily over-optimistic bunch. I think in order to make the plunge, you have to have that personality type, but you can de-risk your optimism with a pre-DV test. Also, exactly like you said, testing the test. Which people don't seem to think about. We randomly picked tests, in a way, just because we looked at how other people were testing it, what we thought was recent...

Kevin: What does the ISO say?

Keith: And we never tested our tests on the first one. So 100% agree, equally important to testing your device is testing how you're going to test your device. And even the data collection mechanism. How are you collecting that data so you can analyze it? How are you collecting that data so you can share it? This stuff is really boring. No one's thinking about how are we going to share this data with our collaborators, but it's also important.

Etienne Nichols: Yeah. And so let's go ahead and go into that, because probably by the time this is released, we'll have a title around clinical evidence and clinical evidence management. When you got to that point and started running those validation studies and things, how were you managing that evidence in a way that was usable and effective?

Kevin: That's a great question. And I think that it's all about the chain of which evidence is leading to which decisions. So for us, the bad way, the first time, was we just looked at the 510(k)s of predicate devices and we saw the tests that they did. We were like, that looks like a good test. Or we went to ISO standards and looked in the appendices to see which tests they recommended for a certain device class. I think that gets you the baseline. To me, that's like you're deriving evidence from authority, effectively. You're basically saying, hey, I trust that the device developer who came before me found all the right tests for this device type. Where I actually think that should be driven from is the risks. You should be testing your device with respect to the risks, the risks of patient harm. And I know this is part of the process. You do a whole matrix of the likelihood of a certain risk and how problematic it may be, but where do those risks even come from? How are you populating your list of risks? And I think for a lot of people, it is like that, go to authority. It's a catheter, so the sorts of risks we're looking at are vessel wall damage. But I think that a much more robust strategy is to go to, and I know a lot of people do go to, the MAUDE database, which has some level of, I think, very diverse reporting. I think that's really good, but it doesn't have as structured, data-driven reporting. It's much more qualitative and descriptive. It's good to read through those things, but if you actually want data on what the risk is, how common it is, and what patient outcomes it leads to, that's where, to me, your clinical evidence should slot in in the second position. So the first use of clinical evidence is to determine if there's a need. The second use is to determine the risks.
And so when you're actually putting together your trace matrix, and you're going to have your list of risks that you're going to test against and line up, your matrix should show the risk that you're addressing, how common it is, how it affects patients, and then it should have the test attached at the end there. So you should drive through to your test methods from the risks that you find. That's effectively, to me, the process you should be going through.
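The risk-to-test flow Kevin describes could be sketched as a simple data structure. This is a hypothetical illustration only; the risk names, rates, and test names below are made up for the example, not taken from the episode:

```python
# A minimal, hypothetical trace-matrix sketch: each entry links a clinical
# risk (sourced from the literature or the FDA's MAUDE database) to the
# bench test intended to address it. All names and rates are illustrative.
trace_matrix = [
    {"risk": "vessel wall damage", "rate_pct": 2.1,
     "patient_impact": "dissection or perforation",
     "test": "trackability and tip stiffness test"},
    {"risk": "balloon rupture", "rate_pct": 0.8,
     "patient_impact": "embolized fragments",
     "test": "burst pressure test"},
    {"risk": "coating delamination", "rate_pct": 0.3,
     "patient_impact": "particulate embolism",
     "test": "particulate evaluation"},
]

# Drive test planning from risk rather than from precedent alone: sort by
# reported rate so the most common harms get tested (and pre-DV'd) first.
for row in sorted(trace_matrix, key=lambda r: r["rate_pct"], reverse=True):
    print(f'{row["risk"]} ({row["rate_pct"]}%): {row["test"]}')
```

The point of the sketch is the direction of derivation: each test exists because a documented risk demands it, not because a predicate 510(k) happened to include it.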

Etienne Nichols: That makes sense. That's cool. So once all that's done and maybe I should ask when you guys got to that point, when you were running those tests, did you have any other failures that really stand out in your mind? I love the success stories, but I actually think I like the failure stories more sometimes. Because we don't always talk about those.

Kevin: There's some interesting stuff there, and not to get too far into things like the fundamental nature of chaos, but we definitely found that aging tests were something that throws chaos into your process. You're effectively trying to model entropy on your own device, usually using radiation. You're irradiating your devices in order to simulate entropy. That's an intense exposure, so it's an exposure of any problem. And the other thing about aging tests is that they're more impactful when they fail than other tests, because you waited three months or eight months or whatever it was to get that aged device. And so if anything goes wrong, you're going to have to wait a similar amount of time to redo it. So I think that aging is something that entrepreneurs also don't think about enough, and we definitely had failures around: the device performs great when it's new, but once you've irradiated it a ton and exposed it to as much entropy as you can, it doesn't do as well on key tests. And that means that you have to actually build in a lot more... Like, when you're doing your pre-DV, if you're approaching failure on anything, you should expect that chaos could throw you into it when you actually are going to be doing aged unit tests as well.

Etienne Nichols: Yeah. Okay. But you bring up a good point, because if you're using rubber or whatever polymer, I suppose even sterilization could increase that cross-linking and make things stiffer than you might expect. So if you're using gamma and then the strategic is going to move towards EtO... I'm a little outside my...

Kevin: Yeah. You're giving me PTSD here.

Etienne Nichols: Yeah. So, okay. No, that's really good. That's cool. I love hearing these stories about people who went through it, and you succeeded and now you're on the other side, maybe scathed a little. But what else, maybe specific to your clinical evidence management or really any of those things, what other advice would you give to early-stage startups who are working through design controls and so forth? Anything come to mind? One question that I get a lot is about establishing those user needs. And I love that you pointed out that the strategic actually sat down with you and said, let's prioritize these. It made me think of the diagram, is it the jobs-to-be-done diagram, where you have impact over effort? I don't remember the exact terms right now. And you want to be on the top right; those are the ones you have to do. How much effort will it take, and what impact will it have? It sounds like they probably had that in mind when they sat down and looked at those user needs: what are we actually trying to accomplish here for our indication? So did you have any issues or struggles as you were formulating those user needs? And Keith, it looked like you were about to say something; feel free to sidestep that question if you like.

Keith: No, I think we should move towards the question you just asked. I was going to say, having the correct flow where you're analyzing, again, the unmet clinical need, which will then inform the unmet user need. So the question is, oh, well, X type of device outperforms Y type of device, why don't you use X? That is something you discovered in the clinical literature, and then you discern the unmet user need through customer discovery: "Yeah, I've seen that clinically, I know, but it interrupts my workflow," or "I find current devices to be burdensome to use," or "Oh, it doesn't work with my favorite adjunctive device." You'll get the unmet user needs by bringing the unmet clinical need to the clinician who uses that device. You then translate the unmet user need into specs. And if you've gotten enough customer discovery data from your eventual end users, you should be able to do a prioritization exercise of, okay, well, these are the unmet user needs that are informed by the unmet clinical needs, let's map them out. The last thing I'll add, though, is that especially in physical industries, it's always trade-offs. So you're never solving a problem; you are weighing problems against each other. And if you're able to bring concrete data... Like for our device, we interviewed 80-some physicians and we actually asked them to rank...

Etienne Nichols: Yeah.

Keith: ...what they saw as clinical needs. And we analyzed the data and we said, okay, these are the top specs we need to focus on that will meet the unmet user need, and these we can let fall by the wayside if need be, due to trade-offs. So that's just the flow: unmet clinical need informs unmet user need informs spec elements.

Etienne Nichols: Yeah. That's a great point. Go ahead, Kevin.

Kevin: Yeah. I also like your graph that we're drawing in our heads, where it's impact and effort on the X and Y axes. The axis that I'll add for devices is a time axis. You should have a Z axis going back into it. And I'm not talking about effort covering the amount of time input. I'm talking about the amount of time lagged if you don't start it right now. So figuring out what you need to do early so that you don't end up needing it later and not having it. The things that are in the top right and also very, very far along that axis, in terms of lag between when you need to start it and when it becomes due, that is huge. And that's part of that planning process where I think if you're following good design controls and you have a process where you're going to do pre-DV and then DV through it, you naturally will plan better around it, but there's still that open uncertainty of what to prioritize. And I think when you're saying what to prioritize, it's impact over effort over time.
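Kevin's three-axis prioritization could be sketched as a small scoring exercise. The task names, scores, and weights below are entirely hypothetical; the point is only to show how a lead-time axis (think accelerated-aging studies, which block you for months if started late) changes the classic impact/effort ranking:

```python
# Hypothetical prioritization sketch: the classic impact/effort 2x2 plus
# a third axis -- lead time, i.e. how long you wait if you don't start now.
tasks = [
    {"name": "accelerated aging study", "impact": 9, "effort": 4, "lead_time_months": 8},
    {"name": "burst pressure pre-DV",   "impact": 8, "effort": 3, "lead_time_months": 1},
    {"name": "label redesign",          "impact": 3, "effort": 2, "lead_time_months": 0},
]

def priority(task):
    # Higher impact and longer lead time raise priority; effort lowers it.
    # The weights are an arbitrary illustration, not a validated formula.
    return task["impact"] - task["effort"] + 2 * task["lead_time_months"]

for t in sorted(tasks, key=priority, reverse=True):
    print(t["name"], priority(t))
```

With any reasonable weighting, the long-lead aging study rises to the top even though its immediate impact/effort ratio is similar to faster tests, which is exactly the planning effect Kevin describes.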

Etienne Nichols: That makes sense. That's really cool. I'm trying to think of one other question that I had, and it goes back to the flow that Keith was talking about, determining that unmet clinical need and then fleshing out your user needs. I like how you mentioned life's a series of trade-offs. You're not going to solve every problem; you solve the problem that makes the most sense for your population. And then you have to validate that user need. So you're coming full circle and building out that evidence that you've done what you said you were going to do. I don't know if you have anything to add on that, but I thought that was a good flow to have visually.

Kevin: Nothing there. But there is the next step and the next complaint that I have.

Etienne Nichols: Okay.

Kevin: And in this one, it's actually not a complaint about overly optimistic startup entrepreneurs. It's actually a complaint about researchers, which is funny. So let's switch over and stop criticizing ourselves and start criticizing...

Etienne Nichols: Yeah.

Kevin: ...Quickly. The next step... So after we've already talked through user needs, product specifications, just the early design controls, pre-DV, DV testing. Next, let's say that you need to do a clinical test on your device in order for it to get into the... Whether... That could be an IDE or it could be post-market. You are planning a clinical study. You will find, and I pretty much guarantee this across disciplines, which is hard to do, but I think it's that deep of a problem, you're going to look for some sort of certainty that if your device has the effect that you think it will, you'll have a positive trial. And that is not a true assumption in most cases, because there's incomplete information out there. And the challenge there that researchers, I think, have set us up for is that power analysis has to be based on very careful matching of the actual data elements that you're going to be gathering as your measures of treatment effect. And let me break that down; I hate to use jargon without explaining it. By data element, I simply mean a variable that has context. So it has an intervention that was applied on a certain time scale with a certain follow-up, and it has a certain statistic type, like a mean or a median or a number of patients. So when you are establishing what you want your endpoints in your clinical trial to be, you cannot guarantee that the researchers in the literature have actually used the same data elements that you want to use. And so your power analysis could be complicated by having to basically say, well, they used the endpoint of shrinking hematoma at 90 days, where it gets to less than 10 millimeters; we weren't going to use that exact endpoint. How can we change their endpoint enough and try to calculate on top of it? That is a huge problem that you need to identify.
Again, this is one of those where you identify early. Before you even plan your clinical trial, you need to adjust for the fact that researchers will not necessarily set you up with perfect endpoints in perfect trials that were previously run with the exact population size that you're going to need in order to actually show the effect that you want. So go to the literature early, identify the data elements that you'll want to gather as your safety and efficacy endpoints, and see if they're actually reported the way that you think they should be. If not, you actually need to hop on that immediately, because you need to do an exercise in harmonizing the data elements in the literature before you go and assert to the FDA that you perform well with respect to that data element. If you're going to bring an endpoint to the FDA, it had better be tested in the previous literature and it had better be comparable, or, number one, you're not going to be able to predict whether your power analysis is actually telling you that your treatment effect will be shown, and number two, the FDA isn't going to be okay with the way that you are reporting your data compared to other people. So, complaint: heterogeneous data elements, not well reported and not necessarily well used in previous trials, that you, as a device entrepreneur, as a clinical affairs lead, need to identify early. And really, it is your responsibility to harmonize the data in a way that you can use it to inform your trial design and then turn around and use that information to inform regulatory bodies or whoever you're sending that data to.
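Kevin's point about power analysis depending on which endpoint you borrow from the literature can be made concrete with a standard two-proportion sample-size formula (normal approximation). This is a generic textbook calculation, not anything from the episode, and the event rates below are hypothetical; it just shows how sharply the required enrollment swings when the comparable endpoint shifts:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided two-proportion z-test
    (normal approximation). p1 and p2 are the expected event rates."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # quantile for target power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical rates: if the literature's endpoint shows a 20% vs 10%
# event rate, versus a slightly different endpoint at 15% vs 10%, the
# required enrollment per arm changes by a factor of more than three.
print(n_per_arm(0.20, 0.10))
print(n_per_arm(0.15, 0.10))
```

This is why harmonizing data elements matters before protocol writing: a power analysis built on an endpoint the prior trials didn't actually report is built on an estimated effect size you can't defend.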

Etienne Nichols: So when you talk about those researchers, maybe it'd be interesting to hear your story. Did you go to a CRO or how did you go through that? What involvement did you have in setting those tests up?

Kevin: So we used a CRO for actually running the trial, and I think they did a tremendous job. And I actually think, with the CRO industry, I'm really impressed with what people have done in terms of putting together HIPAA-compliant databases that can be custom-built to your needs and then decommissioned when they're done. I'm very impressed with that. I will say that I think there are design issues. And I actually found that in my CRO experience, the piece that I didn't get enough help with was actually designing my trial. I did a ton of the actual protocol writing for my own trial, and I didn't feel particularly qualified to do it, but I also didn't feel like the support was there for that whole process of figuring out, again, what comparable interventions were, how did they measure themselves, what were the primary safety and efficacy outcomes of those comparable studies, what was the effect size that they saw, what population were they using. So all those clinical design inputs, not device design inputs, but things that should go into your study design, we had to do a lot of that ourselves. I know that Greenlight is offering some other services there. And I think that, from my research on this, if I were to do it again, I would definitely do a design where I'm working with SMART-TRIAL or similar to actually establish that design in advance of building my database. But that wasn't exactly the experience that I had.

Etienne Nichols: And so I'm curious.

Kevin: Yeah. Design is an important part of that.

Etienne Nichols: How did you reach maybe a satisfactory outcome? You said you did a lot of it yourself. Did you work with a doctor or someone else in building that out? Or who do you need on your team, I guess, is my question?

Kevin: I think physicians are definitely a part of that, in defining, when you're saying, "I want to measure safety," okay, what are the safety outcomes that you actually need to measure? Is it major neurological events? Is it death? That might not be the proper outcome for your trial. So going to the doctors to figure out what those risks are. You can also drive them from that literature review that you did earlier in the process when you were doing user needs. Isn't that still hanging around? Don't you have that? Can't you just...

Etienne Nichols: Right.

Kevin: ...Can't you look at that and see if those are safety outcomes? So if you've done your homework with the physicians in step one, user needs, then I think you already have a lot of good information. The other piece, of course: I think that biostatisticians are literally worth, I'm not sure what gold is these days, but probably more than their weight in gold in planning these trials, or at the very least statistical tools. So again, if there's a group that you can work with that has statistical tools, or that has good biostatisticians, that's going to help you figure out: we have our safety outcomes; okay, now what's the actual population we need to treat in order to show that we're not actually leading to this risk or that risk?

Etienne Nichols: Yeah. Great. I thought this was really informative. I appreciate you guys coming on. Anything else you want to add as we sign off here, any other thoughts you want to impart?

Kevin: Yeah, I would basically ask any device entrepreneur, or anyone who's working in the field of translating user needs to devices: if you are actually having that problem of, I'm not sure what the patient needs, what the physician needs, I'm not sure what the risks should be in my design trace matrix, I'm not sure what the endpoint should be in my clinical trial, if you have any of those questions, I would say we're actually good people to talk to about that. You can find our information on our site, so go to www.nested-knowledge.com. And we not only have Keith and me as people who can help with that, we have been spending the past three years building software to help with those exact issues. So you can even jump in, literally go to the site, sign up, and you'll be led right through PubMed the way that you would be by a physician or a medical librarian. You'll be able to filter down to the studies that you're interested in, find those risks, and then also have a great support team if you end up having questions along the way. So really, just an invitation for those who are following in our footsteps: use our experience and also use our software, and help yourself out.

Etienne Nichols: Yeah, well, cool. We'll put links in the show notes so that people can find out how to get ahold of you guys and so forth. And so where are you guys off to next in your digital nomad journey?

Keith: I'm leaving tomorrow for Salt Lake. Big fan of Salt Lake, beautiful mountains; you can hike out your back door if you're in downtown, basically, it's crazy. Then we're doing Jackson Hole, with the Tetons. I think Yellowstone is pretty inaccessible right now, but the Tetons are still gorgeous, Western Wyoming, and then we're hoping to do Montana as well. But just in general, in terms of the accessibility of natural beauty, there's really nothing that beats the American West. You can go from spectacular canyons to mountain forests, to gorges, to world-beating peaks. I'm shocked, when I talk to people about it, that more of them haven't done it, haven't seen it. But it's super accessible.

Etienne Nichols: Yeah.

Keith: And you can sleep free in National Forest. So why wouldn't you?

Etienne Nichols: Yeah, well, I guess I'll add a few more things to my bucket list. Maybe my kids' bucket list, so that I can make an excuse there too. So that was good. Great talking to you guys. I really appreciate it. For those of you who've been listening, you've been listening to the Global Medical Device Podcast. If you have any questions, feel free to reach out to podcast@greenlight.guru, or reach out to Keith or Kevin; look at the show notes and see how you can get a hold of them. Thank you so much. And we will see you all next time.


About the Global Medical Device Podcast:


The Global Medical Device Podcast powered by Greenlight Guru is where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.


Etienne Nichols is a Medical Device Guru and Mechanical Engineer who loves learning and teaching how systems work together. He has both manufacturing and product development experience, even aiding in the development of combination drug-delivery devices, from startup to Fortune 500 companies and holds a Project...
