In today's conversation on the Global Medical Device Podcast, host Etienne Nichols engages with Dr. Maria Nyakern on the burgeoning integration of AI in medical devices.
They delve into the European AI Act, discussing its impact on innovation and the critical role of trustworthy systems in clinical research. The episode also touches upon the challenges and opportunities AI presents in MedTech, from data integrity to job displacement, and the human-centric approach to technology.
Interested in sponsoring an episode? Click here to learn more!
Like this episode? Subscribe today on iTunes or Spotify.
Key Takeaways:
- Increased integration of AI in clinical research.
- The rise of wearable health monitors.
- Advancements in surgical robotics technology.
- Stay updated on AI regulations.
- Invest in trustworthy AI systems.
- Leverage AI for cost-effective data generation.
- AI will play a larger role in personalized medicine.
- There will be a push for global AI data sharing.
- Expect AI to drive faster, more accurate diagnostics.
Key Timestamps:
- [00:00:00] Introduction to the podcast and topic of AI in medical devices
- [00:03:25] Greenlight Guru's streamlined product development for MedTech
- [00:05:15] Dr. Maria Nyakern's background and entrance into AI
- [00:07:30] Discussion on the European AI Act and its significance
- [00:13:45] Comparison of US and European approaches to AI regulation
- [00:18:55] The intersection of MedTech experience and AI governance
- [00:23:10] The importance of data quality and integrity in AI-driven clinical research
- [00:29:00] The human aspect in AI development and clinical trials
- [00:35:10] Ethical considerations and the future of AI in MedTech
Links:
Memorable quotes:
- "AI systems...will be more cost-effective and less of a hurdle for companies to generate data sets for subsets of patients." - Maria Nyakern
- "Trustworthy AI systems must be worthy of humans." - Maria Nyakern
- "Embrace innovation with Greenlight Guru." - Etienne Nichols
Questions for the Audience:
- Which AI-driven MedTech advancement excites you the most, and why?
- How do you envision healthcare changing in the next decade with the rise of MedTech innovations?
We value your thoughts - if you enjoyed this episode, would like to give feedback, or have suggestions for future episodes, please email us at podcast@greenlight.guru
Sponsors:
Today's episode is brought to you by Greenlight Guru, the only MedTech-exclusive Quality Management Software designed with the direct input of industry insiders. Dealing with cumbersome product development cycles? Facing the maze of regulatory compliance? Greenlight Guru is your guiding light. Our comprehensive software suite not only streamlines your processes but also helps prepare for the complexities of FDA and ISO compliance with ease.
Built-in quality management ensures your focus stays on innovation and speed to market, while our real-time dashboards provide clear visibility into every stage of the product lifecycle. It's time to accelerate your path to market, minimize risk, and release products with confidence.
And for our Global Medical Device Podcast listeners, Greenlight Guru is offering an exclusive, guided walkthrough of their groundbreaking platform. Transform the way you bring devices to market and stay ahead of the MedTech curve. Visit greenlight.guru today and mention this podcast for your special offer. Embrace progress with Greenlight Guru.
Transcript
Etienne Nichols: Hey everyone. Welcome back to the Global Medical Device Podcast. My name is Etienne Nichols. I'm the host of today's episode. Today, I'm really excited to have Maria Nyakern with us. Dr. Nyakern has experience in the medical device industry as a clinical research scientist and entrepreneur across the spectrum of early stage startups to large multinational organizations. In 2017, Maria founded ÅKRN Scientific Consulting, a European medical device CRO.
As CEO, she led a 50 plus team of consultants supporting manufacturers in the regulatory and clinical development of medical device technology. ÅKRN Scientific Consulting was acquired by NAMSA in 2022, and since then, Maria has sort of developed an interest in AI, which kind of leads us to the topic that we're going to be discussing today, and I'll let you kind of speak to how that interest has developed. But today we're going to be talking about AI and how it affects medical devices and kind of just the entire geopolitical world that is surrounding AI as it pertains to medical devices.
So, Maria, how are you doing? And welcome to the show.
Maria Nyakern: Thank you. Thank you so much for inviting me to this show. I'm really excited to be here today to speak to you about this very relevant topic. And today is the 12th of December.
Not sure I can say that, but we recently had some really relevant updates on the field of AI regulation, so I'm excited to share that with you and a bit how I see the regulation around AI with a specific focus on medical devices, which is, of course, my topic of interest.
Etienne Nichols: Perfect.
We don't talk about the date very often, but my traceability experts out there will now be able to determine how long it takes for me to edit and get an episode out.
So that's good. So, the thing to me, you mentioned the timeliness of this topic. It really is timely because was it last week that the AI act was released? Why don't we talk a little bit about that?
Maria Nyakern: Yeah. So, you know, basically, let me scroll back a little bit in time. In 2019, we started to see a lot of artificial intelligence software as medical devices appear on our radar, and clients that needed help to bring these through the regulatory framework in Europe.
And at that time, it seems incredible now, it was four years ago, but there was a guideline we were stumbling along with, because, of course, this fell under the medical device regulation, the medical device law in Europe.
But in addition to that, there was the release of a guideline from a European perspective. FDA had something similar. We always discussed what the topic of the moment was, which was that we needed to develop trustworthy AI systems. And this word, trustworthy, I bring that up because from a European perspective, it contains two really important terms.
It's the trust, and that you have to be able to trust these systems and that they have to be worthy of humans. So, I think that now that we have seen the AI act develop over at least since 2021, when the European Commission first proposed this legislative text, and then it goes through the parliament and the council, it's like three political stakeholders that have to agree on this.
It has continued to be refined and developed, but this word in itself, trustworthy, keeps on appearing, and it's a really important component of this AI Act. And then finally, last Friday, they agreed on it, and people either celebrate it or think that it's the death of innovation in Europe.
It really depends on which camp you are in.
Etienne Nichols: And I guess we can leave our opinions aside, but do you know what are some of the reasons someone might be feeling like it's the death of innovation?
Maria Nyakern: Well, I think it comes down a bit to how the tech industry, innovation, and the economic model have developed in the US versus Europe. We know that European policymakers are a bit more cautious and regulate proactively, rather than being perhaps more reactive and dealing with things as they appear, as in the US.
But that doesn't just relate to AI systems, it's tech in general. Of course, we know that, and it reflects the different cultures between the two continents and the way that we do business.
I work in compliance and regulations, so of course I am the wrong person to criticize any type of regulation, especially the AI Act. I think that it's a very good beginning.
Just as the medical device regulation developed, stemming from problems during the 2010s due to different scandals around the safety of medical devices, this AI Act has developed.
I see so many similarities, of course, with the medical device regulations. I think it's a pretty solid, good start. As for how this is potentially going to negatively affect innovation:
I think, as with the medical device regulation, manufacturers and developers have to adapt and do a good job of understanding the regulations and the legislation, and comply with them sooner rather than later.
So of course, if you enter late in the game, trying to get your product on the market without having understood early on in the process what this means, it's going to be much more difficult.
And then, similarly to what we have seen with the MDR, of course there was a blockage there due to resources. The pandemic came in the middle. There was a lack of resources at the notified bodies.
So, there were a couple of roadblocks that probably made manufacturers and the industry quite frustrated with the medical device regulation. But the regulation in itself, I don't see that it was something negative. On the contrary, and I think it's the same for the AI act also. I think it's a good start, and I'm perhaps extending a bit, so you have to cut me there.
Etienne Nichols: No, you're totally fine.
It's interesting because I look back at why in 2023 did AI burst onto the scene suddenly, and we could pontificate about that and probably not really pinpoint it. It's probably a lot of different things, but at this point, it almost feels like different countries want to be the best to handle it.
Whether that's the European AI act or the US now has come out with the executive order, I believe that was October.
Well, actually, let me look it up. October 30, the executive order on how the US will handle artificial intelligence. And in some ways, it almost seems like a space race to get to that, to handle the technology before it becomes the AGI, the artificial general intelligence. The idea that this AI can have human cognitive abilities to handle unfamiliar tasks, and so how are we going to handle that with a medical device itself?
One of the things that the US came out with was the predetermined change control plan,
which I think some people may be stretching to be more applicable to handle changes in software than it really is meant to be. But I'm wondering, have you seen something like that or similar to that with the EU legislation around the regulation of medical devices?
Are you familiar with that? And it's okay if not, I can.
Maria Nyakern: The predetermined, is it change control? Is this something related to minimizing bias and having robust data?
Etienne Nichols: Yeah, there's a couple of things, yeah. So, the predetermined change, they came up with their acronyms, because we all love our acronyms, PCCP. The idea is if you have an AI or ML machine learning device, then, you know, some things may change.
So, the algorithm has a certain amount of change in the future. So, what you are submitting currently is going to be on your submission, but you also submit a predetermined change control plan with the understanding that this is how we anticipate our device to change.
And so, we're going ahead and submitting that as well right now, versus doing an internal letter to file later. This is more of a, hey, this is how we anticipate it. These are the certain parts of our algorithm that are unlocked. So that's the idea there. It's not as extreme as, we're going to change and diagnose something different.
We're going to have different indications for use. It's not as wide open as a lot of people seem to act like it is, but that's kind of the idea. Certain changes, yeah.
Maria Nyakern: No, but it's true that in terms of maintaining compliance and technical documentation, this act is still not final.
The final regulatory text needs to come into place. And then of course, I assume that there's going to be also some guidance documents around that where there are gaps, actually, how do we fulfill the requirements?
But it's true that a big part of it is, as you say, that we would need to understand how this may continue to develop, and sort of do some type of gap analysis on where we believe this went and what's actually happening in the post market setting. Because one of the things that's really similar between the regulation of AI systems and medical devices is that they are two high risk, complex technologies. That's why the regulatory framework is so similar in a lot of aspects.
It's also the focus on what happens. It's not just entering the market, but what happens in the post market setting. And we all know, at least you and I know, that a big shift in the lifecycle development and the mindset of manufacturers of medical devices was that you needed to have a much clearer focus.
What happens in the post market setting? It's not just a question of getting your pre-market approval and getting your CE mark in the case of medical devices in Europe, but it's also doing this follow up along the cycle of your product.
And this is also a very strong component in regulation of AI systems. It's minimizing, of course, data bias. It's making sure that the systems are fair and that they respect data privacy policies.
And similar to the post market setting, where we are obliged to collect data on adverse events or incidents, there's also going to be this type of oversight on AI systems, reported in a publicly available database, just the same way we do with devices.
How is this oversight going to be managed? In the case of medical devices, of course, it's notified bodies. In the case of AI systems, they speak now about an AI office or an AI board.
I mean, these are just terms for the time being, but I think we now have a two-year grace period, with, I guess, an exact date in 2025 by which everybody will have to have adapted to this.
I don't know what that date is yet, but they're speaking about a two-year grace period.
Etienne Nichols: Okay, so I'm going to kind of go off into the opinion land for you for just a moment, so feel free to punt and just ignore this question and move on to a different one if you like.
We talked a little bit, before we got on, about how we sort of get off track. So, I don't mean to get us too far off track, but you mentioned something that I thought was really good: that AI regulation is similar to medical device regulation for these different reasons. And you kind of separated them a little bit, because that really is a good way to look at it. I like this categorical thinking. And AI is a technology that has its own regulation.
Medical devices is a technology that has its own regulation, and you bring those together as an AI medical device. Now you're kind of playing in two worlds. So do you see AI professionals becoming a bigger need in the medical device world, or are you going to see medical device...
I mean, this is hard to predict.
Maria Nyakern: I see that our competences as MedTech professionals, people with experience in medical devices, with clinical and regulatory experience, are highly transferable knowledge going into AI governance. Absolutely. I mean, you should know that, again, I'm speaking about the European regulatory landscape, and one of the most regulated products beyond pharma, speaking about the CE mark, is of course medical devices. So having understood that regulatory framework, it was very clear that whenever you develop a regulatory framework for another high risk, complex technology, it would be foolish not to build on your knowledge of the MDR, especially given where we came from: a directive that had holes in it and that created products that our politicians and policymakers needed to deal with. We had products on the market that were not safe and that did not perform as intended.
So, I think the policymakers bring now with them a wealth of knowledge and a culture of how they would like to develop regulations in Europe that the European audience feels comfortable with.
And you always know that there is this constant conflict, what the industry wants and what the policymakers want to protect the safety and the well-being of the citizens of the European Union.
So, since I myself come from a long background of medical device training and regulation, I found it quite natural to step into this brave new world of AI development, and to adjust and adapt to the regulatory framework that is going to be put on us.
And I think that there's a tremendous opportunity for a lot of people working in this MedTech ecosystem to sort of brush off their knowledge, take some additional courses, make sure that they understand the fundamentals of what AI means, and definitely keep a focus on this new AI Act and see how they could become one of these spokespersons or consultants, or find business opportunities.
The opportunities are endless, especially for people and folks that come from MedTech. They won't find it so difficult to create some new space for themselves in AI governance, especially, of course, when it comes to medical devices.
But I think for any of the high-risk AI systems, I think they will be able to adapt pretty easily.
Etienne Nichols: I love that. It's very encouraging, because a lot of people I've talked to are intimidated a little bit by AI, maybe the technology itself, but honestly, like you said, you have that background.
Those are transferable skills, so there's no reason to be intimidated. It's a brave new world, and you have my respect for really diving into that. So, if we kind of come full circle, your background is pretty heavy in medical devices and clinical investigations. So, what if we tie this together? What specific aspects do you think MedTech professionals need to be thinking about as these AI regulations come to play?
I don't know how we want to attack this, whether if you're developing an AI device or if you're just in general developing a device, but AI can help you do that.
What's your advice? And going either direction, choose one first.
Maria Nyakern: Yeah, no, I think where I would like to start is, of course, data quality and data integrity, because having spent so much time in clinical research and being involved, let's say, in the last phase of the development of a medical device, which the clinical investigation is sort of the culmination of when you have done your risk assessment and the risk management and whatever risk you cannot mitigate for or explain, of course you have to bring your product into a clinical trial or clinical investigation to make sure that you can mitigate for that risk and prove that your device or your product is safe and performs as intended.
So, part of the development of clinical investigations has always been the quality of your data and the integrity of your data. So, designing a clinical investigation is also a question of mitigating for bias.
And we do this in science in different ways. You know, it's. It's whether we look at male versus female or age or ethnic.
What do you call it? Ethnic.
Etienne Nichols: Ethnicity, yeah, ethnicity, yeah.
Maria Nyakern: So, it's becoming more relevant than ever now, when we are looking at clinical investigations of AI systems. And data privacy is also becoming a fundamental, critical factor to take into consideration when working with clinical data, medical data, and patients.
Of course, privacy is a huge thing, but I think we are at the same place as yesterday, multiplied by a thousand. Whatever we needed to pay attention to yesterday, we have to pay attention to tomorrow, multiplied by a thousand, because the systems that we have created are getting more and more vulnerable to attacks. As all this technology grows, we have a lot more data available, and we share all this data. That's why we are able to develop AI systems to begin with: there's so much data available. Having this huge amount of data and being able to analyze it is, of course, one of the strengths of an AI system, but those systems are vulnerable to data privacy attacks.
And also, I would say, to attacks on data integrity in general.
Etienne Nichols: Yeah. That's a good point. So, when we look at this from a product versus a process standpoint, it's interesting because if we stay on the process side, so we're using AI to help us analyze that data, I see a lot of different opportunities for that.
What are your thoughts on the product? If the AI is the product that we're starting to produce, and we're going to go through a clinical investigation with an AI software as a medical device, are there specific issues that we should be thinking about or thoughts or considerations you might have on how to fully investigate that as it pertains to the AI versus anything else, or is it really just another medical device, or are there anything specific that you think that we need to be considering as we put an AI through a clinical investigation?
Maria Nyakern: I think, of course, something that, as far as I can think, we haven't really been considering before is how we keep the human in the process,
because humans were always in the process before. Any type of tool that we worked with before, whether a hands-on device or a software, was a traditional type of tool where, of course, everything we did was human centric and humans were always kept in the loop.
Now, this is sort of an add-on that we really have to be conscious of and build in, and make sure that these systems are foolproof in terms of having human decision making in the entire clinical setting.
And I think also it will be important to.
Now I got lost.
Etienne Nichols: No, it kind of makes me go back to that word that you used at the very beginning, that word trust, because I never really thought about that before. Everybody has a slightly different perception of the facts that they are presented with based on your background, based on your experiences.
And so, to be able to trust each other, one person told me, and maybe this is relevant, maybe it's not. You can tell me, but one person told me, he's like, we can't all have a common understanding if we don't have a common background, because people talk about common knowledge and how it's not very common anymore.
Well, yes, common sense, that's the word. Common sense isn't common anymore, but it's because we don't all have a common background. So, using an AI to test something versus using an AI to be tested, they both present different challenges.
And so, yeah, I think that's kind of what we were talking about.
Maria Nyakern: Yeah, it's remarkable.
I think yesterday I saw that these large language models have learned so many things that we don't know they have picked up along the way. For example, ChatGPT is less efficient in providing you answers in December compared to May, because it has learned that in December productivity tends to go down, while May is one of the most productive months of the year.
So just by telling ChatGPT, for example, that we are in May, the output increases, while when it knows that the date is in December, the output decreases. So of course, the amount of bias that we have in clinical research is a huge thing. We already know that, on average, all the clinical data we have collected is from a white male who is 65 years old and from the Western world.
So just going into, you can imagine everything that now our language, large language models are trained on, of course, is biased towards a certain set of viewpoints, a certain set of ideas, a certain set of what is considered common sense.
But as you said, there's of course, no absolute context of what is common sense for everybody.
Etienne Nichols: Yeah, I've sort of latched onto the idea, or the wording, of AIs versus AI, because we tend to look at AI as just one big thing. A lot of the world seems to treat artificial intelligence as one thing,
when really we have AIs, different sets of data that we're using. And this is purely speculation on my part, so you keep me honest here. When I think about the bias in clinical investigations or clinical trials, there's been a big push from the FDA in the US to really eliminate that bias and to increase diversity in those clinical investigations. Will we get to a point where we'll be even more specific? Rather than having one big set of data across all the different dimensions of diversity, ethnicity, nationality, all that you mentioned, will we get more focused and have very specific clinical investigations for a specific group, so that you could have different data sets based on ethnicity or nationality? Almost going the opposite direction of widening the scope: narrowing it and going deep.
Maria Nyakern: I think there are two things to that. One, in a perfect world, we would go the direction where you are sort of indicating if we would have an unlimited set of data. Of course, we would not have to generate every time a new clinical investigation. Imagine we would be able to access all the data that's available and have an analysis, and not every manufacturer repeats what we have done a thousand times before.
So that's, of course, an ideal perspective. However, to get to that, we need all of this variety of data, and we don't have that today. So how are we going to do that? I mean, we would need to continue, because the models are only as good as the data they are fed.
And we are now like one year post launch of ChatGPT, when the word AI ends up on everybody's lips. During this year, again, don't quote me on the exact figure, but I think we are already seeing that something like 30% of what we see on the web is AI generated. And the further we go with this, it's like you're biting your own tail.
The model itself becomes corrupted by not having access to original data, but more and more of just AI-generated data. In this sense, clinical research, the area that we're in, is going to be so relevant for continuing to generate original data, because that's, of course, what's going to get us out of this bias situation that we are in. And how will we do that? We've been speaking for at least a decade about how the lack of data related to the female population or to minorities is a big handicap in clinical research, and these AI systems, of course, are going to facilitate clinical research.
It doesn't mean that we're not going to do research, but it will be cheaper to do. It will be more cost effective. It will be less of a hurdle for companies to go in and say that they're going to generate these original data sets for a subset of patients that may not have been possible before, when running clinical research without these AI tools. I don't know if that makes sense to you.
Etienne Nichols: No, it does. It's actually really exciting to hear you say that, because when you first started talking, you talked about all of the thousands of different clinical trials that are going on.
Potentially, if that data was available to everybody, maybe you could reduce that number quite a bit.
Like you said, it would be a perfect world to have an isolated set of data for this particular class of person, or not class, but type of person, however you want to define that, and then provide that to every medical device company. That's a really exciting possibility. I don't know how to accomplish that at the moment, but I guess we'll have to revisit. I started to say in ten years, but at the rate things are going, maybe in one year we'll be closer to whatever it is.
Maria Nyakern: Yeah.
I want to come back to the perspective that a lot of people, of course, are scared about job displacement, and that there's no place for humans anymore in this new AI world. But I think, if you read the text of the AI Act that we started to speak about at the beginning of our call, the noble role of the human is sort of integrated into it.
I don't know how to say that.
Etienne Nichols: Yeah, no, we're going to be going much faster than we ever had before, but we should still be at the helm. I don't know exactly what you're trying to say.
Maria Nyakern: Yeah, it is. I have to think about that quote from Einstein. We have created a society... it's something about intuition.
Etienne Nichols: The intuitive mind is a sacred gift, and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.
Maria Nyakern: Exactly. So, one of the things with the new system that Google launched, Gemini, is that they say it can now reason. Over the past few weeks, it's been like, okay, when are these systems going to be able to reason?
And that's what they say now that the Google Gemini has the capacity to do. But beyond reasoning, there's so much more to this equation where I believe that humans have a special place, of course, in this ecosystem. And perhaps it is what this quote from Einstein said, that the intuitive mind, how was it?
Etienne Nichols: The sacred gift and the rational mind is a faithful servant.
Maria Nyakern: Faithful servant. We have created a society that honors.
Etienne Nichols: The servant and has forgotten the gift.
Maria Nyakern: Yeah.
So perhaps that's something we should be mindful of and build into it: remembering that this is a human centric technology.
It will be humans that will use it. Humans will ultimately need to control it somehow. I think we are facing a new type of coexistence with artificial intelligence. And by the way, about this word we're using, I don't even know if artificial intelligence is such a great term to describe it, because then you would immediately think about what natural intelligence is. Are we natural intelligence, as in artificial flavoring versus natural flavoring, artificial light versus natural light?
It sort of already carries a negative connotation and makes you skeptical, because we are skeptical of everything that we consider artificial. And perhaps that already puts it in a negative light, when it really shouldn't have to be that way.
Etienne Nichols: Right.
I like that you brought out that quote, too, because it does make me think there does seem something intuitively different about an artificial intelligence, even if it gets to the point where it reasons.
Like, Tim and I have seen some of those videos as well, versus a human.
If you were to compare them side by side, the human has been trained on a certain set of data, right? I have my experiences. I grew up in Oklahoma, in the US, and have been different places all around the world, but not to the extent of someone else.
But then there's this AI. Does it know things that I don't know? Well, sure. But is there a human element that maybe we haven't fully teased out yet, something specific about that?
I don't know. We're starting to get into some philosophical realm, which is almost irresistible for me, so I'm going to have to pull myself back. We'll definitely put links to all of the references we mentioned: the quote, the AI Act, the predetermined change control plan if anybody's interested in learning more about that, and the executive order from the US. Maybe at some point, as these get rolled out in more detail, we can do a side-by-side comparison of the two and how things are going to differ between the US and the EU a little more specifically.
But what are your thoughts, any last thoughts for our listeners?
Maria Nyakern: Yeah, I actually do have one last thought. It's so exciting, because compared to tech in general, where women tend to be 15 or 20% or something like that, I see that there are so many interesting women taking on leading roles in AI development, especially from the US. I think it's so exciting.
And we have some really great role models, these great women, that I think both young and older professionals can look up to. And beyond this as a technology,
which I think has the potential of solving extremely complicated problems and democratizing a lot of different areas, anything from research to entrepreneurship, the fact that we also have such a new wave of female leaders in this tech space, I think it's really exciting.
And just as a last comment, I think in the New York Times two weeks ago, they had a really big article about the leaders in AI. And remarkably enough, almost none of these women were mentioned there.
And that was sort of criticized on social media. But I think we also have the chance to keep on pushing forward and highlight these female leaders.
Etienne Nichols: Yeah, that's great. Where can people find you if they want to reach out and talk to you directly?
Maria Nyakern: They can find me on LinkedIn or through my personal web page.
Etienne Nichols: Sounds good. We'll put links to those in the show notes. I may have cut you off. What's the name of the web page?
Maria Nyakern: It's my last name: Nyakern.com. That's the place to find me.
Etienne Nichols: All right, thank you so much. Maria. Really enjoyed the conversation, and I look forward to maybe revisiting next year or having more conversations in the future. I really appreciate it.
Maria Nyakern: Definitely. Anytime. Thanks so much.
Etienne Nichols: All right, everybody, take care.
Thank you so much for listening. If you enjoyed this episode, can I ask a special favor from you? Can you leave us a review on iTunes? I know most of us have never done that before, but if you're listening on the phone, look at the iTunes app. Scroll down to the bottom where it says leave a review. It's actually really easy. Same thing with computer. Just look for that. Leave a review button. This helps others find us and it lets us know how we're doing. Also, I'd personally love to hear from you on LinkedIn. Reach out to me. I read and respond to every message because hearing your feedback is the only way I'm going to get better. Thanks again for listening, and we'll see you next time.
About the Global Medical Device Podcast:
The Global Medical Device Podcast powered by Greenlight Guru is where today's brightest minds in the medical device industry go to get their most useful and actionable insider knowledge, direct from some of the world's leading medical device experts and companies.
Etienne Nichols is the Head of Industry Insights & Education at Greenlight Guru. As a Mechanical Engineer and Medical Device Guru, he specializes in simplifying complex ideas, teaching system integration, and connecting industry leaders. While hosting the Global Medical Device Podcast, Etienne has led over 200...