Why Healthcare Can't Close the AI Implementation Gap (And What to Do About It)
The question is no longer whether AI works in clinical practice; it's how to dismantle the bottlenecks that keep proven innovations trapped in academic papers instead of integrated into daily care.
In this episode, our panel of experts cuts through the noise to ask: What's actually preventing healthcare from closing the implementation gap—and what can we do moving forward? Because we need more healthcare professionals who not only recognize AI's potential but also implement it in a way that improves patient outcomes.
Hear from three voices who bring this challenge to life: a clinician-founder who's navigated the frustration of having evidence without approval, a clinician and international AI advisor offering global insight on building trust and navigating governance, and a healthcare executive confronting the daily tension between operational demands and strategic transformation. Through peer-to-peer dialogue, they challenge conventional wisdom about innovation hierarchies, experimentation, and what leadership truly entails in the age of AI.
"If you're a leader and you're not finding the time to remove yourself from that grind and think about the future and how to bring AI into your organization, you are in the wrong role."
- Edward Marx
What You'll Discover:
[00:00] Intro
[00:58] The Seven-Year Evidence Gap
[02:53] Why Culture (not Capability) is the Real Barrier
[04:22] The Operational Trap: Why There's No Time to be Strategic
[06:22] How Transparency Builds Trust in AI
[08:08] Why Fear of Failure and Over-Governance Kill Innovation
[09:02] Innovation Hierarchy: The Danger of Hiring Another Chief
[10:40] How to Remove Yourself from the Grind and Think about the Future
[11:21] Board-Level Education: The Need for Tech-Advanced Leadership
[13:40] Strategic Framework: Five Questions for C-suite AI Readiness
Most importantly, you'll walk away with a five-question evaluation framework that CMOs, CFOs, and CEOs can use immediately to assess AI readiness and drive meaningful implementation.
This isn't just theory—it diagnoses the real barriers and prescribes practical solutions.
Referenced in the show:
Transcript
Dr. Junaid Kalia:
Good morning, everyone. I'm very excited for today's episode, because we're going to talk about how clinical evidence is going to be synthesized in the new age of artificial intelligence. As usual, I'm very grateful to our experts. Ed Marx always talks about how to bravely go where no one has gone before; when I see him, I think of Star Trek. The idea is essentially adding new forms of intelligence and then figuring out how to really implement it, and his guidance is always important for practical purposes. And then of course Harvey, who brings not only US experience but international experience with artificial intelligence implementation. Now let me set up what we're going to discuss today: accelerating clinical evidence synthesis with large language models. We truly believe that clinical evidence lags far behind practice. So let me set the stage. As a stroke physician, you know, the main medication was tPA. Believe it or not, the FDA approved that on the basis of an NIH trial with, I think, only 250 patients or so. That's it. That was annoying. And then we realized it worked far better than the clinical trial results suggested. Then we got a new medication called TNK, tenecteplase. Now, tenecteplase is a bolus, so the practicality was amazing, but still to this day there are no clinical guidelines supporting it properly. I mean, they just gave an update of the results. It took them seven years to actually update the guidelines. And in the meanwhile, in the first two to three years, people like us were living with this, having to disclose to patients that it is not FDA approved for this exact indication, but we have significant evidence, and so on, and therefore I want to give it to you. That was an additional nuisance we had to deal with. And then finally, somebody did the study, that evidence was approved by the FDA, and they put it into the drug indication.
Remember, drugs have indications of use as well. So, three issues. Number one, those of us doing clinical work are producing evidence, period. What we want to do is use that clinical evidence on a daily basis to create actual publishable results and, more importantly, effect policy change. This article lays out an amazing way to go ahead and do that. So first, Harvey can set the stage: why do we lag so much in clinical evidence? Then Ed can set the stage from a CIO/CTO perspective on the current bottlenecks, and then we can go into possible solutions according to this paper. So, Harvey, go ahead.
Dr. Harvey Castro:
Yeah, so let's just break this down for people who are not in this space. You and I are physicians, and we see it a little differently. Evidence is moving really fast, but medicine is lagging behind. As a result, all this information is being thrown at us as healthcare professionals, but how do we discern whether it's any good? What he just said is that the evidence comes in quickly, but we're not acting on it for our patients until years later, because the guidelines lag. So the first question he asked me is: what's going on, Harvey? Well, I keep saying this a lot, but I really think part of it is culture, right? We are taught in medical school, and in all our professions, that there are certain steps, and when we get there, we're looking at evidence-based medicine. But the problem is there's so much data coming at us that there's no way a human being can read all of it and then implement it. And some of the data coming at us might be bogus; it might not be correct. So there's that objectivity question. If we act too fast, we may actually hurt our patients, because maybe that evidence is, quote unquote, not correct, or it's not proven, or it's biased by, let's say, a pharmaceutical company that's really trying to push it. So now we have to discern. And how do we fix that problem? Well, it goes into this article. So I'll let you expand, and I'd love to hear what Ed has to say.
Edward Marx:
So one, and Harvey touched on it, is kind of a cultural issue, and that's governance. I know we're going to do a deeper dive sometime, and we have in the past, but we'll keep coming back to governance, because that seems to be where a big bottleneck happens. Another one is that we're so focused on operations, there's no time to be strategic. If you're a practicing clinician, do you have time to go try to figure these things out? Even though it'll save you time in the long run, there's still that learning curve you have to get through. And it's the same, getting super practical now, on the hospital side, on the provider side, where I'm responsible for running a huge operation and I have all these people responsibilities as well. I don't have time for AI, although we all know, as I just said with the clinician example, that if we invested the time, we would gain it back in spades. But that hump is so big that we don't take it. We don't invest the time. So as a result, what are we concerned about every day? And as I share this, I just want to remind everyone, this is not just my thinking and my experience. Literally this week I have three dinners with tables of ten CIOs, or people who report to them, and CNIOs and CMIOs. I do this every week. I talk to people, I hear their cries, I hear their frustrations. They're just so focused on operations, and you can't blame them. There's a lot of pressure. You have to take a lot of cost out of the system. Everyone's dealing with costs. How do I take out costs? Where do you find the time to really focus on, wow, we can really improve clinical quality? And not just clinical. You know, we're very clinically focused on this podcast, but obviously there are many, many operational benefits to running an organization with AI. So that's sort of the conundrum: how do you cut through all of this and really take advantage?
And, you know, I have some ideas around that, but I've gone on a long rant here, so I'm going to stop. But certainly we can get practical: what do you do to break this bottleneck?
Dr. Junaid Kalia:
So now we're going to start back with Harvey, and then we're going to come to governance. How would you change or influence the governance, and explain to the whole body why this evidence, in this generation of longitudinal adaptive summaries for medical assessment, is important in a way that can be applied rigorously? And then of course we're going to come back to Ed, and he's going to say, OK, I'm going to make this policy tomorrow.
Dr. Harvey Castro:
Yeah, this is a really good question. Really, and I say this often, I think it's all about education, right? If I come to you and you have zero idea how AI works, are you going to trust me? If I have an MD licensed by the state of Texas and I walk into the room as your doctor, do you trust me? The answer is yes, because I've been vetted, because it's transparent. You can trace that I went to med school and residency and gained experience, you can go online and look at my reviews, and you can even go to the Texas board and verify. With AI, can you do the same? So, to your question about being transparent: look, here's this black box, and here's how it works. This is the chain of thought; it's in the article. It explains how I got to that position. By being able to do that, now there's trust. And once that trust is there, things get better. For example, if I show you the dataset behind an AI and it says this is the population it serves, and you, an astute doctor, say, wait, wait, wait, this data doesn't include this other population that wasn't part of the study, so how are you going to apply this to our hospital community? Good job. Again, it goes back to education. Now you know what you don't know. Now you know what to ask. The better we educate, I call it elevating the IQ of AI, the more trust there is. And as there's more trust, we can start implementing more and more.
Edward Marx:
So it's everything Harvey just talked about, but also you need to allow for safe experimentation. Let people go, let them try things, empower them to do things. Otherwise nothing gets done. It's okay to experiment, because that's the only way we're going to advance. But people are afraid of failure. So it goes back to the culture thing we've talked about a lot; Harvey already mentioned it. You've got to create a culture where you can take risks, you can experiment, and it's okay to take time, think about crazy things, and go do them. That's super important. And here's one final one, because we've touched on this a lot. As much as I believe in governance, and it's really important, what kills us is too much governance. That's why I said allow for risk, allow for experimentation. You over-govern things, and that's why healthcare lags.
Dr. Junaid Kalia:
Now I want to talk about that from a purely clinical standpoint: for example, how do you mandate it? Because, and I'm just going to be honest with you, the last thing Ed said is to find time. Do you really think somebody's going to find time until it's mandated? So now the question is: one option is to hire a chief innovation officer at $1.5 million per year, and then he's going to demand another $10 million team, and let's just be honest, that's not happening.
Edward Marx:
I think the worst thing, and I love all the people in these roles, this isn't directed against them, the worst thing you can do is hire another chief. All that does is create more bureaucracy, more obstacles. I mean, it sounds good, but I think we wrote about it in the book Healthcare Digital Transformation in 2020: if there's one thing that has kept us from transforming organizations with tech, including AI, it's way too many chiefs. It just doesn't work, man. I've been part of it. I've been in the mix. It does not work. What does work, and here's the thing, because you're right, Junaid, everyone's too busy, so are they going to do it? I think you just need to hire the right person. If you've got a CIO, CDIO, CDO, whatever you want to call them, and they are not doing that, then you've got the wrong person. I figured it out. I didn't have to make a 30% rule, although I did encourage that. I talked to my team about trying to figure out how to free some of your people up. We had a couple of people who were great creative thinkers. They were analysts. And I said, carve them out. Do not give them tickets or things to work on. Put them in a room alone, those kinds of people, and let them do their stuff. And they did. They came up with the most crazy, creative things. I loved it. We helped save people's lives as a result. So you can do that. That's super practical. But going back to leadership: it all rises and falls on leadership. And if you're a leader and you're not finding the time to remove yourself from that grind and think about the future and how to bring AI into your organization, you are in the wrong role. I think it's that simple.
The other thing I wanted to mention is that you have to think about the board level in these organizations, and boards are typically made up of people who do not understand technology. I'm speaking in generalities here. They are wonderful people, the smartest people, amazingly connected. I have nothing but positive things to say, but they do not know technology. And they are responsible for the overall direction. So think about it: if we're in the age of AI and technology, and the most senior part of your organization is not using AI themselves, they're not tech advanced, they're not going to know to push on the organization: what are we doing? How are we leveraging this? How are we shrinking that time gap? How are we leveraging AI to save lives? Give us some examples.
Dr. Harvey Castro:
You know, I'm blessed that I was in a leadership role, that I was able to start a company from zero employees. We grew it to 400. But part of that success, or all of it, was the team around me, the leaders around me. More importantly, we made sure to pick leaders who led by example. Case in point: I would walk into the emergency room, and if I saw a dirty bathroom, I wouldn't say a word. I'd just go clean that toilet. And then it spread through the company: oh my god, the founder, a doctor, is cleaning toilets. It set a precedent, and everybody saw that this mattered to the organization. I know that sounds minor in the AI world, but to Ed's point, we need to educate our leaders, our C-suite, our board members, so that they become the leaders and everybody understands it. The passion you're hearing in my voice is because, at the end of the day, it's not about the doctors and everything else. To me, it's about the patient. What can we do to make the best experience for that patient? And I truly believe, if you look at my TED Talks, all four of them, it's that passion to serve, that passion to give, that has made me successful. Am I looking for a paycheck? I'm not. It is truly: how can we do this? How can I educate my leaders so they can leverage AI, so that one day their organization saves a life?
Dr. Junaid Kalia:
Now, of course, I'm not going to talk about Savelife.AI itself, but we have the Hydra cybersecurity framework, we have different connections, we have workflow enhancements. We truly calculate soft ROI and hard ROI, and that's how we approach implementation. We provide an implementation plan, because we want to make sure it is a journey, not a destination. We have multimodal capabilities, and that's why we are championing AI. But again, this is another small area I wanted to make sure people have: what are your five questions? Make it simple. What five questions does the Chief Medical Officer need to answer? What five questions does the Chief Financial Officer need to answer? And what does the CEO need to answer in terms of strategic partnerships and excellence? Make it bite-sized. Everyone has a problem with time, so make sure you use everyone's time appropriately, and then you can literally score them: a partnership score. And this will provide a significant amount of what? Trust. That is the key point, right? That's what we keep saying, that trust is so important. Once you develop a strategy and a framework like this, and you are scoring it for everyone, then, exactly as Harvey said from the beginning, it develops what we call trust. So again, I had to add a few things. Now grill me, guys. I'll let Ed start.
Edward Marx:
Yeah, I think everyone who saw you put up that framework, those first few slides, is going to want copies, because it's really good. You've taken a lot of what people are struggling with and put it in a nice framework. I've said this before, but just for listeners who might bop in and out: I spend a lot of time in hospitals every week. Last week I was at, and I stay generic on purpose because I never explicitly ask people for permission to talk about their particular institution, a very high-end academic medical center, and they presented a very nice structure like the one you're talking about. Almost everyone else in that room, people from hospitals across that city, said, I need a copy of those slides. Because most people don't have the framework articulated. So bravo to you, Junaid, for articulating a great framework. I like it, I believe in it, and I love the whole partnership-versus-vendor mentality, and the honesty that, yeah, it's not always going to be roses; nothing is. But it's funny, we have that expectation when we work with vendors and partners: it's got to be perfect. What, are you kidding me? Show me one other relationship in life that's perfect.
Dr. Junaid Kalia:
Thanks, Ed, I really appreciate you sitting through this. And of course, Harvey, you're going to Ohio, and I think you'll be using some of this anyway.
Dr. Harvey Castro:
Yeah, I definitely will. I've got to make sure you give me a copy, and we'll definitely talk. Here's the beauty, right? It takes a village. You, by coming to listen, are part of our village. We want to educate, but we're not just educating; you're actually teaching us as well. It goes both ways. When you share your comments and discussions and say, hey, my institute does things this way, it gives us a minute to think, okay, I hadn't considered that factor. So now you know this is a fun village. On Tuesdays we release a condensed summary of each episode on our Substack. We always forget to say that, but make sure you follow us on Substack, follow us on YouTube, and follow us individually as well. It really helps build our community, because when you recognize us, others will say, wait, these guys know their stuff, maybe I should follow them. And at the end of the day, now that you've heard us, you know that our heart is to serve, to educate, and to elevate AI in healthcare.
Edward Marx:
We're always open to having a guest, and so reach out to Junaid, and we would love to hear your voice as well. So thank you for listening and being part of the show.
Learn more about the work we do
Dr. Junaid Kalia, Neurocritical Care Specialist & Founder of Savelife.AI
🔗 Website
📹 YouTube
Dr. Harvey Castro, ER Physician, #DrGPT™
🔗 Website
Edward Marx, CEO, Advisor
🔗 Website
© 2025 Signal & Symptoms. All rights reserved.