The Wild West of Healthcare AI: Innovation, Regulation, and the Paralysis of Caution
Healthcare AI is in its Wild West phase—messy, undefined, and full of potential.
In this episode, our panel takes a step back to ask: where are we on this path to innovation, and how do we prepare for what's next?
The conversation explores where healthcare AI sits on the hype cycle. While some are just discovering what AI can do, others are already navigating the trough of disillusionment. Our hosts discuss why innovation appears messy right now, how that's actually normal, and what the barriers to adoption reveal about the future trajectory.
"There’s no way to control the mess. So, how do you allow innovation to just happen, while not stifling it and not causing harm?"
- Edward Marx
What You'll Discover:
[00:00] Introduction
[03:33] Why controlling the 'mess' too much stifles progress
[06:41] What is the current stage of Healthcare AI in the Hype Cycle?
[10:09] Who will pay for AI?
[13:42] How to create a safe space for experimentation
From the struggle to legally define what AI even is, to the careful balance hospital leaders must strike between enabling innovation and ensuring patient safety—this episode is about distinguishing value from hype. Join our panel of experts for a peer-to-peer reality check on where we are, where we're headed, and how to navigate the uncertainty in between.
Transcript
Dr. Junaid:
Good morning everyone, grateful that you joined us. Today is going to be a lighter episode. We wanted to step back, look at the whole trajectory, and summarize for you what is really expected of artificial intelligence right now. Today's topic is: what is the current status of healthcare AI, and how do we move forward? And more importantly, how do we prepare future generations? Because, if you saw one of our last episodes, physicians who were using a significant amount of AI are at risk of losing skills. Another concern is algorithmic: this new era of AI calls for a new workforce of physicians, the algorithmic specialist. So another big point we want to discuss in today's exploration is where human-in-the-loop design comes in. Where does the algorithm end and the physician start, or is it all the way through? What level of autonomy should we expect in the future? And then, using different AI tools, I prepared a few numbers and figures for you. Lastly, this brings us back to the signal, because that's the word: we want to create value, concentrate on the signal rather than the noise, and avoid the hype. And then the symptom part, always bringing it back to the patient, because everything else gets discussed, from finance to fundraising, and we lose the symptom part, the focus on how we bring this back to the patient. So I'm going to ask Harvey to start and speak on what his expectation is of the current status and how he sees the future. Harvey brings in a Swiss Army knife of perspectives, and then Ed can tell us where the hospital systems, the CIOs and CTOs, are actually sitting, and we'll move forward from there. Go ahead, Harvey.
Dr. Harvey:
Well, a couple of things. You mentioned the retinal scan, and I have to mention something. Being an advisor for the country of Singapore, I'm really excited about what they're doing with retinal scans. Number one, being able to have that type of information, and my favorite phrase is working backwards. Now they'll be able to cover literally the entire population. So knowing their genome, knowing their retinal scan, they'll be able to put those databases together and start finding new things that the retinal scan can reveal. For example, you mentioned cholesterolemia; there are other risk factors too. I just thought that was interesting. The second thing that you mentioned that I want people to be aware of: with HIPAA and the GDPR, we are so worried about giving out our patient identifiers. What people don't realize is that a simple EKG can be traced back to the original person. Even if you take away all the identifiers, there are other identifiers now. Chest X-rays, the same thing with AI. Now we're able to go back and say it's actually this particular person or this particular race. It'll be interesting how things change in the future. Now, going back to this particular conversation, I'll say this again: we don't know what we don't know. And that's why I'm excited people are listening and that we have people like Ed and you here to understand and see different angles. So I'll let Ed continue, and then I'll pick it back up from Ed.
Edward Marx:
Yeah, it's the wild, wild west right now with AI, and that's the way innovation happens. If you trace back innovation in other industries and with other technologies, it's very natural that we get hold of something new and we want to try it and see how it works. And I think we're all doing that on a personal level. I know I'm doing it; I'm trying different things all the time. You both are great examples for me. I'm really pushing myself. I'll come back and answer the question, but it just took me to another thought. I was recently with a professor in Mexico and he was challenging us on AI. Are we really embracing AI? Have we personally embraced AI? And I thought, yeah, of course I have. But then when he said, well, give me examples of how something used to take you X hours and now takes you X minutes, I was really hard-pressed to give him valid examples. I'm trying to incorporate that now into my everyday life, to actually get deeper into it. That's the way it happens, and I think we're seeing the same thing right now. So now, answering the question: we have this great tool, and, like you showed us, Junaid, you could go to AI and find thousands of examples every week of another health system, another physician or clinician who's experimenting and trying new things. That's just the way innovation happens, but it's messy. And I think there's no way to control the mess. If you control the mess too much, that's when you stifle innovation. So there's this careful balance. From a hospital perspective, you know, as a CXO or board member, it's: how do you allow innovation to just happen while not stifling it and not causing harm? It's a careful balance, and we've touched on this probably every episode, this whole concept of governance and things like that. You have to be very careful to get the right balance: allowing innovation, allowing these sorts of things to happen, allowing people to experiment, while at the same time having some level of control. But I'm an advocate for not too much, just enough to make sure no one goes off the deep end.
Dr. Junaid:
You're absolutely right. I'm going to come back to governance and regulation. As a matter of fact, I talked to three experts in the last week or so about the legal perspective: how do you even define AI in healthcare from a legal standpoint? And interestingly, the answer was, we don't know how to define AI from a legal perspective. I mean, Junaid Kalia is a person, an entity, whatever. How do you actually define it? It is a big mess. And to your point, you are right on the spot as usual. We're going to talk about this, maybe not in this episode but the next one: corporate alignment, and then of course regulation. The EU AI Act is, in my opinion, one of the worst acts that can happen in terms of stifling innovation. But again, how do you control this fine line between regulation and innovation? So this is what we call the hype cycle: the technology trigger, the peak of inflated expectations, the trough of disillusionment, which was last year, then the slope of enlightenment, and finally, you know, the plateau, when things are available, cheaper, et cetera. Now, the question is, that's where I think we are. What do you guys think? Where are we? What's the current status?
Dr. Harvey:
I'm going to be a little controversial here and say it depends on what vertical you're in and where you particularly are in your lifespan. You may have just learned about AI and ChatGPT, and you're like, oh my God, this hung the moon. This is crazy for my particular vertical and how I see the world; we are here at the top. And in other areas, if you're in a different mind space and you've already seen it, you're like, oh yeah, I was there a couple of years ago; now I'm actually at the trough. So I personally think it depends on your personal journey.
Edward Marx:
Yeah, and I'm going to answer the question. This, Junaid, is part of the hard part, you know, having Harvey and me as co-hosts; we're not super compliant. Before I answer the question, I did want to say, and Harvey made the point earlier, but I think it's really important to understand: we're going to see the biggest jumps with AI outside of the West, and that actually plays into this question as well. We saw it with other technologies, so we can certainly learn some things from the past. In India, there aren't a lot of telephone lines. In the United States, you go to all these towns and cities and you see all these above-ground telephone lines that were so costly to put up, and a lot of them were fraught with danger just in the way they were engineered. But in India, they skipped that whole investment period and went straight to cell phones, right? Everyone has a mobile; I saw a stat one time that more people have access to mobile phones than to clean water. So there was this jump, and I think we're going to see the same in healthcare and AI. We're going to see this big jump in areas outside the West, and then we in the West are going to learn from them. So it's going to be really interesting to see what happens. So where are we in the hype cycle? It depends geographically, to Harvey's point, not just by industry vertical or where you are personally, but also where you are geographically. I think we're still in the early stages. It's been said already: we don't know what we don't know. So on that hype cycle, I think we're starting to come up, but just in the beginning part, just after the turn out of the trough of disillusionment. The only problem with the graph of the hype cycle is that that upswing is going to go beyond the axis, way, way past it.
Dr. Junaid:
Now, on AI adoption: both of you think we are at very early stages. I think we're a little further along. But now we're talking about the biggest barriers: governance, regulation, and then the last point, the medical-legal side. And we talked about how to improve adoption through the medical-legal angle more so than regulation, or both, I suppose. Then we come to the last question: who's going to pay for it? I'll give you three examples. In Saudi Arabia, roughly 70 percent of the market is national; Qatar is similar. In the EU, the whole thing is government, right? As a matter of fact, the one actually paying for healthcare is the government, except in niche markets like India and Pakistan, where there are essentially haves and have-nots. And then of course we come to the amazing, superb, beautiful United States of America, which has, I believe, one of the best balances between public good and commercial interests. So I want to ask both of you: how do you think the question of who pays for AI in healthcare is going to evolve? What is the current status, as far as anyone can say, and how is it going to evolve in the future?
Dr. Harvey:
I can foresee, and obviously we were talking about this, right, that each region of the country, each position, will be different. I foresee a tax coming, and I think not just in healthcare. I see that eventually there's going to be a tax on the companies that are using AI the most, and they'll have to pay a certain tax to help with some of the displacement that will occur. With that, now that you're mentioning healthcare specifically, I do see somewhat of a tax there too. In my talks in the United States I say, generically, that countries will have to pay for it. You know, Ed will tell us very meticulously that this stuff is expensive. The tokens will add up, and the electric bill will go up as well. So that's kind of how I see it happening. Ed, I'm curious what you think.
Edward Marx:
Wow, I'm not sure. I think it may continue just the way it's done today, though the way healthcare financing works today is very imperfect. I think it is going to drive more medical tourism. People aren't going to want to pay extra or a premium or anything like that. We're already seeing the success of medical tourism in different areas; a lot of it is plastic-surgery related, or not life-critical care, although there's some of that too, right? Because if I have a cancer that's only treated a certain way, well, that's where we're headed for sure. This already happens today, but I think we're going to see it multiplied. So, answering your question: people are going to pay out of pocket for things that are going to be helpful to them, and it's probably going to be outside of the West, so there'll be more medical tourism. If I know that I could go to Ecuador and they're leveraging AI, or, to give you a real example I've been toying with regarding my knee, to Buenos Aires in Argentina. There are advances all around the world, but certainly there, with certain treatments for my knee, they're leveraging AI and doing things. Some of it is non-AI, but these are things that aren't allowable in the United States by our insurance companies, and the studies are showing that it actually works and it's just better overall. So I think we're going to see more of that. As the United States struggles with all the things we were just talking about, the regulations, the forms, the protections, we're going to struggle with how to do it and how to finance it. And there are going to be other countries taking advantage of the situation and providing certain people an off-ramp to go take advantage of AI-driven care.
Dr. Junaid:
So, as founders, in your expert opinion, how do we navigate this maze until we find clarity?
Edward Marx:
So I think one thing you can do, and I'm on the board of some health systems and have been asked to speak to a couple of others, is what I've been suggesting: creating a sandbox, a playbox. We sort of did this before; again, we can take some things from the past and learn from them, like when we were rolling out EHRs. There was a big concern with EHRs, my gosh, we're going to kill people, so oftentimes we would create a sandbox, a kind of safe place for people to experiment and innovate before we release something. So instead of saying no, you can't do anything, and then people are stifled and they don't do anything and we don't take advantage of advanced care capabilities, you create a safe environment and let people play. And then as they gain confidence and can prove things out using scientific methodologies, you can release it. Don't stifle them, because you're going to stifle innovation and people are going to say, okay, well, I make a pretty decent living, I'm just going to keep doing things the way I do them. This happens all the time. So if you create an AI sandbox, for lack of a better term, and allow your clinical staff, or whomever, your administrative staff, to have a safe place where they can play, experiment, and innovate, then as we gain confidence, and again, I'm not suggesting that we allow people to experiment on live patients, but if we experiment in our sandboxes and create this environment, we will have a process, and then we can let them go and flourish.
Dr. Harvey:
With that, thank you so much for joining us. And I just have to make this comment: we've got two doctors, and in the middle we've got Ed, squishing the poor guy. So thanks for joining us. Thanks for putting up with us, Ed. It's been so much fun, man. Thank you.
Learn more about the work we do
Dr. Junaid Kalia, Neurocritical Care Specialist & Founder of Savelife.AI
🔗 Website
📹 YouTube
Dr. Harvey Castro, ER Physician, #DrGPT™
🔗 Website
Edward Marx, CEO, Advisor
🔗 Website
© 2025 Signal & Symptoms. All rights reserved.