AI vs. Your Brain: Preserving Clinical Expertise in the Age of AI
The conversation in healthcare is shifting. It's no longer just about adopting AI, but about the cognitive cost of over-relying on it.
In this episode, our expert panel—Dr. Junaid Kalia, Dr. Harvey Castro, and Dr. Sara Rana—examine the complex relationship between artificial intelligence and human cognition in clinical practice. Drawing on new research, the discussion highlights how dependence on AI can lead to "cognitive debt," potentially diminishing a physician's diagnostic skills, such as the ability to spot critical illnesses like cancer.
While AI is undoubtedly here to stay, this episode provides insights into how healthcare professionals can thoughtfully engage with these smart tools. It emphasizes using AI to streamline administrative tasks and improve workflow efficiency while preserving the essential clinical reasoning that can't be outsourced.
"I think it's a really interesting issue we have to deal with today, and try to figure out how we best use tools that can enhance our knowledge & understanding, and make our lives & processes easier."
- Dr. Sara Rana
What You'll Discover:
[00:44] Impact of Over-Reliance on AI: Cognitive Debt & Skill Atrophy
[07:20] Maintaining Clinical Gestalt & Thoughtful Engagement with AI
[11:26] Integrating AI into Clinical Workflow
[14:20] How to Protect Brain Cognition
The future of healthcare isn't about choosing between human and machine—it's a collaboration where technology enhances our capabilities while human judgment and compassion remain at the forefront of patient care.
Referenced in the show:
"Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task" (MIT Media Lab)
"Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study" (The Lancet Gastroenterology & Hepatology)
Transcript
Junaid:
Good morning, everyone, and welcome to another episode of Signal and Symptoms. I'm grateful to be joined today by our experts on the panel. As usual, our favorite Harvey Castro is here, and we are once again blessed with Dr. Sara Rana. Today's topic grew out of some of these policy meetings. It all started when I was invited to two grand rounds to speak about artificial intelligence in healthcare, and I realized: how much do I need to tell them? And what is the impact of AI going to be on the future generation of medicine?
Junaid:
So first and foremost, we're going to talk about your brain on ChatGPT: the accumulation of cognitive debt when using an AI assistant for an essay writing task. Note that the phrase is essay writing task here, but the whole point is really: how does constant use of AI impact our cognition? I'll start with a very crude description of what they did. They actually put EEGs on the participants, and these are micro-cap EEGs.
For clinical purposes you can have up to 21 electrodes; for this kind of research they actually use more than 21. It's a micro-cap EEG: you put the cap on, sometimes with glue, sometimes without, you hook it up, and the idea is that it measures brain activity. Brain activity comes in bands: alpha, theta, delta, and beta. These are details, but at the end of the day alpha roughly means fully awake and concentrating, theta shows up with deeper concentration, and so on. The key point is that they compared three groups: one writing with no tools at all, a second using Google Search, and a third using ChatGPT. They found that the people using ChatGPT had the lowest brain activity and the lowest sense of ownership of the task; Google Search was in the middle; and the people writing entirely on their own had the highest brain activity. So I'm going to let Harvey open up with his initial thoughts on this, then Dr. Rana, and then we'll continue the conversation.
Harvey:
The way I see it is as follows, and I'll bring it down to a simple example. When I was going through my training, I was biased in that I loved meeting foreign doctors, because my foreign doctors were actually teaching me. For example, I'd go do a physical exam, and they would do all these amazing, very comprehensive physical exams. And no, I'm not knocking my colleagues; I know there are colleagues here who are amazing. But overall I noticed they had amazing skills, and I would ask them: how the heck are you so freaking good at this? And they're like, Harvey, in my country we don't get a CAT scan. We don't get these luxuries. So I have to do my best with my hands and my tools. And that really blew me away. I want to share that analogy because I see the same parallel here.
If I use AI and it becomes my crutch and it becomes everything, well, guess what? When the electricity is out, or when I joke with kids and say we're turning off the wifi, it's like, holy crap. But if I'm using my hands, my muscles, my skills, then guess what? This muscle up here is not going to atrophy. But if I depend on it, it will, and that's the crux of this article. That's why I wanted to start with that story.
Junaid:
A beautifully done analogy. Dependency on technology is not just about AI; it has been going on from MRI machines to electronic health records. And I agree with you. Dr. Rana, your thoughts?
Sara:
Yeah, you know, I think it's really interesting, because when we write policies or think about frameworks for AI, we really define the categories as AI-assisted or AI-generated. So has the AI tool actually made the whole document or the whole presentation for you, or have you just used it as a helping tool to get where you are? I think a lot of the time, people will just rely on AI to do all of their work for them and not even use their brain and muscle to the point where they could get there on their own. It's gotten to the point where people are so completely reliant that when ChatGPT shuts down for a little bit, or you can't access it, the whole world is kind of over. Especially for people in younger generations, they can't live without it. I don't know if you read the article about ChatGPT being a person's boyfriend or girlfriend. It's gotten to that point now where, instead of going on dates and meeting people and getting into relationships, they'd rather just rely on ChatGPT as their best friend or companion. So I think it's a really interesting issue we have to deal with today: trying to figure out how we best use tools that can really enhance our knowledge and understanding and make our lives and processes easier, while also being mindful of our brain and what we're capable of.
Junaid:
Now, let me put a little more spin on the overall discussion here. This is another study; I don't know if it literally came out this week, but the headline is, of course, that new research suggests using AI makes doctors less skilled at spotting cancer. The study, in The Lancet Gastroenterology & Hepatology, looked at endoscopists after exposure to artificial intelligence in a multicentre observational study. They found that continuous exposure to AI might reduce the adenoma detection rate (ADR) of standard, non-AI-assisted colonoscopies, suggesting a negative effect on endoscopist behavior. And I'll be honest about the preparation of this particular talk. You were right: I actually ended up asking my different favorite AIs to help me understand how the study was done, the methodology, the band-by-band analysis, all of that. I put a lot more information into different AIs, and you're absolutely right, Dr. Rana, I actually did make a presentation with it. By the way, that was Z.ai; we are talking to that company as well. So now, two things. Number one, both of you, tell me: what does the future look like? When I go talk to a bunch of medical students about AI, what do we think the future will look like, when this is just the beginning? I mean, the FDA has now approved somewhere between 900 and 1,000 AI-enabled devices, one of which is ours, and we're going to have six of them. And again, I understand the other side too; there's a spectrum here, as Harvey suggested. We need these tools for saving lives in lower-income countries as well. So how do we square the circle? Because it just seems a little scary to me.
Harvey:
I'll jump in and say: AI can suggest, but doctors and healthcare professionals save lives. I truly think it's the human plus AI working together that does that extra bit. For example, look at us right here: we have years of clinical experience. Nothing against the medical students or the residents or the new grads, but they don't have those years of experience. So when AI lies to them or is turned off, this thing up here, we have developed it; we have that clinical gestalt, we have that feeling. And I'll also throw it out there: that connection, that humanity, where I'm creeping into the camera and there's that intimacy. I just don't see a robot being able to capture that moment.
Sara:
I agree with that. The overall medical training system, plus all the years of experience you get with it, I don't think technology can replace that. So you really have to see AI as a tool, a means to get to your end, but not a fully competent technology that you can't live without. And I think it's really important to make sure you give that final check, that you give accurate results and answers, and that you compare against other sources to really see: is this accurate or not? There are so many times I've used AI tools in medicine and they've been wrong, or I've had to check through the output or change it. Another thing you touched on when you showed the presentation tool is that you fed it really good prompts to get to your answer. So it's also important to educate ourselves on techniques such as prompt engineering, and other ways to make sure we're using the tool in an efficient manner to get the results we want.
Harvey:
Yeah, and I want to add to her point. Again, it's that human, right? It's you, how you found the data, your clinical gestalt. I always say doctors are better at, quote unquote, prompting, because we speak the language of doctors, and that's why we're better. And then the output, we can read it, dissect it, fine-tune it, and say, no, this is better. And to your point, the research you did and the way you created the prompting, it's not by accident. You fed it good data: good data in, good data out. But then, again, it's your personality that's in that presentation.
Junaid:
There are three things I'd sort of summarize in my head. Number one, we need to have thoughtful engagement with AI. The person who has thoughtful engagement will always win with AI and continue to preserve their cognition. That's number one, Harvey's point. Number two, as Dr. Sara suggested, those who go the other way and constantly just depend on AI will fall into traps of, I don't know, being lonely with AI and having AI girlfriends. That's the other extreme. And then my problem is also that I feel, and this is just a feeling and I'm talking out loud, because that's what this podcast is about, being real people talking about real s***: even while I'm pushing my own cognition more and more, I'm actually getting into developing some of my own agents just to help me be lazier. And what does the future look like for me? A lot more data is fed in. Like, as you said, Harvey, you already have a digital twin; you're just constantly feeding it. And at some point in time you're going to say: should I ask the real Harvey or the digital Harvey? That's where my problem comes in.
Harvey:
Okay, we'll do that too. The other thing I want to say here is that we're getting smarter tools, but we also need smarter doctors and end users, because my favorite phrase, and I'll say it all year, is: we don't know what we don't know. With these tools, if we rely on them too much, they may start hallucinating, quote unquote lying, something along those lines. And then guess what? We're the ones signing on the dotted line or saying the things that AI said, and then we look bad.
Junaid:
Last two questions for each of you, and then we'll close. Dr. Rana, how are you using AI to decrease your cognitive load for the stupid work while increasing it where it matters, and how are you neuro-protecting yourself? The same questions go to Harvey. Dr. Rana goes first, and Dr. Castro will answer and end the podcast.
Sara:
Yeah, so I see AI as streamlining my workflow. Accessing an email I need to refer to for a new presentation: immediately done, in like two minutes. I need an AI agent to send my weekly report to my manager: done. I need AI to help: I have my framework. How I use my brain is to really think about what I need AI to do for me, and I create that framework with pen and paper, because I find something valuable in that. Then I feed it in, using prompt engineering and the techniques that are in my brain, to really tailor the AI to what I need. But you still have to do the verification check at the end. So great, it's doing all this work for you, it gives you something you need, but then you have to do that verification check. I think of it like a human-in-the-loop: you start with what you need, you use the technology to get what you want, and then you check the output. So in that process you're still using your brain, but you have to be wary of not fully relying on it. And I always make sure to remind myself of that by creating most of the information and frameworks myself before I use the technology.
Harvey:
I need my AI to repeat the question. I'm kidding. How am I using AI personally? You'll like this approach. There's an app, I think it's called Huxe. Every morning it's paired to my Gmail, it's paired to my calendar, and it's also paired to the news that I like. I wake up, I go on walks, and I say, hey, what's up? And it talks to me: Harvey, overnight your inbox had the following emails; this one's important, please make sure you address it. And today you double-booked yourself at nine o'clock; make sure to see which one you need to go to. Then it goes through the news and says, today AI created blah, blah, blah. By the end of it, I'm like, sweet: I got my walk in, I got that task going, I know which ones to attack. And then on the geeky side, how am I using it to really improve? I'm working for Phantom Space, I'm the chief AI officer, and I run daily reports that tell me what's out there, the competitor analysis, the whole nine yards. It gives me that in my inbox, and I have different tools doing it: I have Grok doing it, I have Manus doing it. So then I have this nice report that I look at and I'm like, okay, this thing is important; I know this new law just passed, or this one is passing in another part of the world. So that helps. And how am I using it so that I don't mess this brain up? I'm literally shifting some things in my brain. If I can't remember numbers, that's okay. But if I'm able to develop a different part of my brain, using neuroplasticity to augment myself with AI, I'm good.
Harvey:
And then to answer: okay, really, Harvey, how are you protecting yourself? What I do is make sure that when I come up with a solution to a problem, it's me, Harvey, my clinical gestalt, my 40 companies that I've started, all of that put together. Then I put it into AI, so that I come up with an idea and I'm using human plus AI. And then, I know you said you wanted me to end it.
Harvey:
I really want to say AI is here to stay. But the real paradox is that these tools are getting smarter, and as they get smarter, we need to become smarter with them, because we don't want to be subservient to these tools. We don't want to make them our crutch. We want it to be the opposite: these are our tools, and I can reach out to them; it's not the other way around, with the tools reaching out to me.
The second thing is that in healthcare, we cannot let AI steal the very thing patients trust the most: judgment and compassion, combined. So I really want to emphasize using both, but knowing how to use it.
Learn more about the work we do
Dr. Junaid Kalia, Neurocritical Care Specialist & Founder of Savelife.AI
🔗 Website
📹 YouTube
Dr. Harvey Castro, ER Physician, #DrGPT™
🔗 Website
Edward Marx, CEO, Advisor
🔗 Website
© 2025 Signal & Symptoms. All rights reserved.