The Art of Medicine with Dr. Andrew Wilner

AI in the Doctor's Office with Marvix.AI Co-Founder Rashie Jain

Andrew Wilner, MD Season 1 Episode 144




Many thanks to Rashie Jain for joining me on this episode of The Art of Medicine with Dr. Andrew Wilner! 

 

Rashie is an engineer and Co-Founder of Marvix.AI, her second start-up. Rashie observed that many physicians struggle with high administrative burdens, especially medical specialists who spend more time with patients and deal with complex cases. With the advent of large language models, she created an "ambient scribe" that takes notes during a patient encounter, organizes them, and presents them for review as a finished product. With just a little tweaking, doctors can embed these notes into the electronic medical record (EMR).

 

I tried out Rashie's software at the recent American Academy of Neurology meeting in San Diego, CA. Her Co-Founder played the role of a migraine patient, and we chatted for about 10 minutes. Truth be told, the ambient scribe did a great job capturing the essential details. I could have edited it in just a couple of minutes, which would save time compared to typing it into the EMR myself!

 

To learn more about Marvix.AI, or to try it in your own office, please contact Rashie Jain at https://www.marvixapp.ai

#AI #ambientscribe #largelanguagemodel #womenentrepreneur

Please click "Fanmail" and share your feedback!

If you enjoy an episode, please share with friends and colleagues. "The Art of Medicine with Dr. Andrew Wilner" is now available on Alexa! Just say, "Play podcast The Art of Medicine with Dr. Andrew Wilner!" 

To never miss a program, subscribe at www.andrewwilner.com

Follow me on Instagram: @andrewwilnermd

X: @drwilner

linkedin.com/in/drwilner 


Please rate and review each episode. 

To contact Dr. Wilner or to join the mailing list: www.andrewwilner.com

This production has been made possible in part by support from “The Art of Medicine's” wonderful sponsor, Locumstory.com, a resource where providers can get real, unbiased answers about locum tenens. If you are interested in locum tenens, or considering a new full-time position, please go to Locumstory.com.


Or paste this link into your browser:

https://locumstory.com/?source=DSP_directbuy_drwilnerpodcast...

[Andrew Wilner, MD] (0:08 - 1:18)

Welcome to the Art of Medicine, the program that explores the arts, business, and clinical aspects of the practice of medicine. I'm your host, Dr. Andrew Wilner. Today I'm pleased to welcome Rashie Jain.

 

Rashie is an engineer and the founder of Marvix AI, an AI-powered scribe designed for the complex discussions between patients and their neurologist. In a minute, we're going to discuss the pros and cons of having an artificial intelligence program take notes during a patient encounter versus a human scribe versus the doctor doing it all him or herself. But first, a word from our sponsor, locumstory.com.

 

Locumstory.com is a free, unbiased educational resource about locum tenens. It's not an agency. LocumStory answers your questions on their website, podcasts, webinars, videos, and they even have a locums 101 crash course.

 

Learn about locums and get insights from real-life physicians, PAs, and NPs at locumstory.com. And now to my guest, welcome Rashie Jain.

 

[Rashie Jain] (1:18 - 1:20)

Thanks. Thanks, Dr. Wilner for having me here.

 

[Andrew Wilner, MD] (1:21 - 3:07)

Yeah, thank you. You know, I was recently at the American Academy of Neurology meeting in San Diego in April and your company, Marvix AI, had a booth and I talked to a really personable young man who insisted I do a demo and see. And so we made believe that he was a patient with a migraine and he must have done this before because he knew all the symptoms.

 

And so I treated him like he was a real patient. And in the background, ambient, right? In the background, he had the AI recorder going.

 

We must have gone on for probably about 10 minutes to kind of get migraine, the history, very, very important in migraine. Not much to find on physical exam. It's all about the story.

 

You know, nausea, vomiting, photophobia, phonophobia, do your parents have it? So we went through the whole thing and we say, OK, we're done. Physical exam's normal.

 

Let's just skip that. And then press the button and the thing whirred around for a few seconds and then boom, there was this giant report. So I looked it over and frankly, it was pretty good.

 

But I did have some misgivings about it and I'll share those. But first, let's talk about you and how you decided to follow this path to make something that hopefully will assist physicians in the constant battle of trying to be efficient in a challenging workplace. OK, so what's your background?

 

[Rashie Jain] (3:08 - 4:40)

Yeah. So Dr. Wilner, I'm an engineer by training and now I've spent over a decade in health care. So this is my second startup.

 

Before founding Marvix, I founded a care management company for cancer patients called Onco.com. We ran that successfully for eight years, got acquired by a hospital. And then having been in health care for so long, having been a founder for so long, I decided to take the plunge again and this time wanted to build Marvix because both me and my co-founder, we felt that one of the most important problems in health care today is just how do we reduce administrative burden for physicians and clinicians?

 

We're seeing a lot of time being spent in documentation, in coding, and a lot of these administrative tasks, honestly, that have no place in patient care. And the burnout is real. It's even more pronounced in specialties where the workflows are more complex, consults are more, consults are longer.

 

So we are very excited about the advent of generative AI. I think we're looking at a really interesting time in health care software. And we firmly believe that we're going to see a lot of these AI native applications become mainstream in health care software and hopefully remove the admin out of these health care workflows.

 

And so, yeah, so that was our vision. That's why our journey started and here we are. And that young gentleman you're talking about is actually my co-founder.

 

So yeah, so that's a little bit about us.

 

[Andrew Wilner, MD] (4:40 - 5:16)

Yeah, he was very, very helpful and knew the product inside and out and kind of anticipated my questions. So I felt it was a very worthwhile session. I'm glad I stopped at the booth.

 

Okay. Now, I'm a neurologist, of course, and that's why I was at the American Academy of Neurology meeting for maybe the 25th time, I think, over the years. So why do, why make a, I mean, neurologists are, you know, there's only about 15,000 neurologists in the country, so it's not a big market.

 

Why choose neurology to make this AI program?

 

[Rashie Jain] (5:17 - 7:19)

Yeah, so when we founded Marvix, we were clear that we wanted to go after specialists because the workflows are more complex. The need is real and acute and neurology seemed like a good vertical to get into because we noticed a couple of things which were unique to neurology workflows. The first was that no neurologist has a standard note requirement.

 

It can vary depending on their super specialization. It can vary depending on the disease scenarios that they're encountering. For example, they may have a very different requirement for a dementia patient versus an epilepsy patient and so on and so forth.

 

The second thing that we noticed was that usually they have multi-user workflows. So they would essentially be working with a team of medical assistants, nurse practitioners, APPs, and really collaborating on a clinical note, which we felt was an unsolved problem. You had a few tools that a physician could use, but essentially they would just allow some kind of recording one-on-one, which really didn't adapt to their workflows.

 

And the third thing that we noticed was that the complexity of the consults was real. There was a requirement to capture medical terminology with high precision. There was some contextual understanding that the AI needed to have.

 

For example, if the physician says the patient presents with weakness, weakness could mean many things in the context of neurology or a subspecialty within neurology. So long story short, we felt that this is a really complex problem to solve, not something that the standard AI assistants that exist in the market have solved for. And we felt that this is a really good vertical to get into.

 

If we can create a playbook here, if we can get some early success here, there would be an opportunity for us then to take this across other specialties, which may be less complex than neurology. So that's how the whole journey started.

 

[Andrew Wilner, MD] (7:20 - 9:36)

Wow. So I was getting kind of mentally fatigued just listening to all of the challenges of trying to put this thing together, even as a neurologist, everything is very complicated. So it's interesting you chose sort of, you know, that's like climbing Mount Everest before you're going to do the little mountains.

 

It's like, well, let's just pick the hardest one and we'll do that one. So good for you. You know, way back when, when all of a sudden there was sort of word processing and templates came out, I was very excited about using these.

 

But as you point out, you know, with a neurology patient, it's very hard to anticipate which way this is going to go. And your line of questioning may start out with, oh yeah, I understand your arm is weak. And then the person says, oh yeah, but then I had this terrible headache and then I couldn't walk.

 

So you're not just talking about the arm anymore. You're, and you may be thinking of a totally different, first you thought they had a stroke because their arm is weak, but now you're thinking that hemiplegic migraine because their arm is weak because it had happened 10 times before. So very hard to fit all that into a template.

 

I actually did create a, using when office came out, a database because I'm an epileptologist and I had hundreds, if not thousands of patients that I put on the template. You know, how old were you when it started? Anyone in the family history?

 

A lot of questions I could do. Yes, no, yes, no, yes, no. When was your last MRI?

 

Okay. You know, 1995, what did it show? You know, with sort of bullet points, because in those days we had paper charts.

 

So just going through the chart to find the basic database was very, very hard. So that did turn out to be a time saver. More recently, I've been kind of discouraged with AI.

 

You know, I mean, my biggest AI exposures was Siri, right? And you ask Siri a question, you know, is it going to rain tomorrow? And they'll say, you know, tomorrow is the 10th of May.

 

And it's like, well, what I wanted to know is, you know, is it raining or not? You know, just all kinds of nonsense to me. And then, you know, it misspells words.

 

And I will deliberately spell a fancy word one way and it unspells it and does it the other way. And it's like, gee, this is a total waste of time. So why is your product not a total waste of time?

 

[Rashie Jain] (9:37 - 11:30)

Oh, that's a great question. Quite a loaded one, actually. So the way I think about it, Dr. Wilner, is that we're living in a really interesting time where over the last couple of years, a lot of advancement has happened in the underlying technology, by which I mean large language models are really superior today. And they can perform a lot of tasks that one couldn't even imagine as recently as three years ago. But I think a product that caters to the complexities of healthcare, within that specialty care, neurology care that we are solving for, requires layers of nuance built on top of this underlying technology of large language models. The software needs to adapt to these complex workflows, has to give you the ability to do multiple LLM calls, and has to have a combination of different feature sets that all come together to form that rich context in which you want to generate the output for the end user.

 

So that's, I think, a really hard engineering problem. It is not a one-size-fits-all approach. So the product that you're creating is not some generic, let's say, meeting summarizer, standard AI copilot that you could potentially use in maybe a standard meeting setting, but would fail miserably in a healthcare setting.

 

So we were very clear that this is going to be a vertical stack solution that is going to deeply integrate with workflows of the providers we're going to serve. And of course, the underlying technologies, the LLM model technology, today I feel as a commodity, it'll only get better. And that's not the game we want to play.

 

We want to talk about the end user experience, which is also a very, very complex problem to solve in the healthcare setting. That makes sense.

 

[Andrew Wilner, MD] (11:30 - 12:51)

Yeah, many years ago when Dragon first came out, the early dictation software, I had broken my hand and I liked to type and I couldn't type. So I got Dragon. And it took me like three hours to dictate one page because, you know, correct this, go back, correct this.

 

And it would do all kinds of crazy stuff. And over time, Dragon has improved to the point where, but for medical use, which is what I was using, you needed a special option, right? There was the medical version, because that's got all the fancy medical words.

 

And of course, radiologists use their radiology version now, I think with a pretty high degree of accuracy, you know, in part because usually what they say is more or less the same, you know, choice of vocabulary, like clinical correlation required, that's going to be there. You know, it's probably a macro, right? And, but they tend to use the same words.

 

So I know you've really, you've gone past the development stage. Your product is actually in use, right? In a number of physician neurologist offices.

 

So we call this an ambient scribe, right? It's just, you just turn on, I guess, your phone. Is it an app in your phone?

 

Is that how it works?

 

[Rashie Jain] (12:52 - 13:12)

We're device agnostic. So we provide apps on phone, iPad. You can even use it on your laptop because we have a web app.

 

And all across all devices, the app would sync instantaneously. So you would actually start your recording on your phone and then process the note on the web. It seamlessly syncs across devices.

 

[Andrew Wilner, MD] (13:13 - 13:19)

So you just turn it on and then you have your patient encounter normally. Is that right?

 

[Rashie Jain] (13:20 - 14:20)

Yes. So you turn it on. The app will ambiently record your conversation with the patient.

 

You can forget about the device. You can forget about the app. You can have a normal conversation.

 

You can talk about, for example, I don't know, a football match that you saw last night. It doesn't matter. The AI is smart enough to pick up the clinically relevant information from that really unstructured conversation that you have with your patient and then generate a finished clinical note in a template format of your preference.

 

And it's designed to pick up medically relevant facts from across the transcript. So you could be talking about, let's say, a physical exam at the very beginning of your consult. And maybe towards the end of your 90-minute long consult, you may mention or note some other aspect which should tie into the physical exam.

 

So the AI is smart enough to pick up, literally, call those important clinically relevant facts from different parts of the transcript and weave it into a structured note.

 

[Andrew Wilner, MD] (14:21 - 15:41)

Well, that's pretty impressive. You know, I was initially thinking that I didn't like it because I like to take notes, my own notes. But this device does not say, I can't do that.

 

I can still take my own notes. But it's going to save me the trouble of putting them all together at the end. And of course, I'm sure it's editable if something the AI says isn't quite exactly what I wanted it to say.

 

You know, I had this discussion with somebody at the meeting, though, is sometimes I don't really know what the bottom line is until I process it myself. In other words, taking the notes and saying, well, this is a 29-year-old woman with headaches. And sometimes her arm gets numb and weak and family history.

 

And it's like, oh, she has migraine. In other words, I don't figure it out. So by sort of having this device figure it out for me or do all the processing, I don't want to miss that step.

 

So I'm just wondering, you know, I wouldn't want to sort of steal my thought process, particularly for people in training. You know, but it's a great, seems like a great backup for people in training, but probably not the way to start out. Would you agree with that?

 

[Rashie Jain] (15:41 - 16:22)

I would. So we work with a few groups, a few academic hospitals where residents use our product. And it's really interesting how they use it.

 

So they usually work in tandem with the attendings. And for a cohort of cases, they would use the AI. They would use Marvix.

 

And then they would also do notes without the AI. And then they would go back and learn from the notes the AI created to actually train themselves in comprehensive note creation. Because I guess that's a very important part of the full medical training.

 

So yeah, so it can actually also be used as a training tool for residents to make sure you create comprehensive notes and you learn that skill early on.

 

[Andrew Wilner, MD] (16:23 - 16:43)

Oh, that's very interesting. You know, sometimes I have a student who writes a very good note and I'll tell the other students, hey, you know, look at George's note. You know, that was a good job.

 

But you know, it's always sort of fraught with like, I don't want them to think I'm, you know, favoring George, but you know, it just so happens he did a good job. But I could say, look at the AI note.

 

[Rashie Jain] (16:44 - 16:47)

Yeah, it's your most diligent student.

 

[Andrew Wilner, MD] (16:47 - 16:49)

And it doesn't get tired.

 

[Rashie Jain] (16:50 - 17:18)

It doesn't get tired. I oftentimes tell our providers that think of it as the persona of this AI is that of your most diligent assistant who will perform with the same accuracy no matter what time of the day, even at 3 a.m. in the night. They'll give the same output as they would at 10 a.m. in the morning and they will never fail you. But yeah, they are your assistant. They're not here to replace you. They're here to like give you some free time, I guess.

 

[Andrew Wilner, MD] (17:19 - 17:34)

What about, what has been the response? You know, there's this HIPAA thing and patient privacy. How do patients accept having a computer listening in on their intimate personal details?

 

[Rashie Jain] (17:35 - 18:25)

Right, so our software is obviously completely HIPAA compliant, stored on HIPAA compliant infrastructure. Data is end-to-end encrypted and we've taken into consideration, put in place the strictest of security standards. So all that's obviously in place.

 

Now, with regards to the patient response, surprisingly, this has been positive. So we always encourage our providers to get the consent from their patients, do full disclosures that they'll be using an AI tool to help them write their notes. But patients tend to like it broadly and the reason is that there's no laptop in the room.

 

Providers are actually able to focus more on patient care and so it overall improves the patient experience. So we've not really seen, honestly, any resistance to this from the patient community so far.

 

[Andrew Wilner, MD] (18:26 - 20:47)

Well, I'll just interject an anecdote. Way back when I used to see patients in private practice in my office, the epilepsy patients, I would sit there and would tell the story and I'd review their chart and take their notes and then I'd let them go and then I would dictate a summary. And that could take five or 10 minutes to think about it and sort of put it into words.

 

And I remember getting dragged into my manager's office that patients were complaining that I didn't spend enough time with them. I was too quick because I had the information that I needed. I had reviewed the chart ahead of time, which was time they didn't see.

 

And then I was dictating after the visit, which was also time they didn't see. So I had a brilliant idea. I said, I know what I'm going to do.

 

I'm going to dictate while they're still there. Oh, really? And to me, that sort of felt a little bit rude in that I was taking up their time to do sort of administrative work.

 

So I just started doing that. I had my little cassette dictator thing, a dictaphone. I don't know if dictaphone has survived, but dictaphone and I would dictate.

 

And I could usually dictate pretty quickly. I mean, I can do a whole long AI kind of report in a few minutes. And every now and then it was like, I would stop and ask the patient to clarify something.

 

So it was that two weeks ago or two months ago, I don't remember. And they would tell me and I could put it in. And universally, the patients were impressed that I had captured all of their details.

 

And I had really listened and put everything into a note. And that's not something I expected. As I say, I kind of resisted doing that.

 

But because I was getting criticized for not putting patient time in, when I really was putting the time into the patient, it just was time they didn't see. I decided to do that. And then I never stopped doing that.

 

I always did it. And I found that dictating in front of the patient actually showed them that you had listened and that you had a plan. And it was a very effective tool that I never would have guessed.

 

So I think patients knowing that somebody or something in this case is actually taking notes, maybe that's a positive thing.

 

[Rashie Jain] (20:48 - 20:52)

Yeah, I agree. And that's exactly been our experience as well.

 

[Andrew Wilner, MD] (20:53 - 21:25)

Now, I have a question. So it's like, well, I'd love to use this thing. But we use Cerner, which requires you to very laboriously type everything into certain fields, or it won't recognize what you said.

 

In the clinic, there's a billing program. And unless you put review of systems where review of systems supposed to go, it thinks you didn't do it. It is not very AI sophisticated.

 

So how can I use this device when everything has to get typed into Cerner?

 

[Rashie Jain] (21:26 - 23:25)

Right. So we offer integration with most EHRs. We are actually in the process of initiating integration with Cerner as well.

 

Cerner is actually one of the EHRs we're still in the process of integrating with, but we're already integrated with Epic, Athena, Veradigm, the usual suspects. And you point to a really important, I guess, piece of the puzzle that needs to be solved for these technologies to be adopted in any meaningful way, which is that they need to seamlessly integrate with the EHRs. And EHR integration essentially is three things.

 

The first is that the AI software has to have the ability to pull appointments for the day. So the doctor can see all their encounters against which they want to do the recording. The second is the ability to push the appointments, sorry, push the notes back into the EHR.

 

And the point that you alluded to, which is that the review systems need to go in review systems, HPI needs to go in HPI is critical here. So you need ability to do section specific integration. And then the third feature that all of these AI software should have is the ability to pull historical data from the EHR and create patient summaries, which then get plugged into the current encounter.

 

And this is critical for returning patients because as you know, for specialties like neurology, a large part of any team's time goes into reviewing these historical notes and creating these summaries of returning patients. A lot of times it's nurse practitioners or medical assistants who do that. It's very time consuming and AI can really augment their efforts by essentially digitizing the whole thing, pulling all the data automatically.

 

So these are like our three things. This is a three-pronged approach when we think about integration with EHRs. And I think if you can achieve that, then you've truly created an experience that saved significant amount of time without really creating any inertia from the provider base in just adopting that application.

 

[Andrew Wilner, MD] (23:26 - 23:40)

Right, so those challenges are why I picked something easy like neurology instead of something difficult like engineering. Those challenges, those three levels of integration, are they actually something you can accomplish?

 

[Rashie Jain] (23:41 - 24:38)

Yeah, these are, I wouldn't say the most difficult engineering problems, but these are definitely the most tedious. And the reason is that when you think about integration with any EHR system, you're essentially talking about integration with a new software every time. Every EHR would have their own proprietary APIs.

 

Some of them would have HL7, FHIR. So these are like all different types of integration options. And it's not like you've created one playbook integrating with one EHR then you can replicate across.

 

You have to build the whole integration set up from scratch. So it's really laborious work, but I wouldn't categorize it as the most difficult engineering problem. I think creating custom clinical notes for an epileptologist and meeting their expectation is a much harder engineering problem, honestly, to solve.

 

And this is, I guess, something that you just have to do for your software to get adopted.

 

[Andrew Wilner, MD] (24:39 - 26:18)

You know, in the old days when a patient was hospitalized for a week or 10 days or three months, it was kind of a point of honor for the physician who was in charge of that hospitalization to write a detailed discharge summary of what had happened so that the doctor picking up outside in the clinic when the patient came would know, oh, you had this test and that test and they were thinking this, but they decided to do that. So now let's pick up the ball.

 

And somehow that doesn't happen anymore. Patients are just discharged with some discharge diagnoses that may or may not be correct with a list of tests, but without the results. And somehow that satisfies whatever the requirement is for discharge summary.

 

The doctors are very busy. They don't get paid any extra to do the discharge summary, apparently, which is probably the most important thing of the whole hospitalization. So it doesn't get done.

 

And I was shocked when I saw this transition because I used to spend a lot of time doing that. In fact, sometimes you would just come on service. In other words, you would take over responsibility of a patient who was going home that day.

 

And then it was your job to do the discharge summary. And you had to review like the whole three months of the patient's hospitalization and put it all together. Now, what I'm getting at is it would be a great AI function if AI could go through the hospital chart, right?

 

Everything's digital these days and summarize all of the tests and all the results and make some sense out of it and create a discharge summary. Have you thought about that?

 

[Rashie Jain] (26:19 - 27:21)

Absolutely. And that is something we do. So we are creating summaries from historical data the patient has that's already present in the EHR.

 

Now, of course, the context in which it's getting used currently is that these summaries are then getting embedded in the note that the doctor is creating. But that note can very much be a discharge summary. And I think you rightly pointed out that it's just, I mean, the expectation from the physician to doing it is, I mean, the incentives aren't really aligned because why would the physician spend so much time doing it?

 

They're already super busy. This is something that AI can do and AI will probably really excel at doing it because they'll be able to. It doesn't matter if you have 500 pages of report or 1,000 pages of report.

 

They can, with the same accuracy, with the same diligence, create summaries for you. So this is a task slated for AI for sure. And we definitely think that this has a lot more relevance in specialty care.

 

Where there could be many, many different reports and data points that need to be compiled together in the discharge summary.

 

[Andrew Wilner, MD] (27:22 - 27:35)

Well, let's just talk for a minute about the practicalities. Let's suppose I say, OK, all right, Rashi, I want to use this thing. I've got patients scheduled tomorrow.

 

What do I have to do?

 

[Rashie Jain] (27:37 - 28:51)

Nothing. You just need to download the app. You log in and then you just start your recording.

 

It's as simple as that. You don't need to do anything. If you're integrated with your EHR, we'll already pull in your appointments for the day.

 

So when you log in into your application, you will see that patient card. You just click on it. You just start your recording.

 

And then once you're done, you click on process, and then in 30 seconds to two minutes, Marvix will generate your finished note. We already create custom templates for you.

 

So let's say you have different templates for epilepsy patients, different for dementia patients. You can just select the relevant template from the app. Then the note will get created in that format.

 

Then one quick thing I'll add. Marvix doesn't just summarize the conversation that you're having with your patient. It also plugs in data that was never verbalized, but that needs to be there in the note.

 

For example, your macros that you would, in a normal situation, put in yourself, or somebody in your team would do that for you to complete the note comprehensively. Marvix does that automatically by inferring the context from the call and plugs in the relevant macros for you. And so it creates a finished note, a completely finished note that doesn't need any post-processing.

 

[Andrew Wilner, MD] (28:53 - 29:10)

Now, suppose I forget to ask some very important questions. For example, migraine. It's important to know is there photophobia and phonophobia, but I don't know.

 

I get distracted and I don't ask. Will Marvix prompt me to ask things on the template?

 

[Rashie Jain] (29:10 - 30:36)

We don't do that yet. And that's because a lot of times, so this is, I would say this is a, this is a point of contention. Honestly, you have two camps really in the provider community.

 

There's one camp that thinks it's a really great idea for AI to interject and prompt the physicians to sort of help them improve, I guess, the patient care experience that they're providing. And then the second camp believes very strongly that AI is here to just make my life easy and remove all the administrative burden for me. But it's crossing the line if they get into the clinical recommendation stuff.

 

So we're treading safely here. Having said that, we do have a differential diagnosis feature in beta right now that interested users can try where it would identify based on the data that was presented in the transcript, potential diagnosis. And if something was missed, it would identify symptoms that were missed.

 

And from that, so you could see that document and make up your mind if this recommendation is appropriate or if there are more discussions on certain symptoms that need to be had with the patient to conclusively identify the diagnosis. So we assist that through this DDX feature, but this is in beta and this is not something that we intend to make mainstream anytime soon.

 

[Andrew Wilner, MD] (30:38 - 32:40)

Well, I'd love to participate. I've got this great idea. Here you are, you do the interview while I'm wearing a little earpiece, and I press a button to process.

 

Then there's this little AI voice that says, "Hey, Dr. Wilner, you forgot to ask about photophobia and phonophobia." So I could just say to the patient, "Okay, just give me a moment. Oh yeah, and tell me a little bit more."

 

I mean, it could be annoying, but a little helper voice like that would get me from doing a 95% job to a 99% job, so I don't have to call the patient back or schedule another visit. Otherwise it's, "Oh gee, I should have asked them that, but they're gone now. I'm not going to call just to ask that simple question," so I blow it off and wait until next time.

 

It would be nice to have a little helper. Okay, well, this is actually more exciting than I anticipated.

 

I did use AI yesterday. I had a meeting by Zoom just like this and some AI assistant popped up. So I figured, well, I'll let it do whatever it does.

 

And it generated a summary of our meeting. I always take some notes myself, and the AI's notes were pretty good.

 

And I was like, you know, it's definitely better than nothing. The tone of the notes wasn't the way I would have put it, but maybe it could learn to take notes my way. But I was impressed.

 

It was a whole lot better than my experience with Siri and that kind of thing, you know, the phone systems that ask what you want to talk about. I'll say "retirement," and they'll say, "Oh, okay, you want to talk about your bank account."

 

It's like, no. It's just a total waste of time. That's annoying.

 

All right, so let's see. Do you pay per patient or per month or per year or per license or how does all that work?

 

[Rashie Jain] (32:41 - 33:14)

Yeah, it's per provider per month. We issue a license for every provider, and a provider for us could be an MD, an APP, or a medical assistant. We have a bunch of different plans to choose from, but essentially you could go for full integration, where we do an EHR integration for you for free and you just pay a small premium for that service.

 

But yeah, it's pretty flexible. You can choose how many providers you want to enroll in this particular program and pay as you go.

 

[Andrew Wilner, MD] (33:15 - 33:20)

And once you have a license, the provider could see 10 patients that month or a thousand, and the AI doesn't care?

 

[Rashie Jain] (33:21 - 33:27)

Yeah, it doesn't matter. I mean, we don't have pricing based on usage. We just base it on the license.

 

[Andrew Wilner, MD] (33:27 - 33:58)

I was exaggerating. Neurologists tend not to see that many, because each interaction takes quite a while. We're not like dermatologists, who might see 40 or 50 patients in a day.

 

We don't practice like that; it doesn't work. Ah, here's something very important.

 

So there's this phenomenon where AI can confabulate, right? It can just make stuff up, and that does not seem to be a well-understood problem. What about your scribe?

 

Is it making stuff up?

 

[Rashie Jain] (33:59 - 35:43)

First of all, kudos to you for saying "confabulate" and not "hallucinate," because that's an interesting point. "Hallucinate" is a misnomer. A lot of times people talk about this problem and say the AI is hallucinating, when in fact it's actually confabulating.

 

So I appreciate you using that word. But yeah, it's a complex problem. Confabulation typically happens because there's leakage from the training data set. You're trying to train the AI model to create a really complex output.

 

And then, of course, sometimes the training data set that was used becomes the basis for the output the AI generates, especially in situations where the input quality is poor and there's not enough information for the AI to process. So it's a rare problem, but it happens.

 

And I think, again, if you're thinking about creating a perfect user experience, optimizing against confabulation, making the probability of that situation very, very rare, is one of those really hard technology problems worth solving, and we've spent a lot of time on it. We've gotten to a point today where we take pride in the fact that our AI doesn't confabulate.

 

It wasn't static. It started at a certain percentage, and we've seen it decline every single month, to the point where it's a non-existent problem for us. But yeah, it's a very real problem for a lot of AI models out there.

 

[Andrew Wilner, MD] (35:44 - 35:48)

All right, last question. How is this going to be better in 10 years?

 

[Rashie Jain] (35:49 - 37:55)

Oh, I think the world is changing very fast because of generative AI. We are reimagining every piece of software in every industry, including healthcare. Unlike in the past, when the healthcare industry was always a laggard in adopting new software, this time healthcare is leading the charge, because the problems are so much more acute in healthcare.

 

There's a real need for better workflow solutions that can completely remove administrative burdens. So in 10 years, I think you will see a lot more AI-native applications across the care continuum, not just focused on documentation: AI solutions that can automate triaging between patients and nurses, AI that will completely automate patient intake and pre-charting.

 

Some of that we already do, but I think it will become more mainstream. Automation of coding, compliance, claims management, dispute resolution with insurers. You can think of multiple use cases where we end up spending tons of time, and all of that is going to get automated completely.

 

I guess software will become AI-native in every way. And ambient AI is going to be really big, because in healthcare, a lot of the conversations that happen between the patient and the physician, or between two physicians, say an attending and a resident, contain critical medical information that often gets lost, because there's only so much you can document manually. But ambient AI is really, really powerful.

 

Imagine a tool that can capture every piece of unstructured information from the moment the patient walks in to the moment the patient is discharged, make sense of all of it, and structure it into the most comprehensive clinical documentation. I think that will be really revolutionary from the perspective of improving patient care as well. So I see that happening in a big way, but the key is to build a solution that optimizes across the care continuum.

 

It cannot be a point solution. And that's essentially what we're trying to do as well.

 

[Andrew Wilner, MD] (37:56 - 38:05)

Okay, so it sounds like you're going to be really busy for the next 10 years. Before we close, is there anything you'd like to add?

 

[Rashie Jain] (38:06 - 38:31)

No, I mean, I would just say thank you so much for this opportunity, and I really enjoyed my conversation with you. It's so refreshing to speak with somebody who wears so many hats: not just a physician, but also such a fantastic podcaster. I really appreciate the questions you asked me.

 

You got me thinking a lot more about all the different things we have to do. So I appreciate that.

 

[Andrew Wilner, MD] (38:31 - 38:45)

Well, thanks very much for the compliment and I certainly enjoyed our little discussion. I'm looking forward to new AI and it looks like my own personal biases are evolving. So that's exciting too.

 

[Rashie Jain] (38:46 - 38:50)

Yeah, I appreciate it. Thank you so much. Thanks for having me.

 

[Andrew Wilner, MD] (38:50 - 41:06)

Rashie Jain, thanks for joining me on The Art of Medicine. And now a final thanks to our sponsor, locumstory.com. Locumstory.com is a free, unbiased educational resource about locum tenens. It's not an agency. Locumstory exists to answer your questions about the how-tos of locums through its website, podcast, webinars, and videos. They even have a Locums 101 crash course at locumstory.com.

 

You can discover whether locum tenens makes sense for you and your career goals. What makes locumstory.com unique is that it's a peer-to-peer platform with real physicians sharing their experiences and stories, both good and bad, about working locum tenens. Hence the name LocumStory.

 

Locumstory.com is a self-service tool that you can explore at your own pace, with no pressure or obligation. It's completely free. Thanks again to LocumStory.com for sponsoring this episode of The Art of Medicine. I'm Dr. Andrew Wilner, Associate Professor of Neurology at the University of Tennessee Health Science Center, Memphis, Tennessee. See you next time.

 

Views, thoughts, and opinions expressed on this program belong solely to Dr. Wilner and his guests and not necessarily to their employers, organizations, or other group or individual. While this program intends to be informative, it is meant for entertainment purposes only. The Art of Medicine does not offer professional financial, legal, or medical advice.

 

Dr. Wilner and his guests assume no responsibility or liability for any damages, financial or otherwise, that arise in connection with consuming this program's content. Thanks for watching. For more episodes of The Art of Medicine, please subscribe.

 

www.andrewwilner.com