602 Park Point Drive, Suite 225, Golden, CO 80401 – +1 303.495.2073
© 2023 Medical Affairs Professional Society (MAPS). All Rights Reserved Worldwide.
Using AI to Accelerate Diagnosis and Treatment
In today’s episode, we’re talking with Joseph Zabinski, PhD, Senior Director, AI & Personalized Medicine at OM1 about Artificial Intelligence — specifically about how AI can accelerate diagnosis and treatment. This episode is sponsored by OM1.
Garth Sundem 00:00
Welcome to this episode of the Medical Affairs Professional Society podcast series: “Elevate”. I’m your host Garth Sundem, Communications Director at MAPS. And today we’re talking with Joseph Zabinski, PhD, Senior Director, AI and Personalized Medicine at OM1 about artificial intelligence, specifically about how AI can accelerate diagnosis and treatment. This episode is sponsored by OM1. So first of all, Joseph, welcome.
Joseph Zabinski 00:32
Thank you. Good to be here.
Garth Sundem 00:34
So we’ve been hearing about AI in healthcare for a long time. Can you start by catching us up on where we are now, and maybe how this conversation would be different if we were having it three or four years ago? Where are we, and what has recently happened?
Joseph Zabinski 00:54
Absolutely. Yeah. I mean, AI has certainly been around for a while. You know, ask people how far back it goes, and you’ll get a further-back answer the older the person you ask. I like to joke that I would have majored in data science if it had been a major, and I’m not that old, but it wasn’t one back in 2010, when I graduated from college. But with respect to AI and healthcare, I do think quite a bit has changed just over the past three or four years. If you remember the buzzword “big data” from three or four years ago, people certainly still use that, but at the time it was really about an awareness that large datasets in healthcare existed and were accessible, that access was sort of democratizing, and that it was easier and easier to observe more about people’s health in a scaled way. And three or four years ago, the question was: what can we do with all those data, using these AI, predictive modeling, and machine learning tools? It was worthwhile to ask whether this stuff would actually work. Nowadays, the focus, I would say, has shifted much more to: we know the underlying technology works in the technical sense, but can we actually do something with this in the real world, when it has to interface with people who are not AI experts, with the clinicians, with the patients? Can we get it to that stage of maturity? And I think the period many of us spent working from home during COVID accelerated this, because a lot of folks weren’t able to travel and go into the office, but they were able to sit at home, look at datasets and the conclusions from those datasets, and say: there’s actually something here, a business case, a clinical case, beyond just the technical aspects. Let’s do something with it.
Garth Sundem 02:41
Oh, interesting. So are we at a tipping point between the theoretical and the practical, then, like, yeah, now we’re gonna use it?
Joseph Zabinski 02:48
Yes, that’s a great way of putting it. I like it, the tipping point between theoretical and practical. Yeah, I think this time we’re in right now is well described as a tipping point of that sort.
Garth Sundem 03:00
Okay, cool. So we’ve got big data, and, increasingly, we’ve got AI tools to evaluate it. So, you know, we’re understanding more and more about diseases, but patients still spend years searching for a diagnosis. Why is that, and how can AI help?
Joseph Zabinski 03:25
Yeah, so this question of time to diagnosis is obviously one that’s been around for as long as there have been people getting sick. And I think the promise of big data and AI in the realm of diagnosis has been: let’s observe these things about people, and then have what we can observe tell us something sooner about what’s actually underpinning, for example, the symptoms they’re experiencing, so they can get a correct diagnosis. A lot of the challenges, if you think about it, make sense, and we’ve all experienced them as patients. You have some symptoms; maybe you wait a few days before deciding to check in with your doctor; then you’ve got to get an appointment; your doctor may ask you to get some tests or some images that take time; those have to be interpreted; you might have to be forwarded on to a specialist for a consult. All of that has an operational cost in terms of time. Of course, people fall through the cracks along the way. Sometimes people have challenges with their insurance coverage in the middle of a diagnostic journey that interrupt the whole process. And then, especially with rare disease or harder-to-observe disease, it’s just difficult for healthcare practitioners, even operating at the peak of their powers, to observe things. Some rare diseases have average times between first symptoms and diagnosis of five, six, seven years, just because you have to see a bunch of people to narrow down the list. It makes sense that it’s nobody’s first guess, but it can be very hard to get to that answer. So that’s kind of where I think the problem comes from.

The value and the benefit that AI and data bring in these cases is the ability to step back and say, at the 30,000-foot view: let’s look for the patterns, the information, present in patients who ended up with a certain diagnosis. AI is really good at finding patterns. So you pick a group of people who all ended up at a certain endpoint, for example getting diagnosed with a condition, and you say: AI, tell me what happened to them two years before they got to that diagnosis, or three years, or five years, whatever it is, and then help me look for those same patterns in people who are at that stage in their journey, who have not reached diagnosis yet, but who are much more likely to reach it. That’s where the AI can come in with that 30,000-foot view, and then personalize down to say: this patient looks like they have that pattern.
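The look-back framing described here can be sketched as a standard supervised-learning setup: patients who reached the diagnosis are the positive labels, features are what was observed in a window before that endpoint, and the trained model scores patients who are still mid-journey. This is a hedged illustration with synthetic data and invented feature meanings, not OM1’s actual method.

```python
# Hedged sketch of the "look back from the endpoint" idea.
# Labels: 1 = patient eventually received the diagnosis, 0 = similar patient who did not.
# Features: counts of events observed BEFORE the diagnosis date (or an
# equivalent index date for controls). All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 1000
diagnosed = rng.integers(0, 2, n)  # 1 = reached the diagnosis endpoint
# Three hypothetical pre-diagnosis signals (e.g. symptom visits,
# specialist referrals, negative test panels); diagnosed patients
# are simulated to show slightly elevated counts.
features = rng.poisson(lam=1.0, size=(n, 3)) + diagnosed[:, None]

model = LogisticRegression().fit(features, diagnosed)

# Score a patient who has no diagnosis yet: the model returns the
# probability that their pre-diagnosis pattern matches the cohort's.
new_patient = np.array([[3, 2, 4]])
risk = model.predict_proba(new_patient)[0, 1]
print(round(float(risk), 3))
```

In practice the feature window ("two years before, or three, or five") and the choice of comparison patients are the hard modeling decisions; the classifier itself is the easy part.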
Garth Sundem 05:57
That’s interesting. So finding patterns. So AI finds patterns, and is trained to find patterns, by seeing who had certain diagnoses and then looking back at the data to be predictive. So is this happening now? Are we using AI in the theater of diagnosis now?
Joseph Zabinski 06:19
Yeah, we’re getting there. It’s certainly true, to the earlier point we were discussing, that there are plenty of applications of this that have been proven, sort of, on the computer. In terms of real-world implementation, that’s the era we’re entering now. And you’re beginning to see this with some of the diagnostic applications that are closest to the diagnosis itself, things like having AI read imaging scans and say: is this someone who should be flagged for an evaluation for lung cancer, whatever it may be? Some of the work we’re doing now is stepping further and further back in time and asking: can we notice some of the earlier, more subtle patterning in people who may be on the way to a diagnosis but aren’t there yet? One of my favorite examples of this: a lot of times, you can gather useful information by looking at what physicians have ruled out. So, for example, a physician might say, let’s get a panel to test whether you’re allergic to a bunch of things. The panel gets ordered, and the response is that you’re not allergic to any of the things tested. That helps rule things out for that physician. But if you look at it in this patterning way, you might say this is a common set of processes that all the people who end up with a certain diagnosis go through: they all go to the doctor, and the doctor always says, get tested for allergy. It doesn’t mean that the test is wrong to do, but it’s one piece of evidence that, when combined with other data, the AI can help us see earlier, and then say: maybe there’s something else going on with this person.
Garth Sundem 07:57
Oh, right. Because if someone is being evaluated for a bunch of possible conditions, then you’re looking at an ecosystem of possible conditions, you’re narrowing down what you’re looking for. And the AI can say: well, the doctors thought it was this, the doctors thought it was that, and eventually, down the line, they’re going to get to this diagnosis that they haven’t even considered yet.
Joseph Zabinski 08:21
That’s right. And it’s exactly the “eventually” part of it where the AI is helping to accelerate things. I sometimes call this the value of negative information: AI is good at looking at things that didn’t happen, or results that came back negative, and putting those together to say, if we press fast-forward, where is this story likely to end up?
Garth Sundem 08:42
That’s interesting. I mean, you think about evaluating all the things that did happen, right? This person has high blood pressure, this person has a family history of XYZ, this person previously had, I don’t know, pneumonia or something. Those are all the things that did happen, and you’d think those would be predictive. But the things that didn’t happen can be equally predictive.
Joseph Zabinski 09:06
Yeah, they can be very informative. And you’re putting your finger on a really important advantage of using AI in these searches for patients who are most likely to have certain conditions, and that’s the ability of the AI to weigh different factors and come to some sort of synthesized conclusion. Because you’re absolutely right: the positive stuff is really important, the symptoms people are experiencing, family history, whatever it may be. But the problem the healthcare system always faces, especially when you’re saying, let’s go help out with diagnosis of a particular condition, is: where do you look? It’s always a question of how you weight these different factors. AI is good at patterning, and it’s good at weighting factors to say, from a large group of people, here are my composite best guesses of who is most likely to have the condition that we’re thinking about. Then it makes sense for the doctor to take a look at them and see if that’s the case.
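The idea of weighting positive findings and negative information into one composite conclusion can be illustrated with a toy ranking. The patient fields and weights below are invented for illustration; in a real system, a trained model learns the weights from outcomes rather than having them hand-assigned.

```python
# Hedged illustration: combine positive evidence (symptoms, family
# history) with "negative information" (a test that came back negative)
# into one score, then rank a population for clinician review.
patients = {
    "pt_a": {"symptom_visits": 4, "family_history": 1, "allergy_panel_negative": 1},
    "pt_b": {"symptom_visits": 1, "family_history": 0, "allergy_panel_negative": 0},
    "pt_c": {"symptom_visits": 3, "family_history": 0, "allergy_panel_negative": 1},
}

# Invented weights; note the negative test result still contributes
# positively to the pattern, as in the allergy-panel example above.
weights = {"symptom_visits": 0.5, "family_history": 1.0, "allergy_panel_negative": 0.8}

def score(feats):
    # Weighted sum as a stand-in for a model's synthesized conclusion.
    return sum(weights[k] * v for k, v in feats.items())

ranked = sorted(patients, key=lambda p: score(patients[p]), reverse=True)
print(ranked)  # highest-scoring patients first: ['pt_a', 'pt_c', 'pt_b']
```

The point of the sketch is the ranking step: rather than answering "does this patient have the condition," the model answers "where should the doctor look first."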
Garth Sundem 10:04
So now we’ve diagnosed them. And of course, at this stage, it’s just as easy as treating them, correct? That’s not exactly the case. So let’s move over to treatment. How can AI help predict treatments, especially in the rare disease space, where we’re still looking around for what really works best?
Joseph Zabinski 10:28
Yeah, it’s a good question, and it’s definitely the next one in the sequence of events, right? The first step is to figure out what’s going on with the person, and then, once we know that, figure out how to help them best in a personalized way. I like to think of AI as the bridge between the insights we can see in very large data, very large patient populations, and the person level, saying: for most people, drug A might make more sense, but for this other person, it might be drug B. What the AI is doing in those cases is conceptually similar to the diagnostic case, though with some differences. Really, what we’re asking is: given someone who has a particular condition, say rheumatoid arthritis, for example, are they likely to benefit if they begin treatment with a certain medication? We can do that by measuring whether other people have benefited from that medication, in terms of their disease activity, for example, and then using the AI to make a prediction for the individual. We can also do it in the other direction and ask: what is the risk that someone will not be able to tolerate this particular medication, or that they’ll have some sort of adverse reaction to it, or that it just won’t help to improve the symptoms they’re experiencing, or the progression of their disease? And again, these are important questions, because a lot of times, nowadays, if you ask physicians how this works, they’ll say: I know the literature, I’ve been practicing for a long time, I know how to treat patients.
But when I reach the last branch in my treatment decision tree, there’s still uncertainty. I know that the drug I’m considering for this person will work in eight out of ten people, but I don’t know if this person in front of me is one of the eight, or one of the two out of ten for whom it won’t work. The only way to find out is to start them on it and see how they respond. And that’s where, at that point in the decision process, if the AI can help provide a little bit of additional information, that’s where it’s most useful.
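The "eight out of ten" framing can be made concrete with a small worked example of how a personalized signal shifts a population base rate. All of the numbers below are invented for illustration; they are not real response rates.

```python
# Hedged arithmetic sketch: moving from a population base rate
# ("this drug works in 8 of 10 patients") to a personalized estimate
# when a model flags a risk marker in this patient. Invented numbers.
p_respond = 0.8                   # base rate of response in the population
p_marker_given_respond = 0.1      # marker is rare among responders
p_marker_given_nonrespond = 0.6   # marker is common among non-responders

# Bayes' rule: P(respond | marker observed)
p_marker = (p_marker_given_respond * p_respond
            + p_marker_given_nonrespond * (1 - p_respond))
p_respond_given_marker = p_marker_given_respond * p_respond / p_marker
print(round(p_respond_given_marker, 2))  # 0.4: the "8 in 10" drops sharply
```

This is the "little bit of additional information" at the last branch of the decision tree: the same drug, but a patient-specific estimate instead of the population average.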
Garth Sundem 12:37
It’s interesting. We’ve had various mechanics for asking this question of efficacy for a long time; one of them was clinical trials, asking: will this drug work for these people? It’s almost as if AI is allowing us to ask that same question in a much more personalized way, in sort of this real-world population. Is that how you’d say it? And also more holistically? It’s not just safety and efficacy; you’re saying maybe we could look at tolerance, maybe we can make a more holistic evaluation of whether this drug will work for these people. Is that sort of what you see, like a richer understanding in the real world?
Joseph Zabinski 13:23
Absolutely. I think clinical trials are designed, by intent, to ask very rigorous questions and to control away as many confounding factors as can be controlled, in the interest of saying, in a causal way: does use of this particular treatment have a positive effect for the patient? That’s important, obviously, to establish that treatments are safe and effective. But in the real world, there’s a whole lot more variability than can be captured in clinical trials, because those trials are trying to mute out a lot of those confounding factors. AI applied to real-world data, those data being everything beyond clinical trials that we can observe about people’s health, is exactly where the AI can pick up on things like: are people actually adhering to this medication? Not in the rigorous process of a protocol-driven trial, but in actual daily life, are they more or less likely to take this medication? What are some of those other factors? This can even be true at the level of how you define what a condition is, or what treatment is. Some of the work I’ve been most excited about this year has been in mental health, around the question of treatment-resistant major depressive disorder. It turns out that lots of people with depression experience resistance to treatment, but beyond that, actually nailing down what we mean by resistance is intensely complicated and highly debated; it’s not even clear that there’s consensus about what it means. So figuring out who we want to consider as belonging to that group is already a challenge, and it turns out to be a challenge AI can help a lot with.
But then, further, asking who is most likely to experience resistance in the future is a really powerful question as well, because all of these different dynamics we’ve just been talking about, response, adherence, all these things, can go into that question, to say: today, when I’m considering starting this patient, what are their chances of actually benefiting from this drug? At a minimum, that helps the clinician be a little more vigilant, taking earlier notice of a failure of treatment. It doesn’t mean that it’ll necessarily change their behavior in every case, because the decision is always ultimately the physician’s and the patient’s. But if we point out that there’s an elevated risk here of this not working, it can help inform correct treatment, or corrected treatment, sooner.
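As an illustration of how a debated cohort definition like treatment resistance might be operationalized in data: one commonly cited, though contested, rule for treatment-resistant depression is failure of two or more adequate antidepressant trials. The field names and the six-week adequacy threshold below are assumptions for the sketch, not a clinical standard endorsed in the conversation.

```python
# Hedged sketch of operationalizing a debated cohort definition.
# Rule assumed here: "treatment resistant" = two or more adequate
# medication trials without response. Field names are invented.
from datetime import date

def adequate_trial(rx):
    # Assumption: "adequate" means at least 6 weeks (42 days) on the drug.
    return (rx["end"] - rx["start"]).days >= 42

def treatment_resistant(med_history):
    # Count adequate trials that did not produce a response.
    failed = [rx for rx in med_history
              if adequate_trial(rx) and not rx["responded"]]
    return len(failed) >= 2

history = [
    {"drug": "drug_a", "start": date(2022, 1, 1), "end": date(2022, 3, 1), "responded": False},
    {"drug": "drug_b", "start": date(2022, 4, 1), "end": date(2022, 6, 1), "responded": False},
]
print(treatment_resistant(history))  # True
```

The substance of the debate Zabinski describes lives in exactly these parameters: what counts as adequate, what counts as response, and over what window, which is why defining the cohort is itself a modeling problem.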
Garth Sundem 16:11
So it sounds like this is the panacea, or the silver bullet, and anytime we need something predicted, we should use AI and predict it. However, are there things we should watch out for? Are there quick wins and straightforward applications, and, conversely, are there also pitfalls? What should we look for, and what should we look out for?
Joseph Zabinski 16:36
Yeah. So, you know, it’s interesting: as this field matures, questions of bias and equity with respect to AI, how it’s trained, how it’s applied, have come up more and more over the past year or two. Those are valid and important questions. An interesting property I’ve observed in my career with AI applications in healthcare is that the AI will never lie to you, but it can answer your questions in ways you didn’t expect. It’s sort of like the genie story, where you have to phrase the wish you make very carefully, or you can end up with a pathological way of getting your wish answered that’s technically true, but you end up dead anyway, or something like that.
Garth Sundem 17:25
Right, it’s literal. You have to be perfectly literal in what you ask.
Joseph Zabinski 17:32
Yes. And you have to be careful in how you ask the question, how you define the population you’re thinking about, how you define the endpoint you’re thinking about. In terms of pitfalls, some of the sloppiness I’ve seen in this field has come when people are careless in how they ask, and then define the parameters of, these questions. Or, and it’s often the case, when people take the idea that AI is the hammer and everything is a nail. Maybe AI is the panacea for everything, but only in a context where we’ve fit the tool to the problem. I think that’s a really crucial thing. It’s subtle, and it takes experience to know how to do it, but that’s something to watch out for. In terms of the immediate benefits of applications like these, I really like the tipping-point framework, because what people are digging into now are the hard parts of implementation. For a long time, the idea was: if we can just get the technology working right, then all the tedious details, like putting this in somewhere, will kind of sort themselves out. Turns out that’s not true.
But I think the thing to go after is those instances where you’ve already got a setting, for example in clinic, where you’ve got a team, could be a care management team, could be some sort of treatment review group, or even the physician themselves, who would say: if I just had this piece of information in my hand, I would be better off, my patients would be better off. Sort of like the example I gave before, of the physician thinking about the medication that would work in, say, 80% of patients. They would say, at that moment: if I knew, or if I had some guidance from an analytic tool on whether this patient was in the 80% or the 20%, that would be helpful. Picking those specific locus points where the AI information can come in, I think that’s the way to get the quickest wins. It’s a little bit in the weeds and close to the ground, but that’s where the credibility can be built from.
Garth Sundem 19:46
Now, you say credibility, and I was going to follow up on exactly that. So if I’m a clinician, and I have someone in front of me who’s either in the 80 or the 20, and the analytic tool says they’re in the 20, am I really going to believe that and change my treatment based on it? Is there still a credibility pitfall when AI is brought into human systems?
Joseph Zabinski 20:08
You know, it’s an interesting question. And it really is something we will need to continue to work on together with clinical colleagues. When I say we, I mean folks who come from the AI or data science branch of this approach. We try to do this a lot at OM1, because we’re very much a healthcare company that uses these technologies to answer questions, as opposed to a tech company that just decided to do healthcare. But it’s hard. Building credibility, particularly with clinicians, requires that you be super transparent. You cannot use current tech jargon and gobbledygook. You need to be able to explain that a tool can be wrong in some cases and still be very useful. And, to your ultimate question, is the physician going to change their mind? It’s tough to get someone to change their mind if they’re 100% locked into their belief about what they’re considering. But what I’ve observed, and I think the clinicians would agree, is that there are plenty of instances where they themselves would say: I don’t know what the answer is here. Do I go left, or do I go right? So we don’t need to focus on those cases, which may be rare, where the AI is truly saying, you strongly believe X, but it’s really Y. We can just say: you tell us where you need the help, in terms of information, where you don’t know which way to go. That’s the point at which we can help.
Garth Sundem 21:43
All right, so AI is becoming usable. Let’s use it, and let’s leave it there for today. Thanks, Joseph, for joining us. To learn more about how your organization can partner with OM1 to use AI to empower diagnosis and treatment, visit om1.com. MAPS members, don’t forget to subscribe. And we hope you enjoyed this episode of the Medical Affairs Professional Society podcast series: “Elevate”.