As Medical Affairs teams face rising content volumes, tighter timelines, and increasing complexity, many leaders are asking the same question: Can AI meaningfully improve the MLR review process, and if so, where do we start? In this practical, scenario-based session, members of the MAPS Medical Governance and Compliance Domain explore high-value GenAI use cases and where AI can supplement human judgment. We’ll also discuss governance essentials and a logical roadmap for responsible adoption, giving attendees the clarity and confidence needed to begin their own AI-enabled transformation.
LEARNING OBJECTIVES:
- Understand the current situation and pain points with MLR
- Understand how GenAI tools can be responsibly introduced into the MLR workflow, highlighting high-value use cases and realistic implementation steps
- Understand key governance and implementation considerations

Moderator: Jaymesh Patel

Speaker: Anneka Kapur

Speaker: Dr William Mwiti
Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.
00;00;05;04
MAPS
Welcome to this episode of the Medical Affairs Professional Society podcast “Elevate”. The views expressed in this recording are those of the individuals and do not necessarily reflect the opinions of MAPS or the companies with which they are affiliated. This presentation is for informational purposes only and is not intended as legal or regulatory advice. And now for today’s “Elevate” episode.
00;00;33;14
Jaymesh Patel
Welcome, everyone, to the MAPS podcast. I’m Jaymesh Patel, and I’m joined here today by Anneka Kapur and William Mwiti, my colleagues in the MAPS Compliance Area working group. And today we’re exploring something many people are getting asked about: how a typical pharma company can start incorporating GenAI into the MLR process and achieve various efficiency gains. Now, to help navigate this topic, we’re going to start with a representative scenario that I think many teams are really facing right now. Here’s the scenario. We have a pharmaceutical company with an MLR team that is drowning. They’re reviewing 30% more content than last year, but the headcount hasn’t really moved. Review timelines are increasingly compressed, and senior leadership wants to use AI to streamline processes and gain efficiencies, but they’re not really sure where to start. Anneka, William, welcome to the podcast. Thanks for joining me today. What are your initial thoughts? Any reactions?
00;01;43;03
William Mwiti
Definitely agree with it. We are looking at many products getting launched and headcount increasing on the commercial side, but on the medical side, the MLR teams are not getting any headcount. So it becomes very, very tight in terms of getting material out there.
00;01;57;02
Anneka Kapur
Yeah, I totally agree. I don’t have anything extra to add at the minute.
00;02;00;17
Jaymesh Patel
Right. Okay, so let’s get into this in a bit more detail. There were some pain points highlighted in that scenario, and I think it would be good to actually start with those. William, you touched on a couple of them; you mentioned there’s an increasing number of launches. Anything else in terms of these pain points that really resonates and, you know, maybe increases the need for something like AI to help improve efficiency?
00;02;28;09
William Mwiti
We’re looking at an environment in which we have multiple therapies, as we’ve talked about, and multiple launches planned. And this is increasing the kind of pressure that we tend to see in the MLR review process.
00;02;41;07
Jaymesh Patel
Yeah. And Anneka, anything else from that side, you know, in terms of these pain points that really hit home?
00;02;47;22
Anneka Kapur
I think another one for me is working with global materials and global teams. Those are quite frequent pain points, in that we have to amend things or edit things or review things in short timelines. Again, that goes back to review cycles and the number of changes to materials.
00;03;09;01
Jaymesh Patel
Yeah, absolutely. And I think even the science around a lot of the products is getting more complex. And, you know, reviewing that along with the business strategy and the context around it makes it all the more important, I guess, to make sure we do have a more streamlined and efficient MLR process.
00;03;30;13
William Mwiti
So from these pain points, I mean, the high content volume, the tight timelines, limited resources… what part of this feels most familiar to you guys? And if you were to get a bit more granular, what are the points that the audience needs to be aware of?
00;03;46;16
Jaymesh Patel
Yeah. I think one thing I’ve certainly found is that we’ve always grappled with multiple review rounds for the same material. And within that, there’s always, I feel, a proportion of basic issues or feedback that could be a target for efficiency. You know, how can we reduce those review rounds? If we can actually remove some of these basic issues upfront, that could be one potential area to look at, I think.
00;04;18;26
Anneka Kapur
Just to touch on review rounds and materials: I also think that there is a great and increasing volume of materials required, and these materials are becoming more complex, with more complicated science, more complicated messaging, more intricate references linked, and also those questionable claims where, you know, do we err on the side of caution, or are we being too risky? Being able to streamline some of those processes matters, because they are quite common pain points.
00;04;53;27
Jaymesh Patel
Yeah, I think one thing for sure is the volume of materials, for various reasons, but certainly, as we know, the scope of digital activities and materials has increased, which I think is one reason for that.
00;05;11;25
William Mwiti
I think also looking at fragmentation in this process: the global and local teams don’t have seamless ways of working and handoffs. And this is compounded even further when the volume keeps rising, as we said, and you have a relatively static headcount, which creates a bottleneck in terms of some of these approvals.
00;05;31;23
Anneka Kapur
So we’ve looked at some of the current MLR challenges and pain points, but in this particular scenario that you mentioned, Jaymesh, if the VP asked whether AI can help fix anything in this scenario, what do you think our thoughts could be? Because for me, the first step is understanding where human judgment is required versus where tasks are repetitive and rule based, and then where AI can be brought into the process. What’s key for me, before I hand over to both of you, is that the individual teams and individual stakeholders need to come together to collaborate closely and map out those processes to understand where time is being spent.
00;06;16;21
Jaymesh Patel
Yeah, I think that is a really good point. Because AI is obviously a buzzword, and it’s easy to rush for gold, but we actually have to, I think, look at the current MLR process. And I don’t think we can assume it’s the same for every company; there can be lots of nuances and differences. So I think each company really needs to map out the process first, understand it well, and maybe even simplify it before even thinking about how we can integrate AI. And I think mapping it out will actually help teams visualize where AI is best placed and, as you said, understand where that judgment is still obviously crucial. This is a good segue to, I guess, the next thing, which is: if a team is deciding to start somewhere with AI, even if it’s a pilot, where is that low-risk, high-impact opportunity to get a feel for what it could do? What are those use cases? That’s what I’m getting at, and I think it would be good to explore it a bit further. Any thoughts?
00;07;34;21
Anneka Kapur
Yeah. I mean, from my perspective and in my own experience, it’s really pre-submission where you should focus first. How can we get those materials correct before they even get into the MLR process? Are agencies using AI to create materials, or using the information that we have given them to create materials that are more streamlined, more on strategy, more compliant? And then when those materials come back to us or our commercial colleagues, for example, using AI to make sure they are in line with what we want to deliver. Are the references correct? Are they correctly linked? All those sorts of things can really help before a material even gets into the MLR process, and I think AI can really help there.
00;08;26;11
Jaymesh Patel
Yeah, I think you touched on a key area, which is certainly in my mind and is for me quite a big bucket: the referencing. I think a lot of teams devote a huge amount of time and resource, as you said, to the linking of references and, of course, to ensuring their accuracy. I think it’s a big rate-limiting step in the MLR process. So AI can certainly be explored for finding the best reference sources, for flagging where a reference does not support a claim, and generally for looking at the accuracy of the material based on the tagged references. So within this whole referencing area there is certainly a lot of scope; it is a big area and probably does need a lot of thinking and work. William, any other thoughts from you?
00;09;28;19
William Mwiti
I think on the bit of repetitive tasks, we can look at cross-asset consistency, because sometimes you have a parent job from which multiple other child jobs feed, or, you know, markets that have different claims based on what is registered in a different version. I think AI can be used to simplify those tasks and allow the MLR team to focus on the strategy behind the pieces, as opposed to trying to look across different markets and different items to figure out, you know, what needs to be highlighted or removed for certain markets. But I also think, more importantly, when you are looking at this whole process for MLR feedback, AI can be used to look at some of the patterns and trends that have been identified in certain jobs, maybe historical issues or recurring feedback on certain jobs, and feed those into models that can give some level of review, so that you don’t end up repeating the same feedback on clones of the same job. It would be good to have AI harmonize the review across the process. So I think those are two things: cross-asset consistency as a use case, and learning from historical reviews and applying that to future reviews.
00;10;55;11
Jaymesh Patel
I think that’s a really good point, William, actually. Because, now that you mention it, I think we all know that a lot of comments are made across different materials and all that data doesn’t really go anywhere. So certainly looking at and tapping into what those comments are, what the repeated comments are: can we learn anything from that and potentially feed it higher up in the process, into content creation, so that, you know, the submission quality is even better upfront? And one more thing I actually wanted to mention, probably the big elephant in the room, is looking at how AI can help with regulatory intelligence. You know, spotting risky language, potential off-label concerns, differences between regions. And this is obviously where currently the signatory would be applying judgment, and I think it’s still important to be applying judgment, but could AI augment that process to help flag where the content doesn’t align with the relevant pharmaceutical code of practice? I think that is certainly a key area to help at least streamline, and potentially make reviews quicker as well.
00;12;10;24
William Mwiti
All the stuff we’ve discussed sounds really promising. We’ve looked at those five potential cases here, but then you need to look at it from the flip side. What happens when AI gets something wrong? What are the failure modes that we need to plan for? Is this where the governance reality check comes in?
00;12;27;21
Anneka Kapur
I think that’s a really good point, William, and something that we do need to discuss, because before we get into the details of governance, we need to go back and think about what model we’re looking at. Is it a pure AI model, a purely human model, or an AI-and-human partnership model? The partnership model, I think, is key here for AI to operate correctly. Going back to what you said earlier in the case studies, a human needs to be able to go in and check that what the AI has done is correct and where tweaks need to be made. So the human essentially retains the final accountability, but the AI model augments the review workflow. Now that we’ve cleared that up, let’s dive into some of the other considerations. What else should the team be thinking about as they pilot some of these AI use cases in the MLR process?
00;13;22;06
William Mwiti
I think I’ve set the tone for, you know, really diving into an accountability framework. What should be in this accountability framework? Obviously, we need to have clear SOPs for AI and AI use: what can AI provide oversight on, and where does a human need to sign off? What are the key critical points where we need a human in the loop to validate? Because at the end of it all, as far as regulators are concerned, it is a company and a human that is making these decisions and is responsible for these materials. We also need to figure out, you know, how we log some of those decisions that have been made, especially whether it was an AI review versus a human review.
00;14;00;28
Jaymesh Patel
Yeah, William, that’s a good point, actually. One thing I take from the accountability framework is that we probably also need to look at the explainability of the AI model. When it is flagging something, comments and feedback, it needs to be able to articulate why. And I think that is key because, you know, there need to be transparent and traceable rationales to enable humans to actually validate those outputs, and ultimately that calibrates trust. And eventually, as you talked about, we need to make sure that whatever decisions we’re making are defensible under that human-in-the-loop governance model. That probably takes you on to the other aspect, validation, because I think it’s key that whenever a model or an AI product is introduced into the MLR process, it’s at least tested on a sample initially to see what the performance is like. I would say there probably needs to be qualitative feedback from current reviewers to see whether it’s on target, but then also some sort of scoring mechanism to get quantitative feedback as well. And I think together that gives a good overview of, you know, whether it can be taken forward.
00;15;35;28
William Mwiti
Obviously we can’t have a governance check without looking at it from the data governance point of view. And this is very critical, especially since a lot of the AI models can potentially use internal, company-confidential data to train external models. So every company must answer a couple of questions. My view is we need to understand where the data lives. Is this data private? In terms of, do we have any sandbox environments that prevent internal, company-sensitive data from training external models? What else is logged? But more importantly, I feel that we need to have a critical view of levels of control and who can access the model, and ensure that it’s integrated into the current content review and digital asset management platforms, mainly because these are already there. People are used to these tools, and it’s very difficult to start designing other new models. AI should be able to fit into some of these current digital asset management platforms.
00;16;43;01
Anneka Kapur
Thank you both. Just to add to that, I think cross-functional collaboration is especially important when utilizing AI in our projects because success depends on aligning science, regulation and patient safety, and no single team can manage that on their own. And I think it goes back to what you said, William, we have data governance, regulatory validation… that we need all those things in place and all those stakeholders around the table to ensure success.
00;17;12;26
William Mwiti
So, once the team aligns on governance and the governance requirements, how do they actually roll this out in a way that works? Is it time for an implementation roadmap?
00;17;23;04
Jaymesh Patel
Yeah, good point. I don’t think we can say that just doing the due diligence itself is enough. Part of implementing this, of course, is that you’re working with systems and people, and that requires looking at the overall process and, you know, the change that comes with it. So now that we’ve looked at the use cases, I mean, what else do we think defines that project success?
00;17;58;00
William Mwiti
I think the very first thing we need to look at is change management. Obviously, with new technology coming in, there will be concerns around job displacement, and we need to clarify the role of AI in the process. It’s been said multiple times in the past that AI will not replace people’s jobs; it will only replace those people who do not know how to use AI. So it’s important that, as part of change management, we address these concerns around job displacement. Otherwise it will be very difficult to get AI off the ground.
00;18;29;06
Jaymesh Patel
Yeah, yeah. And maybe from that perspective, we should make sure that the MLR teams are really set up to understand the AI as well. But from my side, I think another key point is that teams should start small: pilot one use case or even one therapy area. And, as I said before, maybe take a sample, you know, one team for example, and just look at evaluating the performance before expanding to a wider rollout. Certainly we know that AI can be unpredictable, and I think that’s all the more reason to start small, to understand not just the AI model itself and what it’s giving out, but also what process changes you need and, as you said, William, what change management considerations you will need for a wider rollout.
00;19;35;24
Anneka Kapur
Can I just build on that, Jaymesh, a little bit with regards to the first pilot? I think this is a key consideration because it can ultimately set our teams up for success as to whether we are going to continue to use AI and how we’re going to use it. And building on what you said, the most important thing for a pilot, for me, would be a very low-risk but high-impact project that would use existing data or existing material to support, but not replace, human decision making. That way we can demonstrate and build confidence with our regulatory teams, our compliance teams, any other users of the project, and also our leadership, because ultimately, to roll it out moving forward, we have to get everybody’s buy-in. So for me, low risk means that we’re protecting the company and also our patients by using existing data: like I said before, our own projects or products or materials that we have worked with quite regularly, so we’re quite familiar with them. And we need to be able to justify, you know, the return on investment. What are we delivering and how will we be delivering it, to ensure we get value while keeping our experts part of those conversations to refine it every single time? So once the pilot has completed, it can be rolled out seamlessly.
00;20;51;22
Jaymesh Patel
Yeah, I think building confidence is certainly a key thing, and part of that really is that we need the data to back it up. So for that reason, and I mentioned this a little earlier, we do need a good definition of what success looks like: a clear scoring mechanism for validating these tools, clear reviewer feedback. And as I said, for different companies, success could mean different things in the MLR process, whether it’s the reduction in review time, the number of rounds, or, as we said earlier, just being able to help the team manage the current or increasing workload more efficiently. So, you know, there are lots of different metrics that could be looked at, but I think that really needs to be defined by the individual team working on the pilot or the rollout.
00;21;52;04
William Mwiti
I think it’s also important that we look at the vendor and technology selection process. We need vendors who are familiar with the MLR process, because then they’ll be able to design solutions that work within it. We also need, as I said earlier, secure data environments. No one wants that data being out there, used by competitors or, you know, the public. We need secure data environments and clear sandbox environments to be able to do this kind of work. And finally, as I said earlier, a solution that integrates with the existing review and approval systems will make it much, much easier to bring AI into the process. It will not take a lot of time to get it up and running if it is somewhat plug and play.
00;22;41;04
Anneka Kapur
So I guess at the tail end of this roadmap, once AI has been piloted and rolled out and starts to become embedded, I think the real value will be created when it becomes an integral part of how we work and not something that we just try out on the side. In order to do that, we also need to take into consideration our SOPs, training, and feedback loops. Because if AI isn’t built into our SOPs, it really won’t be approved to operate; it might be used inconsistently, and there might be a compliance risk or no audit trail. So we really need to build it into our day to day, so it’s clear when and how it can be used, with that defined human oversight, to obviously ensure we get repeatable quality every single time. And of course AI changes how people make decisions. So without training, people don’t necessarily understand or trust the outputs, but with training they interpret the outputs correctly and use the tools more safely and probably more often. In that regard, I think feedback loops are also important once you have rolled out SOPs and training and you’ve had a chance to work with AI on certain projects. Having that collective, cross-functional collaboration again, where people can feed back, ensures we continue to develop and respond to AI and use it as best we can.
00;24;01;26
Jaymesh Patel
Yeah, certainly. I think, I mean, what do you think, Anneka, about this whole concept of, you know, maybe we obviously start with the training, but certainly it seems to me that you will need constant validations and trainings at various intervals, even after it is rolled out.
00;24;21;13
Anneka Kapur
Yeah, absolutely. I think AI changes so quickly and in line with that, what we do in pharma changes so quickly. One day you could have something that we think is a relatively straightforward project and then be given something that is extremely complex and maybe something that you’ve never worked on before. So we can really use AI to support with those models, but keeping those humans as part of that loop to make sure that we are learning from what we’re doing, to repeat it again successfully and maybe even get better.
00;24;51;21
Jaymesh Patel
Yeah, it is a journey. I think that’s the takeaway from that, and I don’t think there is a destination as such. From what you’ve said, it’s certainly something that requires ongoing judgment and evaluation. And I think we’ve gone on a bit of a journey through this podcast. We started with pain points, then looked at some use cases, the governance, and what an implementation roadmap could look like. Finally, I think it would be fitting to look at what the future outlook could be. So maybe we can just go around one by one to hear final takeaways or thoughts. William, I’ll start with you, if that’s okay. Any final thoughts?
00;25;43;07
William Mwiti
In terms of the future outlook, and in terms of what success for AI in the MLR review process can look like, it will mean reviewers spending less time on repetitive checks, maybe 50 to 60% less, and in the process also reducing their cycle times for review.
00;26;02;07
Jaymesh Patel
Great. And, Anneka, any any thoughts from you?
00;26;06;00
Anneka Kapur
Yeah. I think success for me is reviewers and signatories spending less time in MLR cycles, and more time on strategic consultations and other high-value work, really, where impact can be made in pharma.
00;26;21;22
Jaymesh Patel
Yeah, absolutely. I think we all know there are a lot of grey areas and areas where review teams can help strategically, so certainly that seems like a great potential value add. For myself, I actually think the business impact of this could be large, especially if we do get the efficiencies that we want; this could support quicker and, of course, more compliant materials. And finally, I would say it could certainly help us manage the increasing workload within the MLR space, but also potentially reduce costs as well, with how materials are created. Because if we can get the content better upfront, that can obviously reduce the number of rounds of review, as we said, and the reduced timelines overall can hopefully translate to reduced overall cost, which is, I guess, always a good thing for the business. So thanks, all, for joining me on this podcast. Really appreciate the time. William and Anneka, it was great talking to you, and we hope that people listening to the podcast find it useful and look forward to further discussion on this topic within the MAPS community. Thank you.
00;28;04;21
Anneka Kapur
Thank you, Jaymesh.



