This podcast looks at the identification and mitigation of risks in four examples of activities at the Medical/Commercial interface with the intent of strengthening your understanding of how to conduct these activities in a compliant manner. The examples include Medical/Commercial field interactions, patient support programs, advice seeking, and use of artificial intelligence.
Moderator: Jon Dixon
Speaker: William Mwiti
Speaker: Anthony Scott Greco
Speaker: Jessica Santos
Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.
00:00:00:00
MAPS
Welcome to this episode of the Medical Affairs Professional Society podcast “Elevate”. The views expressed in this recording are those of the individuals and do not necessarily reflect the opinions of MAPS or the companies with which they are affiliated. This presentation is for informational purposes only and is not intended as legal or regulatory advice. And now for today’s “Elevate” episode.
00:00:33:08
Jon Dixon
Welcome to this podcast on risk mitigation and compliance at the Medical/Commercial interface. This podcast is based on a workshop that we ran at the Americas meeting in New Orleans and at the EMEA meeting in London. My name is Jon Dixon. I spent 36 years at GlaxoSmithKline, primarily in Medical Affairs, medical communications and particularly medical governance, and I now work as an independent consultant. I’m very pleased to be joined by three highly experienced experts who will now introduce themselves, and we will start with William.
00:01:14:02
William Mwiti
Hi, my name is Dr. William Mwiti. I’ve been in the pharma industry for the last 14 years, and I’m currently serving as the Head of BioPharma Medical for AstraZeneca Sub-Saharan Africa. I’ve been leading the Governance, Risk and Compliance piece in pharma for the last 15 years or so, and I also serve as a Medical Affairs Chapter Lead for Africa. Thank you.
00:01:38:16
Jon Dixon
Thank you, William. And now Anthony.
00:01:41:11
Anthony Scott Greco
Hi, everyone. My name is Anthony Greco, and I’m a Managing Director with PwC. I sit within our risk and regulatory platform, supporting pharma and life sciences companies in various compliance and risk-related matters, including a focus on Medical Affairs and the associated risks within the function. I’m looking forward to sharing insights today.
00:02:07:01
Jon Dixon
Thanks, Anthony, and Jessica.
00:02:09:20
Jessica Santos
Hi. I’m Jessica Santos, currently Global Head of Regulatory Compliance and Quality Management for Oracle Life Sciences. I will share my experience with AI and how it affects our practice in general. Thank you.
00:02:27:00
Jon Dixon
Thank you, Jessica. So, the objective of this podcast is to improve understanding of the risks, and their mitigation, in various activities at the Medical/Commercial interface. The way we’re going to approach this is to talk about a few key principles first, and then work through four case scenarios: examples of the sorts of activities where the Medical/Commercial interface is important but also carries risk. The first scenario is around field interactions between medical and commercial. The second is about patient support programs. The third is about advice and insight seeking. And the fourth is about the use of artificial intelligence, obviously a very topical subject. Then we’ll do a quick wrap-up at the end.

So let me begin with some of the important key principles. For most activities, if we apply the principles of the codes of practice and adhere to the relevant regulations and the related company policies and SOPs, then we have a strong foundation for mitigation of the key risks. And what are those key risks that we’re all concerned about in our industry? Inappropriate or illegal promotion is a key one. Bribery and corruption is another. Others are patient safety, privacy, and inappropriate payments to third parties. Now, these principles are reinforced by training and other compliance activities, but all activities retain a degree of residual risk, and we need to be assessing that risk and mitigating it in our decision-making process. There is no such thing as zero risk. I have come across one or two people who seem to think that is the case, but life is a risk and doing nothing is a risk, so everything has some risk. And today we have more choice in the options for implementing our activities, particularly with digital solutions and, increasingly, the use of AI and machine learning. This creates opportunities, and can actually reduce some of the more traditional risks, but it also creates other risks, and that is why it is important to always have a risk mitigation mindset.

Now, for anti-bribery and anti-corruption principles, we just need to be very aware of the legitimacy of intent of our activities, interactions and transactions. They must have a valid purpose and be conducted in line with company values and policies, and of course external regulations. We must always be transparent: everything has to be done in an open, transparent way and properly documented. Things have to be proportional: transfers of value made and resources invested must meet, but not exceed, the needs of the interaction or transaction. And we must not exert undue influence, and we must avoid conflicts of interest. So if we get all these basic principles correct, then we have already mitigated a huge amount of the risk associated with these activities.

Now, with that in mind, we’re going to turn to the first example, which is about field medical interacting with sales reps and executives, and for this I’m going to turn over to William to present the case.
00:06:32:09
William Mwiti
So, thank you, Jon, for introducing the first topic. It’s a really common scenario that most medical and commercial teams are aware of: medical/commercial interactions. During our workshop we discussed a particular scenario around sales representatives and MSLs visiting an HCP together, but for the purpose of this case we changed it a bit. We moved to a congress setting and looked at more senior leaders in the organization, typically a Medical Affairs Manager or a Portfolio Marketing Director, meeting with an agreed Global Opinion Leader to discuss the current management of a certain type of disease. The product is in the pre-launch phase, typically two to four years out; for our scenario it was around two years, and the meeting took place on the sidelines of an international congress. Generally, medical/commercial interactions can happen while a congress is in session, outside a congress, or out in the field, and there are typically risks that could arise, obviously related to the life cycle of the product: whether we’re in the pre-launch phase or after launch, and whether the data are on-label or off-label. These risks need to be identified, and appropriate guardrails put in place, to allow this kind of interaction to take place.

Some of the key risks we discussed in this example start with the perception of inappropriate promotion. Because this particular product is only coming through in the next two years, it is unlikely to be registered yet, so everything you are talking about in that space sits within the confines of the pre-approval set of rules. This is one of those risks you can mitigate by having the teams move away from the commercial booths to a neutral location, limiting the conversation to the agreed topic, understanding and hearing what the HCP wants to talk about, and being careful to respond with information that relates to the topic that has been agreed. As part of this conversation, the KOL may ask questions about the new product. This falls within Medical Information, and because the product is unlikely to be registered, we should make sure the responses are in line with Medical Information principles and processes. Some of the information we are seeking from the HCP might also start to fall within the realm of consultation, and this is a risk you need to anticipate: if a KOL feels they are providing information that might warrant a fee for service, you need to settle those expectations beforehand, either agreeing the payment up front or agreeing that this conversation will not meet the requirements for a consultancy agreement, so that there are no expectations of payment for this type of conversation. Second, congresses might also have rules that govern some of these meetings, including where the conversations can take place. As a mitigation measure, it’s always important to check with the congress to see what rules apply and to ensure compliance, including on where the meetings can take place. Obviously, the different countries where congresses occur might also have different codes of practice.
It’s important to understand the codes of practice of the different countries and apply them to these types of meetings, and when in doubt, always consult with the local medical team where the congress is taking place to ensure there is compliance. Additionally, some of these conversations might take place in a dining room, somewhere you need to be seated, at a time when the HCP or the medical or commercial staff would normally be taking a meal, and there is then the potential for a transfer of value that needs to be reported. It’s important that we understand what fair market value looks like for that market, and even as we provide this hospitality, it needs to be reported where local regulations require it and where company policy requires it. So, as a mitigation measure, it’s important that we adhere to the national, state and local laws and regulations, and also adhere to company policies and procedures. It’s important to document and disclose these legitimate transfers of value so that in the future we are able to show that these interactions were required, that they were necessary at that point in time, and that they were conducted in accordance with all policies, procedures, local regulations and codes. So, for this example, we looked at a typical engagement between medical and commercial in the presence of a Key Opinion Leader, for a medication that has yet to be launched. We’ve seen what the risks look like and how we’re able to mitigate them, so that we have the appropriate guardrails for this type of engagement and the meeting goes ahead as planned. So, thank you, Jon, over to the next case.
00:12:29:17
Jon Dixon
Okay, thank you, William. Let’s move straight into the second case, which is around patient support programs, and for this I’m going to turn to Anthony to share his wisdom.
00:12:42:01
Anthony Scott Greco
Yeah, thank you, Jon, and thank you for that overview of the various principles we need to think about. We’ll look at a case study involving patient support and work to apply those principles to this example, because, as we know, we can create our policies and our principles as best we can, but once we actually get out there, there are always going to be differences and nuances that we need to evaluate. So having a core set of principles to think about in terms of legitimacy of intent, transparency and proportionality is so important as we think about these different risk areas.

For patient support in particular, when we’ve had this discussion at some of the MAPS conferences, we asked the teams, the Medical Affairs participants, where within their organizations their patient support programs sit. Was it primarily medical that was involved with patient support programs? Was commercial primarily involved? Was it a combination of those two functions as well as other functions? And what we found is that it really is a combination of several different functions, depending on the type and nature of the program. We would expect medical to be involved in certain types of programs, particularly those providing education or information related to the disease, but we also see the role of commercial in some access-related programs and other types of support programs that patients may receive from our companies. So it really was a combination of several different functions.

In terms of the scenario we want to look at today, the one we put forward to the groups was around a new product, a respiratory inhaler for COPD, for which marketing is considering providing training and support for patients on how to use the product. This training would include online video demonstrations and even access to a dedicated nurse to answer questions about how to use the inhaler and discuss the medicine. Of course, this type of program has several risks that we need to think about. In the course of the conversations we talked about both the risks associated with providing an online video demonstration, which needs to be created, launched and then managed, and the risks that arise once we get into interactions between patients and a representative of the company.

As we think about those risks within the patient support area, we came up with a few key risks and ways to mitigate them. First, as we think about the interactions themselves, with a nurse educator providing specific information to a patient, we always need to think about inappropriate prescribing bias or promotion directly to patients. We want to make sure that we are only providing the approved messaging and information and not attempting to promote the product to the patients. So we only want to introduce this PSP, and the value of this PSP, to patients post-prescription; we don’t want to use it as a tool to help promote. And we want to think about whether the patient is eligible to participate in the program: that they have the prescription and meet other requirements before they gain access to the PSP, both the training and the nurse.
The second key risk is the role of the nurse educator. Often these individuals are medically trained, they are nurses we’ve hired to provide support, so there is always the risk that they could provide medical advice to the patient. Given their role and the fact that they are representing a manufacturer, we want to ensure that they are not providing medical advice and are instead providing only the information that is necessary. So we want to ensure that the nurse is trained on potential scenarios and how to address them, and keeps within the guardrails of the information they are allowed to provide. Periodic monitoring of those nurse educator interactions is also important, so that we can confirm the nurses understand their role and are complying with the requirements and policies we have set up for them.

The third key risk, again in terms of the nurse educator interactions, is failure to report adverse events. In the course of those conversations, a patient may identify an adverse event in relation to the product, and of course pharmacovigilance needs to be alerted within a set timeline. So we want to ensure there are processes to report and then review those adverse events; having the appropriate processes, and training the nurses on how to collect that information, is going to be critical.

We have data privacy issues as well; certainly we are dealing with patients. So as we administer the program, and if we are, for example, hiring a third party to administer the program on our behalf, we want to ensure they have the appropriate data privacy controls and requirements built into their processes and systems. And finally, there is legal liability in any of these: we want to ensure there is a legal review of any PSP as part of the company’s broader procedures. So those are generally some of the risks that we think about for patient support and patient interactions, and some of the ways to mitigate them.
00:19:48:08
Jon Dixon
Thank you very much, Anthony. Very clear. I think patient support programs are one of those areas where there are many different formats and many different approaches, and no two PSPs are ever going to be the same. So it’s an excellent example of where you really do have to think about the risks and work them through each time; you can’t just assume that a new PSP is going to be similar enough to the last one. So, a very important example.

Okay, the third example is around advice seeking. In the workshops, we asked the attendees the thorny question of whether or not commercial should participate in advisory boards, and it was interesting that we had a whole range of responses. Some were ultra strict and said no, commercial cannot attend a medically led advisory board. At the other end of the spectrum, there were people who said pretty much anybody can attend, with no restrictions. But there were a lot in between: some thought a commercial person could attend as an observer, some that only executive marketing leaders might be able to participate, and others only if the person had an active role. So quite a mixed picture. Personally, I would argue that provided you have one process that everyone needs to follow for advice seeking for an advisory board, then it doesn’t matter what your function is, because you’re all following the same procedures; but there are certain questions that need to be asked about the relevance of having certain people in the advisory board.

If we think about risks for a minute before we look at a particular scenario, there are some risks which would arguably apply to any advisory board, regardless of the composition and regardless of the advice being sought. For example, usually an adviser is going to be paid for their service, they will be contracted, and there will be a written agreement; that allows you to manage expectations about things like fair-market-value payments, but also the expectations of the adviser. There could be potential transfers of value depending on the length of the advisory board, for example if there were refreshments or a meal, so what about the local laws and regulations around such things? If you’re in the US, you even have to consider which state you are organizing the advisory board in, because the requirements may vary from state to state. And perhaps the discussion might stray from the actual advice you are seeking, which is why it is important that all the company attendees are trained and briefed so that they understand what they can and cannot do in the advisory board.

Now, if you add in some commercial representation in some shape or form, then you’re going to add other risks. So let’s take an example of a company with a new medicine in development. The company is looking for more input to better understand how the emerging product profile can best meet the needs of patients, and let’s say that commercial leadership actually requests a joint advisory board between commercial and medical so that both parties can get the advice they need around this new product and hence maximize the opportunity. And let’s assume that this is going to be a cross-border advisory board for the top five European markets, which would typically be Germany, France, Spain, Italy and the UK.
Now, certainly we’ve added some layers here. We’ve now got a commercial and medical combination, we’ve got five different countries where the experts reside, and so on. So we start thinking about other risks in a more obvious way. For example, there is a risk of actual or perceived pre-approval promotion, so you have to be very careful that the materials and the communications are clearly non-promotional and have appropriate review and approval. Advisers may be proposed by commercial, but they may do so on inappropriate criteria; advisers should always be selected based on clear, objective criteria and on their expertise in relation to the advice you are seeking. There could be seen to be some undue influence on the advisers, or perhaps pressure from the commercial staff on the medical staff to include certain people, and we have to be very careful about matters like that.

I think some of the key practical points here are being very clear on roles and responsibilities, to ensure that the company has a united approach to the advisory board and that every individual is transparent about their function. Remember transparency, one of the key principles. We need to be very clear on the advice that is being sought, which must be agreed beforehand, and on which of the two functions is going to lead the discussion on the particular topic in question. Engagement of local staff also matters in this case: we’ve got five countries involved, and engaging with the local staff in each of those countries is quite important. First, to make sure they understand that one of their experts is being approached; secondly, to confirm that the expert is allowed to be an adviser; and perhaps thirdly, to make sure that the person in their country is appropriately looked after and managed so they are available for the advisory board. The one thing we haven’t mentioned here is how we’re going to run this advisory board. If it’s face to face, the traditional format, that’s one thing; it might be by videoconference, or on some sort of digital platform, which may even allow some sort of asynchronous approach. Each of those formats has somewhat different risks, and they need to be thought through. And most importantly of all, perhaps, there should be no coercion from commercial on the adviser selection, because commercial may want inappropriate people; we have to make sure that, as I said before, the selection is based on objective criteria.

So even with a relatively straightforward scenario, there are a lot of risks to think about and to mitigate. But if it’s carefully thought through, there is no reason why such an advisory board could not occur. With that, I think we’ll move on to the fourth and final scenario, which is the use of AI. Jessica.
00:27:52:12
Jessica Santos
Thank you, Jon. Hi. AI is a very popular topic right now; it didn’t even exist ten years ago, and now everybody is asking the same question. It’s no longer a question of whether I will use it; it’s about how I use it, because you will be using it. It is a technology that’s extremely powerful, very useful and very interesting as well. In the last couple of years, AI legislation has developed really fast. Usually governance frameworks are slower than the technology development, but this time we do see the regulators being very hands-on. There is the EU AI Act, the White House executive orders, different state laws, and AI legislation in Canada, Australia and Asia Pacific, pretty much everywhere, and it changes very fast as well.

So who is responsible and who is liable? That is probably the most important question for everybody who is going to deploy AI. There are three different players in every scenario: the provider, the deployer and the user. Take a hypothetical scenario: you are employed by company A, you take some of company A’s information, put it into ChatGPT and ask it to generate a summary. In that scenario ChatGPT, which is OpenAI, is the provider supplying the platform; your company A is the deployer; and you yourself are the user. How the rules apply then depends on the jurisdiction, how you use it, and so on.

At the conference we asked people this question: what type of AI use cases has your organization explored within Medical Affairs? We gave people the choice of medical intelligence, which is using AI to perform targeted and efficient searches of relevant scientific literature; medical insights, which is analyzing field team observations and other insight data to develop tangible and actionable insights; medical information, which is generating medical information responses to unsolicited questions; medical communication, which is delivering new medical content to HCPs based on their profile and their requests; or something else. People could choose as many options as they liked. To our surprise, more than half, almost 70%, of the people in the audience chose medical intelligence; that is already widely used, with AI as a search function, very popular. After that, about a third each chose medical insights, medical information and medical communication. So AI is really on the rise and it will be a technology to use, but like a double-edged sword it is very powerful and also very risky.

So we gave everybody this use case. Hypothetically, a Medical Affairs team has designed an AI use case that will support the generation of scientific information related to recent studies involving their new product. This AI-generated information will be used to support medical study and publication plans, scientific exchange content generation, and medical information responses. So information generated through the AI use case will be used both internally and as part of external interactions with HCPs and other stakeholders. The first question is: what are the inherent risks in leveraging AI to generate scientific information, as opposed to humans doing it, as we traditionally do? The second is: how do the risks change when the AI use case shifts from internal to external use? Internally, we might just ask the AI what it comes up with as a reference, and nothing goes outside whatsoever.
Or you are actually going to use some of the findings in an external publication, for example; or you use a different AI platform, maybe an internal one versus a vendor-supplied one, open AI versus closed AI. There are so many different options. The third question is: what are the primary mitigation techniques to address AI-related risk? And finally, how does the mitigation change based on the use case, whether it’s internal planning, scientific exchange content generation, digital communication or medical information responses? By no means did we come up with an exhaustive list of risks or mitigations, but the common risks are confidentiality, data privacy, potential bias, meeting so many different regulatory requirements, and inaccurate output or hallucination.

So here are some possible mitigation actions related to the different risks. First of all, for failure to ensure data privacy or information security, which is a very common concern with AI, we should use anonymized data, anonymizing the data where possible, and enable appropriate privacy and data protection controls. Whatever you load into the AI platform, whether it’s open AI or closed AI, and who is controlling it, can get quite complex; so before that, make sure your training dataset is already anonymized and there is no personal data in it. That’s a good starting point.

Second, confidentiality and IP, intellectual property. There are some very high-profile legal cases going on right now, all around confidentiality and IP protection: who is really responsible for IP, and is the output or the prompt governed by IP law or not, and so on. To make yourself more comfortable on this topic, first of all use closed AI instead of open AI, and maybe use device-specific AI instead of a publicly connected AI where, the second you load everything into it, it is out in the public and you have no control anymore. Either way, conducting an AI risk assessment is very useful; many regulators have already put out risk assessment frameworks which you can use as a reference, in the UK, the EU and the USA, quite a few you can use.

The third one is potential bias or lack of fair balance. As anybody practicing Medical Affairs or risk management knows, balancing bias and making sure things are fair is a balancing act. You want to make sure you use diverse and representative datasets to train the AI model, and conduct regular audits of the outputs wherever possible. We know some datasets can be quite restricted, or drawn from quite a small universe to start with, and one option is to recognize the potential bias in the training dataset up front, weight or balance it in advance, and then apply some judgment to the output that comes out afterwards.

What about the regulatory requirements? There are so many of them, and they are changing almost on a daily basis. Transparency, accountability and explainability of the AI model all come into play. Most AI regulation is centered around risk management: can you explain what your AI model is doing, who is accountable, what is in scope and what is out, who is checking, do you do reliability checks and validity checks, and so on. None of these things goes away; you are just using some tools to make your processes better, but the liability does not go away. What about inaccurate and misleading information, or hallucination?
Well, there is no easy answer other than to continuously monitor AI system performance with quality control metrics and to establish processes for human oversight. This idea of the human in the loop is very common in most of the legislation you will look at. We cannot give the machine overall accountability; no matter what, there will always be a human in the loop, especially for high-risk activities. And unfortunately, nothing related to healthcare, medicine, biotech or life sciences is medium or low risk; everything we do is high risk or extremely high risk.

The other possible risk we identified is the inclusion of off-label or untruthful content in proactive and/or promotional communications. That is very interesting, because it is difficult enough to control humans: even with all-human interactions we do a lot of training, and these things still slip through. When the machine comes in, how do we control it at all? That is where governance over the AI use case comes in, defining how the AI-generated content will be used and putting the guardrails in place. We want to be clear whether the AI can communicate directly with HCPs or other external stakeholders or not, yes or no; if yes, the risk profile goes up a lot higher. Who is controlling it? Who is checking it? Who is letting it go out?

So these are some of the risks and potential mitigations we discussed and suggested, but by no means an exhaustive list. Thank you, Jon, passing back to you.
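To make two of those mitigations concrete, anonymizing data before it is ever loaded into an AI platform and keeping a human in the loop before anything AI-generated reaches external stakeholders, here is a minimal Python sketch. It is an illustration only: the redaction patterns, the generate_summary placeholder and the approval flow are assumptions made for this example and do not describe any particular company’s process or any vendor’s API.

```python
# Illustrative sketch only: it pairs two of the mitigations discussed above,
# redacting obvious personal data before a prompt is sent to any AI platform,
# and requiring human sign-off before AI-generated content is used externally.
# The patterns, the generate_summary() placeholder and the approval flow are
# assumptions for this example, not any specific vendor's API.

import re
from dataclasses import dataclass

# Very rough redaction rules; a real program would rely on a vetted
# de-identification service and a much broader rule set.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def anonymize(text: str) -> str:
    """Replace obvious personal identifiers before text is loaded into an AI platform."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


def generate_summary(prompt: str) -> str:
    """Placeholder for whatever internal or vendor model the deployer actually uses."""
    return f"<model output for a prompt of {len(prompt)} characters>"


@dataclass
class AIOutput:
    content: str
    intended_use: str        # e.g. "internal planning" or "external scientific exchange"
    human_approved: bool = False


def may_release(output: AIOutput) -> bool:
    """Human-in-the-loop gate: external use requires explicit reviewer approval."""
    if output.intended_use.startswith("internal"):
        return True          # lower-risk internal path, still subject to monitoring
    return output.human_approved


if __name__ == "__main__":
    raw = "Summarise the recent study data. Contact dr.smith@example.com or +1 555 010 1234."
    draft = AIOutput(
        content=generate_summary(anonymize(raw)),
        intended_use="external scientific exchange",
    )
    print(may_release(draft))    # False until a qualified reviewer approves
    draft.human_approved = True
    print(may_release(draft))    # True only after human sign-off
```

In practice the human_approved flag would be set through a documented review step, for example medical review and sign-off, which keeps the accountability with people rather than with the model, in line with the human-in-the-loop point above.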
00:39:05:16
Jon Dixon
Thank you, Jessica. This is such an exciting, but equally slightly scary, field and clearly, as you’ve illustrated very nicely, a rapidly evolving one, so I’m sure we’ll have to get you back again in a few months to provide an update on where we’ve got to. But look, we’ve covered four examples, and we could certainly come up with more. I think we all recognize the importance of the Medical/Commercial interface, but hopefully we now also recognize the importance of making sure that what you’re doing is legal, in line with the regulations, in line with the relevant codes of practice, your company policies and the SOPs; then having the right conversations with legal or compliance or subject matter experts as appropriate; and then thinking about the risks and how you mitigate them, particularly for activities which are multi-dimensional, as in fact most of our scenarios have been, and particularly the patient support programs. And ultimately you have to apply judgment and get on and do things. So hopefully you’ve found this helpful. I’d like to thank William, Anthony and Jessica for their excellent contributions. And just remember that Medical Affairs should adopt a risk mitigation mindset, because then you will be showing leadership in this space and helping to keep your company and your colleagues on the right path. Thank you.