In this podcast, Lauren Tulloch, Vice President and Managing Director, CCC, addresses the unique challenges facing medical affairs professionals whose daily work with scientific articles and other published content increasingly intersects with AI and complex copyright considerations.

Key Takeaways:

  • AI adoption requires strong governance
    Medical Affairs teams can gain efficiency from AI, but success depends on responsible governance—especially managing copyright and legal risks.
  • Copyright in AI is complex and often misunderstood
    Using scientific literature with AI introduces legal challenges, and common misconceptions can lead to compliance issues if not properly addressed.
  • Practical strategies are essential for safe implementation
    Teams need to carefully evaluate, design, and deploy AI tools with clear strategies that balance innovation with copyright compliance.

Speaker: Lauren Tulloch

Vice President and Managing Director at Copyright Clearance Center

Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.

00;00;00;10

Garth Sundem

Welcome to this episode of the Medical Affairs Professional Society podcast series: “Elevate”. I’m your host, Garth Sundem, and today we’re talking about copyright and AI. Joining us is Lauren Tulloch, Vice President and Managing Director at Copyright Clearance Center. So, Lauren, we were chatting earlier and I hear that you just got back from Berlin, where you were presenting almost exactly on this topic. So, can you tell us why copyright is the central issue or a central issue in this conversation about AI?

00;00;38;21

Lauren Tulloch

Yeah, absolutely. So I think across industries, not just Medical Affairs professionals, but across industries, employees are using AI for many, many day-to-day tasks, such as writing or editing emails or taking notes in meetings. But now what we’re starting to hear and see, and what I heard for the two days in Berlin, is that the real time savings and efficiency and leap forward is with the true day-to-day workflows of professionals, more than just, you know, taking notes in meetings and editing emails. And what we heard loud and clear at the event in Berlin, and at many other events that we have attended, is that folks are using AI to transform what they do. And of course, Medical Affairs professionals are using scientific content every day. They’re relying on copyrighted information in order to do their jobs most effectively. And so that’s where obviously copyright comes into play, because when you are using an AI system, you’re often uploading materials, you’re copying and pasting, you’re doing things that involve that copyrighted content so that you can make sure that you have accurate, up-to-date, best information. And the way that you’re going to do that is through authoritative content from scientific journals, conferences, things like that.

00;02;10;28

Garth Sundem

Okay. So we’re not only using it to make our lives easier in terms of writing the emails and writing the bedtime story that we read to our kids. We’re using AI now in professional ways. We’re seeing that, of course. And I mean, specifically in med info, that seems like one place where we’re seeing a lot of this. Is that what you’re seeing as well?

00;02;34;12

Lauren Tulloch

Yeah, that’s what we’re seeing. We’re seeing AI tools being used, for example, to summarize materials, and to improve the discovery process: finding the materials more effectively, doing an initial screening and letting the AI assist with that initial screening, and then having the professional come in at that point, once the papers have been identified. And then using AI even to summarize the findings, to query the materials for the bits and pieces that are most important. Always, we heard so loud and clear at the events that I was just at, as well as many others, always with the human in the loop, or now the phrase I’m starting to hear is human in command, which is kind of an evolution of the human in the loop. So that piece is critical. But there were so many stories about how AI can be used to get answers faster. You know, get the response out to the HCP or to a patient more quickly, but still with that high level of quality.

00;03;38;28

Garth Sundem

So people are starting to understand how to use AI in these situations. Are people still misunderstanding how to use AI in these situations, specifically with copyright? What are people misunderstanding, do you think, in this area?

00;03;57;10

Lauren Tulloch

Yeah, it’s a good question. So, one baseline I try to explain when it comes to copyright is that there are two main components that you want to be thinking about when you’re thinking about using an AI system with copyrighted content. The first is the input side. There are two main types of input. The first, which we won’t spend too much time on, but just a little bit, is the initial training of all of these AI models. The initial training of these models used tremendous amounts of copyrighted content and other content to teach the models how to use language. Right? Copies of that content were made, and copies of that content are stored in the AI itself. So that’s one type of input. A subset of that is what you’ll hear people call fine-tuning, which is: okay, now I’ve got the big system, but I want to further refine it for a particular task or a particular subject area or something like that, with additional content that will make it essentially smarter. Right? Because I’m going to take a subset of content, and I’m going to say this is the most important content for either this task or this function or whatever. But the same thing applies there. Copies are made, the system stores them and uses them to make better answers. So that’s one type of input. The next type of input is actually in the process of using the tool. That’s really where this comes into play for Medical Affairs professionals. Now my company has chosen a particular tool, and I’m using it. And I want to actually make sure I find all the articles on a particular topic, you know, answer my particular question around efficacy or whatever. And in that case, I might be uploading the content, I might be copying and pasting, etc. So that’s another place where a copy is being made. And then the second part is the output.
So now that I have uploaded something into the system, what am I doing with it? Am I creating a summary? What am I doing with that output? Depending on what I’m doing with that output, maybe I’m just using it internally, or maybe I might be actually sending something outside of my organization. So that’s where you have to consider the copyright on that side as well. And the outputs, depending on how much of the original work is included, could be considered a derivative of the original work or something like that, which is also a copyright consideration.

00;06;41;21

Garth Sundem

Right. And I imagine the format and how you use it, too. If you’re a field medical professional and you’re using it as your background, that you’re then going to be communicating, you know, that would be different than providing that in some sort of...

00;06;56;12

Lauren Tulloch

Exactly.

00;06;57;05

Garth Sundem

Something to that HCP directly. Boy, there are so many lines that it seems like we should be aware of crossing in all these different areas. What does good governance look like now? I mean, especially in life sciences. How do we, I don’t want to just say put guardrails around it, but how do we govern the use of AI in our companies these days?

00;07;29;02

Lauren Tulloch

Yeah, I think when you think about governance writ large, there are so many components to that, which is one of the reasons that the copyright piece can sometimes get lost. Right. Because organizations are also thinking about security. They’re thinking about bias and discrimination, privacy, transparency, all sorts of things.

00;07;47;19

Garth Sundem

And innovation. I mean, they have to use it. We have to use it. We can’t not. Right.

00;07;51;27

Lauren Tulloch

Right, exactly. Yeah, that’s a really good point. So how do we use it effectively? How do we put up guardrails that allow us to do what we need to do? I think on the copyright side, it’s thinking about the licenses and the permissions that are in place and whether those need to be enhanced in some way. Right. What agreements do you have with various content providers? Do those agreements cover the use of materials with AI, or do you need some additional licensing in place? And you can do that licensing directly with individual rights holders or through collective licensing.

00;08;35;22

Garth Sundem

Oh, oh that’s interesting. Let me just make sure I understand what you’re saying. So through CCC or through individual rights holders, you could purchase some sort of AI use clause, or whatever you want to call it, where you say, okay, you published all this stuff. That’s great. Now, with this additional permission that we’re going to purchase, we can do whatever we want with AI to it.

00;09;04;01

Lauren Tulloch

Or we can do this particular set of things, right. So, for example: we have a license that we’ve offered for several decades that allows organizations to share content internally.

00;09;21;08

Garth Sundem

Yeah, okay.

00;09;22;16

Lauren Tulloch

Within their organization. So saving things to SharePoint, making copies to bring to a meeting, and then some limited external uses as well, like submitting to a regulatory authority, etc. So what that license does is it makes it so that we have a set of consistent rights across all of the participating publishers for those types of internal use cases and some very occasional external use cases. What we’ve done over the last couple of years is we have started to expand that to internal AI use. So some of those examples that I gave earlier: I have an internal AI application, and I want to fine-tune it with a set of materials to make it better at the job that I’m trying to get it to do. That would be one example. Or the simpler example: I just found ten articles. I don’t have time to read them all. I want to summarize those ten articles, or I want to query them and get particular bits and pieces of information out. Those types of uses are now included in our annual copyright license from participating publishers, which is an ongoing journey to bring more and more publishers into the fold on that.

00;10;34;15

Garth Sundem

That is interesting. Are there licenses now that allow a med info group to summarize broadly within published literature, and then use those answers in a public facing way?

00;10;51;15

Lauren Tulloch

Yeah, it’s a good question. Our license allows for the summarization and the internal use. And then there does need to be a human in the loop. And if it’s used externally, it may require additional permission, depending on the level of copyrighted content included. Under our license, essentially, you cannot simply take the output and push it externally. But if you’re creating a summary more quickly than you did before, and then still using that summary to write your own response, the critical question is: would that response require permission anyway or not? If not, you’re good to carry on. If it would, then you might need it. You know, for example, if you were going to lift a graph or something like that, that still requires permission.

00;11;47;29

Garth Sundem

Yeah. Of course. So just to be really specific for a med info group: they would be able to use AI to summarize the data, and then a best practice next step would be for them to write their own response.

00;12;05;03

Lauren Tulloch

Correct. Yeah, exactly. And they still get a significant amount of efficiency by doing that. But yeah, you’ve got it exactly right.

00;12;13;25

Garth Sundem

Oh man. Let’s have this conversation a year from now and see if that’s the case. But anyway, how worried are you about using AI in this way? I’m thinking, you know, it comes from a black box, we’ve heard that phrase, and maybe it still comes from the black box with hallucinations. What would you recommend to this hypothetical med info team who is going to use AI to summarize these data?

00;12;38;22

Lauren Tulloch

Yeah, I think it’s a great question. And what we’re hearing is that as organizations are choosing a tool, the tools are getting better and better for sure. But in order for you to really feel confident, you need a tool that’s actually going to point you back to where it got the answers from. So that’s one of the things that, in developing our own tools, we have really been keeping in mind. Right. I need to be able to trust the AI, and I need to be able to track back to where that answer came from, etc. So I think that’s critical to kind of open the black box, or, you know, to add some transparency to that black box. Which is why I think, as time marches on, people are starting to use more fit-for-purpose AI tools rather than the more general on-the-market tools that don’t necessarily point me back to the source. It’s fine if I’m asking for recommendations for my Disney vacation or something that ChatGPT might not tell me exactly where it got that information. But when I’m talking about, you know, whether or not a drug is effective for pregnancy, I want to trace where I’m getting that information. Right.

00;13;59;09

Garth Sundem

Yeah, I go to GPT for how to prune my peach tree.

00;14;02;27

Lauren Tulloch

Exactly. If it doesn’t tell you exactly how it knows, you’re probably okay with that.

00;14;08;18

Garth Sundem

Okay with it. Okay, so back to our hypothetical med info team. Again, this is becoming the example for this conversation, but it’s a good space to talk about this, right? Because they’re the ones on the front lines of this. So would you recommend it as a best practice to this team: okay, the AI summarizes the data, and we use a tool that forces the AI to cite its sources.

00;14;33;13

Lauren Tulloch

Yeah.

00;14;33;27

Garth Sundem

You want that med info team to go back and check the summarization that is provided by the AI. Do you want them to go to each of those citations and say, oh yeah, this does square with what the AI told me?

00;14;49;24

Lauren Tulloch

Yeah, I think that’s a really good question. And I think it’s going to evolve over time, as so much with AI has. So I think, to your point, you just said, ask me about it in a year. I think that’s also true for the deployment of a tool. Right. Like your QA is probably going to be more aggressive in the beginning stages versus the latter stages. But we’re always hearing, right, you do want that human in the loop. So you have to find a way to make sure that you have that QA, for lack of a better way of saying it, in the process, I think.

00;15;25;01

Garth Sundem

Oh, that’s interesting. So maybe this team in the first six months after deployment would be checking, you know, 100% of responses. And hey, if those are good, then they back down to 75%, back down to 50% of the responses, something like that, if their QA is showing that the AI tool is there.

00;15;47;02

Lauren Tulloch

Yeah, if they’re getting the results that they feel good about. You know, there was a lot of discussion at the meeting that I was just at about at what point the AI gets good enough that it’s actually better than the human, and I don’t know that we are there yet, but we might get there. And somebody who was speaking at the event gave the analogy of the self-driving car, and how, in fact, right now the self-driving cars are actually pretty good and probably better than humans, but we actually have an even higher standard when it’s not a human doing it. And I think that’s probably a good thing, to have a higher standard for the AI.

00;16;29;20

Garth Sundem

That’s a cool comparison. Well, speaking of a year from now, we as an industry certainly are looking for the governance structures, for the guardrails for how to use this AI. But we’re not the only ones looking at how we use these tools. There are laws coming down the pipeline that are going to make some of these decisions for us, that are going to define some of these guardrails. What do you see that Medical Affairs leaders should keep their eyes on in terms of, I don’t know, lawsuits or oversight or things that are going to shape our uses for us?

00;17;08;25

Lauren Tulloch

Yeah, it’s so interesting. There are tons of court cases out there right now. Really the most meaningful piece on the legislative side is the EU AI Act, I would say, which does cover all of those governance topics that we were talking about, including copyright, although copyright is really not its focus. But there are some important provisions in there. On the court cases side, there are a lot out there. The initial court cases that have come down have not been consistent. And from a real practical perspective, for Medical Affairs professionals, or really any other professionals, waiting on the court cases is not going to be a winner. It’ll take a long time for anything to settle. First of all, they’ll come out and be inconsistent, which we’ve already seen from the first few cases; they haven’t necessarily aligned with each other. And then they’ll make their way through the appeals courts, and by that point, time will have marched on. So I think that’s where things like licensing and frameworks come into place, where you put the uncertainty to bed, because you say, okay, I’m going to get the appropriate licensing and permission so that I don’t have to wait for something to go all the way through the courts, which will take many.

00;18;35;11

Garth Sundem

Years, and you don’t end up trying to figure out what sort of settlement you need to pay for having used things incorrectly, thinking about the Anthropic settlement or something like that. Whoops. Okay, so there is, or I should say, it seems like teams have various levels of exposure right now. I think there are some managers and directors and leaders who are purchasing additional antiperspirant due to their use of AI in situations that they’re not quite sure are perfect in terms of copyright, but they want to innovate. So what do we do? What can a Medical Affairs team do to reduce their exposure but still innovate?

00;19;37;11

Lauren Tulloch

Yeah, I probably will be a broken record on this one, but I do think it’s licensing. It’s talking to those content providers, both, you know, through aggregated solutions, where we’re working with a lot of publishers at once and providing a collective solution, but also through direct agreements. Explain what your use cases are and work with them. I think rights holders are coming to terms with the fact that this is the wave of the future, and they want to support these workflows while still making sure that they’re appropriately compensated for their role in, kind of, you know, the dissemination of scientific material and other materials.

00;20;26;10

Garth Sundem

Well, it sounds like you’re recommending specificity, to really say, here’s what we would like to use and here’s how we would like to use it. And transparency, you know, maybe not just, hey, we’d like to use AI with your stuff, but really talk about how you want to use it and how you want to disseminate it, whether it’s internally facing or externally facing. You know, a conversation. Yeah.

00;20;59;23

Lauren Tulloch

I think that’s critical.

00;21;01;12

Garth Sundem

Okay. So, the takeaway: what do you think Medical Affairs professionals should know about navigating AI and copyright? What do you want people to leave this conversation with?

00;21;22;11

Lauren Tulloch

Yeah, I think that’s great. I think the most important thing is to be curious and ask questions: to think about the workflows that you’re supporting and what you want to do to innovate, and then say, okay, what materials are we using? Do we have the permissions in place that we need? And that is generally going to involve some navigation, I think, even within your own organization, either to legal or to information management or both, to get a sense from them, because in a lot of cases there may be some foundational licensing, through us or through others, that’s already in place. And then they just need to build on that for their particular workflow, if that workflow, for example, might have an external use case, something like that. So I would say be curious and be collaborative within your organization, because you likely don’t have to start from scratch. You can likely build on a foundation that’s already in place at your organization, and then fine-tune it if needed for the particular tool that you’re using or workflow that you’re supporting with AI.

00;22;37;06

Garth Sundem

It’s almost like, don’t be the black box yourself, right?

00;22;41;14

Lauren Tulloch

That’s great.

00;22;42;23

Garth Sundem

But don’t create another one within your organization.

00;22;45;00

Lauren Tulloch

Yes, exactly.

00;22;46;07

Garth Sundem

Transparent and collaborative and curious. I love the piece of curious on there. How can we use this in the right way? All right, Lauren, well, thank you for joining us today. MAPS members, don’t forget to subscribe. And we hope you enjoyed this episode of the Medical Affairs Professional Society podcast series: “Elevate”.