Medical Affairs Professional Society
The Role of AI in Academic Publishing: Opportunities and Challenges for Medical Education

Speaker: Dean Martin
Senior Manager, Reprints and Permissions, Wiley
https://www.linkedin.com/in/deantmartin/

Speaker: David Flanagan
VP, AI Services, Wiley
https://www.linkedin.com/in/dwflanagan/

Episode outline:
  • How AI is being integrated into the academic publishing process, and the current trends and advancements in AI technology that are transforming the industry. (Dean, then David)
  • Content integrity in the age of AI and the importance of the lawful use of copyrighted content: how legitimate licensing relationships help preserve trust in an increasingly AI-enhanced publishing ecosystem. (David)
  • Enhancing editorial workflows: how AI can streamline editorial workflows by automating mundane tasks such as manuscript screening, plagiarism detection, and peer review processes, allowing editors and reviewers to focus on more critical aspects of the publishing process. (David)
  • Ensuring research integrity: the role of AI in maintaining research integrity by detecting issues like data fabrication and ensuring the credibility of published work, including ongoing efforts to develop AI models that uphold ethical standards in academic publishing. (Dean, then David)
  • Future prospects: the future of AI in academic publishing, including the potential for further advancements and the impact they may have on the medical education field. (David)

Dean will take the lead on Wiley's approach to licensing and permissions in the AI landscape, while David will guide the conversation around AI integration and research integrity, though they'll likely bounce ideas off each other throughout the chat.

Dean, David, thank you for joining us today! To learn more about how your organization can partner with Wiley, visit wiley.com. MAPS members, don't forget to subscribe, and we hope you enjoyed this episode of the Medical Affairs Professional Society podcast series, Elevate.

Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.

SPEAKERS

Garth Sundem, Dean Martin, Dave Flanagan

Garth Sundem 00:00

Welcome to this episode of the Medical Affairs Professional Society podcast series, Elevate. I’m your host, Garth Sundem, and today we’re talking about AI and academic publishing, with experts from Wiley. Joining us are Dean Martin, Senior Manager, Reprints and Permissions at Wiley, and Dave Flanagan, VP of AI Services at Wiley. This episode is sponsored by Wiley. So at MAPS, we have chatted about AI in many contexts, but not yet in the context of academic publishing and so Dean, can you catch us up with the landscape to get us started? What is AI in academic publishing, and what we’re talking about here?

Dean Martin  00:49

So here we're talking about AI as an opportunity to accelerate the pace of innovation, research, learning, and academic publishing. AI is reshaping that landscape. But it's not just about speed; it's about making sure the content that's being relied on is accurate, evidence-based, and tailored to what healthcare professionals actually need. So for medical affairs teams, that means better tools to support education, and being able to be confident in those tools.

Garth Sundem

Okay. And so now, with AI and academic publishing, instead of going to the journals, we just go to ChatGPT, ask it the question, and go from there?

Dean Martin  01:28

Well, that's one route that you could take, but it's also a case of making sure that we've got that human level of authority, that authentication of what's coming out. So you should only use AI to augment what you're doing and not replace what you're doing; I think that's kind of the most important thing.

Garth Sundem 01:50

You're too kind, Dean! You can just say no: that's not what we're recommending with AI and publishing here. But thank you for your consideration. David, is that your understanding of AI and academic publishing as well: not just accelerating the pace of publishing, but also looking to research integrity and how AI can be used to enhance academic publishing?

Dave Flanagan  02:13

Absolutely. It's transforming everything that we do: how we think about publishing, how we think about the research communication process in general. I can tell you, for the last two or three years, the only thing that people want to talk to me about has been either ChatGPT or research integrity. And so really digging into this has been important. I personally think that AI, especially generative AI, is going to be as transformative as the web browser and the mobile phone were in their time periods. And it's not just how it affects society in general, but how it's going to affect how we communicate research results, how we identify what's interesting, what's important, what's novel, and, in the integrity case, what's real. We're putting it into lots and lots of processes where people can still guide those processes and people are still in charge, but we're giving our authors, our editors, our reviewers, our readers tools to make them more effective with all this technology that's developing at a blisteringly fast pace.

Garth Sundem 03:29

Okay, so let's go to that content integrity. It seems like the history of academic publishing has brought us to a point where people should have trust in what comes out, due to the process and oversight that these studies and content have gone through in the academic publishing world. So now that we are in the age of AI, how do we ensure content integrity, and that we're using sources that are legitimate? How is that integrity maintained, even in the age of AI? David, what do you think?

Dave Flanagan 04:26

Yeah, it's a great question. I mean, one solution is, okay, we're not going to use AI; we're going to ban it. We can say no AI in anything that we publish. And I think that'd be really short-sighted. Patton Oswalt, the comedian, had a great line about how AI is a tool like any other tool. An ax can be a tool: you can chop wood with an ax, and that's a productive use of that tool. Or you can drop the ax on your foot, and that's a less productive use of it. Or you can chase somebody with it, actively using it for malice. And in a way, gen AI is the same thing. There are good uses for it and bad uses for it; it depends on what the intent is. So you could take the route of saying we're not going to allow gen AI in any of our submissions and things like that. But instead, we're trying to be more thoughtful about it and trying to figure out how you can use generative AI to improve things. There is a case where, if you can use generative AI to create content, and that content is virtually indistinguishable from other content that's being published, there's a risk that people might be trying to publish content that isn't reliable. But there are other indicators besides whether or not it's gen AI, other indicators about whether or not something is reliable. And so we look at a whole range of indicators and signals, and even use AI for some of that. But I guess the bigger question is: what are the good things that can come out of this? How can we use it productively, and not publish incorrect or unreliable data that's been generated by these tools?

Garth Sundem 06:25

And let's get into the workflow of AI-enhanced publications and editorial processes in a second. But when I go to GPT or something and I ask it a question, it's drawing on many sources, some of which we would like to allow GPT to use, and some of which we may not. So let's talk about licensing relationships really quickly in this AI-enhanced publishing ecosystem. Is this still a relationship that is relevant, even in the AI world? What do you think?

Dean Martin 07:08

Yeah, well, I'd say this is something that's important. The main thing is that the evidence has to be quality and have that assurance. Having that licensing partnership allows that to happen: it makes sure that we can license the quality-assured evidence that medical professionals really rely on, and that whatever you're putting into the system will help contribute to the quality of what you then get out of ChatGPT or whatever AI you use. So it's about making sure that the licensing covers the right content, keeps that integrity and quality, and makes sure that it's approved and has the right language and evidence base that can hopefully further that research and, again, accelerate the pace of the research outputs that we have.

Garth Sundem

All right, so even if we're asking AI to synthesize a data source, we would like it to be synthesizing data from legitimate, vetted sources, and a licensing relationship then ensures that the AI is synthesizing from what you wanted it to draw from. Is that right, Dean?

Dean Martin

Yeah, it's about building that trust and making sure you know the legitimacy of where that content is coming from. Because, again, as much as you might be running a trial or a test, you still want it to be based on evidence that will produce the right real-world evidence and strategies from the prompts that you're putting in as you search that data. That's the credibility piece, essentially.

Garth Sundem

Okay, well, let's talk about the editorial workflow and how AI is being used by publishers specifically in that workflow. I mean, I know I use it to write my emails. And remember, we have mostly a pharma audience here, but I think they could inform themselves with the knowledge of how publishers are using AI, so that maybe our pharma audience can better place their research. Dave, how are publishers using AI in their editorial workflows?

Dave Flanagan 09:19

Right. So think about what happens when you submit a paper to a journal. First, obviously, you're going to do the research, you're going to write it up, and then you're looking for a place to publish it. Maybe you already have a place in mind. You're going to identify where you want to send the paper, and you're going to do a lot of work to upload it, submit it, and get it into the journal's inbox. The editor is going to read your paper and try to figure out: is this appropriate for my journal? Is it novel? Is it interesting? And who are the right people to really dig into the claims that are being made in it, figure out if it's believable or not, and maybe give some suggestions for improving it? Then your paper is going to be revised, hopefully just once, if at all, accepted, and published online: all those different stages. Five years ago, you had people doing all of that, but right now you can have AI assisting us with those different things. So when you are trying to figure out where to send your paper, we have AI to help you select journals that are a good match based on what your paper is about and what your interests are. We have AI to take a look at what you have submitted and help make sure that you have everything that you need. Do you have your funding statements, so you get funded for your next paper? Are all your citations valid, or are any of the citations mislabeled, or anything like that? We help the editor find reviewers; that's consistently the most painful process for an editor, finding enough reviewers. When I started in editorial, you would look for maybe five people to review a paper, and maybe you'd get three responses back. And now I hear about having to send out 15 or 20 requests to get enough reviews for a paper. It's crazy.

So we help with that. And then throughout the entire process, we're trying to figure out how we keep the data and the information in the manuscript intact for as long as possible. So if I'm writing a drug discovery paper, maybe I want to be able to keep the structures intact. I want to keep the data that's coming off the instrument intact and not flatten it into PDFs. We're trying to figure out how we can do that, so that we have a more useful paper as a data source downstream that can be aggregated into larger systems or your own personal gen AI tool, and mixed in with your own data.

Garth Sundem 12:15

That's interesting. So, you know, I was imagining AI... I mean, you hear the word "replacing," and I know that's an overstatement in so many cases, but I was thinking of AI becoming the first-step editor for a submitter. But it sounds like in your model, you're talking about AI more as a third collaborator: you have the submitter, you have the editor, and then you have AI as sort of a third collaborator helping to guide the process. Is that how you see it?

Dave Flanagan 12:42

Yeah, I think when I think of gen AI, I think of it as a thinking partner: something that I can bounce ideas off of and use to do the drudgery work that I don't want to be doing, like finding reviewers. For example, a paper might have a statistical study in it, and it's notoriously difficult to get a stats editor who can go through and look at the statistics and make sure that it's all correct, that it has the right statistical power, and that the conclusions you're drawing are supported, things like that. That's the sort of really tedious work that I would love for an AI to be able to do consistently and well, to assist the editor, so that we don't have a person having to do that. So I'm not talking about taking the interesting, creative things away from people. It's like when you see gen AI art, and people say, why did we automate the art before we automated the plumbing? We want to make the plumbing not a job that you have to worry about anymore, because the computer is taking care of that. And there's a lot of plumbing in academic publishing as well.

Garth Sundem

Well, okay, so we talked about content integrity in AI, but there are also AI uses in ensuring research integrity, which I think is an interesting double check. I have a freshman in college, and I know that whenever they submit anything, gen AI is checking for the use of gen AI. And I wonder, Dean, if you can tell us a little bit about how AI is being used to ensure research integrity.

Dean Martin 14:27

Yeah, so it's all about being open about how AI works and where the data comes from; again, to mention building that trust. As publishers, we want to license content to AI creators in a way that establishes accountability for where it's coming from. This relationship gives publishers a voice in how the content is being used, rather than it just being used without proper oversight. We also appreciate that there are bad actors who might try to hide their identities and infiltrate editorial systems. But one of our important AI principles is human oversight, so we can also review what's going on and use the patterns that we can see. So again, as mentioned before, it's that augmented partnership, what David described as that thinking partner for part of the publishing process, to make sure that we can spot those kinds of pitfalls, see the peaks and troughs in the data, and then validate that as well.

Garth Sundem

And if AI is used for things like plagiarism detection, or to make sure that the submitters actually exist as real people in the real world, or to look for mistakes in the statistics used in a paper, can it also find some of the data fabrication issues that have come up in past years? Is AI a quality check on some of these processes that are used in studies? What do you think, Dave?

Dave Flanagan 16:07

I think you can use it as a consistency check. If a scientist wants to attack a journal with fabricated research, it's notoriously difficult to pick that up at the highest level, but there are a lot of things that we can do to help researchers identify potential issues in their manuscript that they would rather have a computer spot before a person spots them. Think of the famous Reviewer Number Two: you want to have your own Reviewer Number Two going through your paper and helping you to identify potential holes or potential errors or potential inconsistencies before you send it in to a journal and let a person read it. And the other benefit is leveling the playing field internationally. English is the lingua franca of scientific communication. Think about all the good papers that, in the past, maybe haven't been received as well as they could have been because they were difficult to read, and it was a little bit harder to get the information out. We have the chance to make the literature, and the people publishing in the literature, a bit more diverse by giving them these tools to make their output more readable.

Garth Sundem

Yeah. You know, it's interesting. I told this story at one of the MAPS meetings, but I have a friend who is an editor of a psychology journal. He's a researcher; he does a lot with fMRI. I'm not exactly sure what he does, but he says, "Oh, I can spot the submissions that are coming in now with gen AI support." And I said, "Oh, I'm so sorry, isn't that horrible?" And he said, "No, it's the opposite." He loves it. The research is still theirs, the results are still theirs, the data is still theirs, and it's just much better written than it may have been. And gen AI support, especially for English-as-a-second-language submitters, seems like a use that pharma could adopt. But just one quick follow-up: mechanically, do you see people from within the pharma industry, the submitters, the researchers, applying this AI review to their manuscripts before they come to you? Or is this an AI review that you would apply as a manuscript comes in the door to you, or both?

Dave Flanagan

I think it'd be ideally way before, during the authoring process, not right at the point of submission. Because if you've ever written a paper with a bunch of co-authors, you know that when you're ready to submit, and you've got agreement between all the co-authors that this is the final version, you don't want feedback at that point saying, hey, have you thought about rewriting your experimental section? So how do we get in earlier in the process, using whatever tools people are using today, or maybe using tools that have been specifically designed to help with that? I think earlier is better.

Garth Sundem

Okay, so thanks for that. Let's talk about where we are going. We have some workflow AI in place right now, and some quality-check AI in place right now. Where are we going with AI in academic publishing? And honestly, Dave or Dean, whoever wants to start: where are we going in the future?

Dean Martin

So yes, I think for us, industry collaboration is going to be key. We're going to have to maintain that evidence quality and work alongside our contributors of content. So we're working with various organizations, whether that be the STM Association, the Integrity Hub, COPE, or United2Act. It's this kind of collaboration that's going to help create a shared culture of ethical AI that we can use in medical publishing.

Garth Sundem 20:29

That's a neat point. So AI is not something that must be developed in a silo for use in academic publishing, but in collaboration with these organizations, to meet both their needs and their requirements. So the future is going to require collaboration. Dave, what do you think? Where are we going?

Dave Flanagan 20:52

I think if we're looking five to ten years out, and that's obviously super risky in AI if you just look at the last couple of years, I think we might be entering a period of radical abundance in scientific research. And the reason I say that is because we're starting to see tools now that can generate hypotheses and test hypotheses on their own. Right now it's mainly for things that are provable, like software code or mathematics. But as we start to connect those into the lab, we could get to a point where a computer can be guided to generate lots and lots of not just evidence, but hypotheses and conclusions based on that evidence. And so it becomes, I think, even more important to have humans who have taste to determine what kind of research is interesting and important and deserves to be amplified and shared. Overall, we'll have an abundance of content, and we'll be trying to figure out how we can use all this computer- and human-generated knowledge. Because if AI can do everything correctly and everything efficiently, then it becomes not only a question of what we can show is valid, but of what we deem is important.

Garth Sundem 22:39

Oh, interesting. All right, well, let's leave it there for today, although there are about 50 zillion more things to dig into on this topic. But Dean and David, thank you for joining us. To learn more about how your organization can partner with Wiley, visit wiley.com. MAPS members, don't forget to subscribe, and we hope you enjoyed this episode of the Medical Affairs Professional Society podcast series, Elevate.


© 2025 Medical Affairs Professional Society (MAPS). All Rights Reserved Worldwide.
