You’re not alone if a strong publication didn’t land. More often than not, the problem isn’t the paper – it’s the data behind your target selection. In this episode, scientific communication expert Mike Cashman (Mavens) and data expert Mike Taylor (Altmetric) share how publications teams are using smarter data to guide journal selection and improve impact.
Learn how real-world signals like citations, clinical uptake, and policy mentions from Dimensions and Altmetric help teams prioritize journals, plan effectively, and track downstream influence. You’ll also hear how this data gets embedded directly into planning workflows, so teams can make confident decisions right where they work.
This is a practical, clear look at how to simplify complex planning, cut through the noise, and make smarter, faster decisions. Whether you’re refining strategy or proving value post-publication, it’s time your tools work as hard as you do.
Moderator: Mike Cashman
Speaker: Mike Taylor
Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.
Garth Sundem 00:00
Welcome to this episode of the Medical Affairs Professional Society podcast series, Elevate. I'm your host, Garth Sundem, and today we're talking about data-driven publications with Mike Cashman, Senior Director, Mavens Scientific Publications Cloud, and Mike Taylor, Head of Data Insights at Altmetric. This episode is sponsored by Mavens, a Komodo Health company. Okay, so, Mike Taylor (and I guess we'll use full names for this, because we have Mike and Mike here), let's talk about what data is out there to help with both publications planning and publications impact measurement. When you look out at the data landscape, Mike Taylor, what are you excited about?
Mike Taylor 00:51
I'm excited about making sense of data. I've been working with data for a quarter of a century. I hate to admit it, but yeah, it's been a quarter of a century, longer really. And through most of that time, I've gotten really familiar with data. You kind of swim in a sea of it: you get to know what the sea is like, you get to know the fish. When you're working in an area that hasn't had that familiarity with data, it's kind of blinding to suddenly be overwhelmed with all of these data points. There's too much. It becomes really, really hard, I think, for people to make decisions when they've got so many data points in front of them. It's a little bit like somebody waving an Excel spreadsheet under your nose, wagging a finger at you, and asking which of these data are good.
Garth Sundem 01:46
Yeah. So are publications teams not used to being data-driven decision makers and just catching up?
Mike Taylor 01:54
Well, historically, if we go back, wow. In the UK, academic publishing has been much more familiar with numbers; there are a number of publications that go back, you know, 15 years, and these were the ones saying we should be taking metrics seriously. That's in an academic sense, right? But it changed the way we started thinking about things, and that familiarity with data has been percolating through the world for various reasons. Organizations like pharmaceuticals, where things like reputation are really, really important, were less familiar with numbers. So it's a growing culture, is what I'm saying.
Garth Sundem 02:43
Yeah, pubs teams are catching up to data. You've been swimming in the sea of data for long enough to have some perspective on what is out there and how pubs teams can use it. So, sorry I interrupted. Keep going.
Mike Taylor 02:57
No, no. That analogy makes me want to go and have a shower, though. Yeah, it does. So I think one of the things that is very often brought to bear when you move from a more qualitative, more reputational environment to one where you have to be thinking about the numbers is that it's quite disconcerting, right? It doesn't have the same kind of human qualities that ideas of reputation and authority and so on have. So there's a challenge for anyone in that world to change the way they think about it, to become familiar and comfortable with the numbers, and to use them in a day-to-day manner rather than see them as almost a psychological obstacle to decision making. That's kind of where I am. And we should talk about what the data actually means, because when we start thinking about what data means, we know stuff about citations. We know what a citation means. We probably have a rough idea about what a citation, or what 1,000 citations, means for a journal article. But there are so many other ways of looking at things. We're seeing things like clinical guideline citations coming up, and those are going to be very few and far between, but the question there is, well, how quickly did they emerge? It's a different way of understanding the same kind of data. Things like, who's talking about our research? And Garth, you and I were talking about Bluesky earlier this year and how it was increasingly becoming the dominant platform for scholarly communications. What does that mean? Who's using it? Am I just looking at a pile of 12,000 tweets, or does that represent something meaningful for me as a Medical Affairs professional?
Garth Sundem 04:56
That's interesting. So let me make sure I get what you're saying. It used to be that you'd get a pub in NEJM and everyone would give a high five, and that was reputational, and that was as far as it went. And now we're saying that may be a valid path towards impact, having a publication, but it's not the impact itself.
Mike Taylor 05:22
No. We literally saw this in academia, where they wouldn't actually count publications if they weren't in the top-tier journals, right? So if you were an academic, and this is only going back 10 years, maybe 15 years, if you didn't publish in NEJM, if you weren't publishing in Cell, if you weren't publishing in Physical Review, it just didn't count, right? There was a threshold; it was a binary thing, yes or no. We're getting to a position where publications really, really matter: understanding what a second-tier journal delivers, or the difference between open access forms versus non-open access, or what delivers the kind of impact that you need to see. These are really big questions that we can now start answering, but they're human questions, not just meaningless numbers.
Garth Sundem 06:15
Okay, so Mike Taylor sees the fish. Mike Cashman, how do we catch the fish? Mike, how can Medical Affairs teams use this? And one quick thing, wait a minute: are Medical Affairs teams publishing in, what was it, Physical Review? What in the world are they?
Mike Taylor 06:31
No, no, I was talking about academia. Okay, I’m talking about academia. Okay.
Garth Sundem 06:36
So, Mike Cashman, okay, how do we use this? We have all this data, and Medical Affairs teams know we're supposed to be doing something with it. We can, what, see a dashboard, and now we know?
Mike Cashman 06:49
I think one answer might be bringing Mike Taylor along to help you fish, because I don't know how you could not have fun with this, with the passion he brings, and engage on that. But he and I have talked about this: part of the challenge is not just finding the data and getting the data, it's getting people to use it as part of their regular day-to-day lives, part of their regular workflow. And to your point, if it's just in dashboards and reports, the only people that are going to act on the data are already the true believers, right? They're already the Mike Taylors who feel passionate about it and understand the data, and you're missing out on adoption across the broader team. So one of the solves that we've looked at together is embedding the data in day-to-day workflows, right in software, so folks are getting the right data at the right time for embedded decision support, and that really bolsters adoption. One of the things that we've looked at specifically there is providing a side-by-side comparison of journals at the time of target selection, so you're getting visual representations of metrics through Altmetric and through Dimensions, so citation uptake, reach, and you're having that visual side-by-side comparison when it comes time to make a decision. The other thing I'll just add there is, I do think context matters, right? In software, we often think about one outcome and one path to that outcome. And in conversation with Mike and with our partners, we've recognized that this is really an area where two paths make a lot of sense. So if I'm making a directional decision about, say, a congress, and time and geography are the variables, the constraints, I really care about, that needs to be efficient. But if I want to take the scenic route, if I'm willing to take time and weigh variables like policy uptake, Altmetric score, and open access level, and do a more robust analysis of journal selection, we've provided a separate path for that. So I think that context is really important to driving adoption forward.
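To make the weighted, side-by-side idea concrete, here is a minimal sketch of how a comparison like the one described above could be scored. It is purely illustrative: the JournalProfile fields, example values, and weights are hypothetical, and this is not the Mavens software or the Altmetric/Dimensions API, just an assumed Python example of combining normalized metrics with team-chosen priorities.

```python
# Toy side-by-side journal comparison with user-tunable weights.
# All metric names and values are hypothetical and normalized to 0-1 for illustration.
from dataclasses import dataclass

@dataclass
class JournalProfile:
    name: str
    citation_uptake: float   # e.g., normalized citation performance
    attention: float         # e.g., normalized online attention
    policy_uptake: float     # e.g., normalized policy/guideline mentions
    open_access: float       # share of articles published open access

def weighted_score(j: JournalProfile, w: dict[str, float]) -> float:
    """Combine the normalized metrics into a single comparison score."""
    return (w["citations"] * j.citation_uptake
            + w["attention"] * j.attention
            + w["policy"] * j.policy_uptake
            + w["open_access"] * j.open_access)

# A team prioritizing policy and clinical uptake might weight it most heavily.
weights = {"citations": 0.3, "attention": 0.2, "policy": 0.4, "open_access": 0.1}

candidates = [
    JournalProfile("Journal A", citation_uptake=0.8, attention=0.6,
                   policy_uptake=0.9, open_access=0.5),
    JournalProfile("Journal B", citation_uptake=0.9, attention=0.8,
                   policy_uptake=0.3, open_access=1.0),
]

# Rank the candidates side by side under the chosen weights.
for j in sorted(candidates, key=lambda c: weighted_score(c, weights), reverse=True):
    print(f"{j.name}: {weighted_score(j, weights):.2f}")
```

Swapping in different variables, such as time and geography constraints for the congress case, follows the same pattern; the priorities stay human-chosen while the comparison itself is automated.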
Garth Sundem 09:07
Okay, so the way we place studies at a congress is different than the way we place studies in a journal. Again, here's another gap in my background understanding: are journals and congresses the two outlets that pubs teams work with, or do their outlets go beyond those two? And thus, does our conversation about data need to take into account more than congresses and journals? What do you think, Mike Cashman? Either one of you.
Mike Taylor 09:42
Yeah, I can talk a little bit about that, because I think things like podcasts, like the one we're on right now, videos, and those other kinds of softer publications, if you like, I think all of these will eventually be incorporated into a more data-driven approach. Now, potentially there's a huge proliferation of those things being generated by AI; that's going to come at a cost, and we're going to need to know whether people are using them and what messages they're taking away from them. I mean, that's setting aside the whole framework for approving these things. But I do think that as people experiment, and plain language summaries are another example, as people use more of these kinds of assets in their professional lives to communicate our data to an external world, we really need to be thinking: are these working? Not only do they contain a message; is that message being read and actioned?
Garth Sundem 10:50
Okay, that's a bold statement. Well, let me see if you're actually making this statement: are you saying that pubs teams are going to take on podcasts? Mike Taylor, that seems like a med comms thing.
Mike Taylor 11:04
I think we're going to see an increasing breaking down of those lines. A lot of the folks I talk to are in smaller organizations, so they naturally have those weaker barriers, if you like. But when we're talking about communications, we often talk about publications, and given the role that communications play, that's a very different thing from whether a research article goes into NEJM or Frontiers in Medicine or whatever. But when we start thinking about a podcast being a version of that same data, or a spoken plain language summary of that data, you can see how those lines start breaking down. And I think it behooves us as people who work in publications to be thinking about where those lines are and how the future looks. You know, if we have an idea of how we'll talk about trial data in the future, what would that look like? It's going to look much more like a range of outputs that are all based on that data. They'll all have to go through the same kinds of approval processes. But ultimately, which platforms work? Where are we getting the audience? Are people listening to it?
Garth Sundem 12:36
You know, it's interesting, Mike. The goal used to be journal publication, right, and high impact factor. And if that was your success metric, then you had to be in a journal. But if your impact metric is clinical practice, or if your impact metric is somewhat independent of a journal, then a publications team might also be thinking beyond the journal. But Mike Cashman, let's get back to some implementation. So one of the things that we brought up here is AI. What I'm wondering is, if you're a publications team manager, are you saying to your team that at the point of journal selection, you should pull up this cool journal comparison tool, and that's going to help influence where you're submitting? Or are our tools now sophisticated enough that they're guiding the process and telling us when we should be using them? Yeah, how do we trigger when to use data-driven decision making?
Mike Cashman 13:52
So I think it's yes and yes, and that's an evolving answer, right? I mean, we couldn't end the podcast without talking about AI, but I think it actually is a really good time. People feel like we're exiting the hype phase, because even in their lives outside of work, you're seeing the practical benefits with things like ChatGPT, and with each new model you feel and experience the difference. So I think we're exiting that hype phase and starting to see practical benefit. And this is real stuff; this is stuff that, with partners, we're starting to pilot and play with in sandboxes. And what I think it represents is really the next iteration of what we've talked about. For a long time, it was: get the data, get it into reports and dashboards, and then you've got to do the work of actually finding it and acting upon it. Our last iteration, as we've mentioned, has really been embedded decision support, embedding that data at the point of decision. And what we're looking at with AI now is kind of like a background assistant that can be predictive and even prescriptive. So it's that iteration towards the proactive element of the tool. I think we'll see things like making suggestions or providing options at the time of decision based on content, on metadata, on prioritization of certain variables that can still be human-tweaked. And then you've got human intervention, of course, at the end to make the decision, but you're getting those prompts and those suggestions from AI tooling. And then you have the benefit of taking a look after the fact and saying, okay, what was suggested, what did we ultimately go with, and how did things shake out for us?
Garth Sundem 15:41
That's so interesting. Mike Taylor, I think a lot of our conversations have been about the data, how to get the data, and what the data tells us. And it seems like we may be talking about a step beyond that, where we've had the data, and now we have tools that can tell us what it means and imply how we might act on it. That seems like one step further to me. Does that seem like a step further to you, Mike Taylor?
Mike Taylor 16:13
I think it's a little bit like a line dance, Garth. We're going to be taking a step forward and a step back, and we're going to be doing this for a while.
Garth Sundem 16:20
And that has nothing to do with fishing. Yeah.
Mike Taylor 16:23
I've gone to the barn now, yeah. You may or may not know, but I own some cows as well. Okay, that's for another day. So the other day, I was talking with someone whose name I am not going to mention, and we were talking about using AI to summarize impact. In other words, you give it a bunch of numbers and it gives you an answer back. And she said, well, that's great, but I need to be able to click on this and get an explanation of those numbers, and see the numbers, and be able to drill into them. That's something which AI, dare I say it, struggles to do. If it says this is a better thing to do than that thing, and you say, explain it, where's the evidence, where's the citation? We're not there yet. And that's assuming that humans are almost less engaged in the loop, have less of a critical faculty, because it's putting some of the onus on humans to interpret it. So right now, if an AI makes a declarative statement about the role of a publication or the role of a journal, you need to query it, right? In our professional lives, we have to be able to look at what it's saying, understand why, query it, justify it, establish it as a fact. Sometimes, I would argue, that appearance of wisdom is more time-consuming than acquiring the wisdom itself in the old sort of way. So I think we have to be very careful about how we use AI in that space and what we use it for. And at the risk of sounding like a horrendous old Luddite, which I am not, AI is just one technology. There are the data science folks in your organizations who are really good at doing things like machine learning, which seems really old school these days but is actually a really robust technology for making, for example, if/then or either/or decisions, right? So one of the really key things is: okay, I might be able to get this into NEJM, and it's going to be a huge investment of time, and it's going to be the only thing I do this year; or I can get it into a slightly lower journal, and I'd be able to do something else instead. Those are things which may be better done using other technologies, and we should be open-minded to that.
Garth Sundem 19:06
Mike Cashman, is the question back to workflow? Is the question where to put the human in the publications workflow, and is that changing?
Mike Cashman 19:20
I think the question right now is probably where to put the AI, actually. We know that, particularly in a regulated environment, the human is not leaving the workflow anytime soon. So really it's not thinking of AI as a thing in its own right; it's thinking about which use cases we want to apply AI to, and that's evolving over time, right? I think, for example, plain language summarization is an area we've looked at; that's different than journal selection, and they're going to use different AI capabilities. And so there's a variety of use cases that we're considering. And I think part of the design element is, one, testing the credibility and the fidelity of the AI, and then two, deciding: does that actually make my life easier, and where do I insert the human? Because, at least for the time being, the human's not going away.
Garth Sundem 20:19
Okay, cool. Well, we are out of time today. Wow, I know, so soon. It's so interesting that, you know, it used to be that publications teams didn't really use data beyond the impact factor. Then they came to the point where they knew that they needed to use data. They found the data. They could then build representations of the data that could be interpreted by humans. We're moving towards interpretation, a little bit, by AI, and I would be interested at some point in the future to talk about the human role in publications as we move forward with more use of AI. Let's leave it there for today. Mike Cashman and Mike Taylor, thanks for joining us. MAPS members, don't forget to subscribe, and we hope you enjoyed this episode of the Medical Affairs Professional Society podcast series, Elevate.