In this third podcast episode in collaboration with Open Health, MAPS speaks with Jessica Ingram, Managing Director of Learning and Development at Open Health, and two of her team: Briony Frost, Learning Design and Development Specialist, and Jessamy Lowe, Senior Account Handler. Joining Jess, Jessie, and Briony is Seema Haider. Seema is the former Health Economics and Outcomes Research Cluster Lead for the Upjohn Business Unit at Pfizer. She also holds a Visiting Professorship at Hebei University, China, is a Faculty Lecturer at IFAPP-King’s College, London, and is an International Advisor for ISPOR India Chapter. In this podcast, they will be offering insights into metrics and how to measure the impact of internal training. OPEN Health is a global, full-service medical communications agency offering Medical Affairs consultancy and content, publications, medical education and internal training.

MODERATOR: Jessica Ingram

PANELIST: Seema Haider

PANELIST: Briony Frost

PANELIST: Jessamy Lowe
Following is an automated transcription provided by otter.ai. Please excuse inaccuracies.
Garth Sundem
Welcome to this episode of the Medical Affairs Professional Society Podcast Series, Elevate, gathering the voices of Medical Affairs thought leaders and stakeholders to explore current trends, define best practices and empower the Medical Affairs function. I’m your host Garth Sundem, Communications Director at MAPS. And today we’ll be speaking with Jessica Ingram, Managing Director of Learning and Development at Open Health, and two of her team, Briony Frost, Learning Design and Development Specialist, and Jessamy Lowe, Senior Account Director. Open Health is a global, full-service medical communications agency offering Medical Affairs consultancy and content, publications, medical education and internal training. Joining Jess, Jessie and Briony is Seema Haider. Seema is the former Health Economics and Outcomes Research Cluster Lead for the Upjohn Business Unit at Pfizer. She holds a Visiting Professorship at Hebei University, China, is a Faculty Lecturer at IFAPP-King’s College, London, and is an International Advisor for ISPOR India Chapter. In this podcast, they will be talking about metrics and how to measure the impact of internal training. Jess, Jessie, Briony, Seema, welcome! Let’s start with you, Jess.
Jess Ingram
Thanks, Garth. It’s great to be back for this final episode in the Open Health podcast miniseries on how internal training can support patient outcomes. In our previous two podcasts, we’ve discussed two crucial factors in designing effective and efficient internal training. Firstly, we looked at how to position the patient as the ultimate priority in training, and then we looked at how to engage, motivate and support staff to engage with internal training. In our final session today, we’re going to talk about metrics. This is a really hot topic, because we all want to demonstrate return on investment, but we also want to make sure we’re measuring what really matters. In other words, is our training really delivering on that ultimate goal of improving patient outcomes? We also have a great opportunity at the moment, with the switch to more virtual interaction, to increase the number of metrics we can gather. But we need to be really careful that we don’t drown in data to the extent that we learn nothing. So today, we’re going to discuss which metrics are useful and share some practical advice on how we can use them to measure the impact of our training programs. Jessie, could you give us some insights into which metrics we may want to measure and why?
Jessie Lowe
Absolutely, thanks, Jess. So I think the first thing really would be to clarify what we mean by metrics. I always see them as quantifiable measures which are used to track and assess the status of a specific process. Now, goals are no use unless they can actually be measured to show whether success has been achieved in a learning program. For example, say you want to improve staff performance: that’s very hard to measure unless you specify how you’re going to quantify that improvement. Saying you want to increase call rates by 25% is a much better measure than just saying that you want to improve performance. And we’ll also want to link these back to the learning outcomes that we laid out at the beginning of the program. Actually, Briony can chip in here.
Briony Frost
Yes, I’m the one who is forever going on about making sure we get our learning outcomes right. And we have to get them right at the start of the training, because they need to be designed in a way that makes them measurable. You have to use the right language and think about what you want your learners to be able to accomplish by the end of the training, in order to really establish what you want to measure, both through your assessment and through your data gathering. And that’s really the only way you can work out whether your learners are getting where they need to be and obtain good-quality metrics, too.
Jessie Lowe
Thanks, Briony. And yeah, so the three key metrics that we really focus on during our training programs at Open Health are knowledge, competencies, and confidence. It’s the combination of these three that we really see show results in enhanced performance. This means that our learners find that they have not only a clear understanding of the science and the latest data, but they’re also able to apply that knowledge to clinical practice and patient needs.
Seema Haider
So, you know, metrics are intrinsically essential to any training initiative. This is because metrics help us to obtain insights into the impact the training is having on aspects such as relevance, understanding and knowledge of the content, and confidence gaps, and also, very importantly, into what needs to be improved or changed for future trainings, and whether it’s worth it for the organization to invest in the area under discussion.
Jess Ingram
I completely agree. And I think something really important that we’ve already started unpacking there is that we actually need metrics to measure different things. We mentioned knowledge, competencies and confidence. So I guess the next question is really, how do we go about measuring those different things? What tools and processes do we use?
Jessie Lowe
So there are a range of different ways that we can go about measuring these. At the most basic level, there are activity and engagement metrics: things like course completion, how many people attended a particular event or online session, the time spent on an online course or the time to complete it, and participation in learning activities and formative assessments. A stage up from that, there is knowledge and understanding, so really looking at what learners have gained from a learning program. This can be done through things like pre- and post-training assessments; asking the same questions both before and after a training session is a great way to assess a change in knowledge and understanding. Beyond this, there is the confident application of that knowledge, and we find this is really effectively assessed through things like scenario-based activities and role plays. What’s also worth mentioning is how all this data is brought together and displayed. We have a dashboard that we tend to set up when we’re looking at specific training programs, where we bring together all these different measures, and that creates a wonderfully holistic view of both the current status and trends over time. And then finally, we also have to consider qualitative metrics. It’s all well and good having lovely data, but you also need to get down to the detail of what individual learners actually thought of the programs and gather that personalized feedback, which can be a real boost as well. We do this through forms and surveys, mainly.
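As a rough illustration of the kind of pre- and post-training comparison and dashboard roll-up Jessie describes, here is a minimal sketch in Python. The learner records, field names and scores are hypothetical placeholders, not any specific Open Health tooling.

```python
# Hypothetical example: rolling a few training measures up into a simple
# program summary. All learner names, fields and scores are illustrative
# placeholders, not real data.
learners = [
    {"name": "A", "completed": True,  "pre_score": 55, "post_score": 80,
     "feedback": "Clear and relevant"},
    {"name": "B", "completed": True,  "pre_score": 60, "post_score": 85,
     "feedback": "More role plays, please"},
    {"name": "C", "completed": False, "pre_score": 40, "post_score": None,
     "feedback": None},
]

# Activity/engagement: share of learners who completed the course.
completion_rate = sum(l["completed"] for l in learners) / len(learners)

# Knowledge and understanding: average change between the pre- and
# post-training assessments, counting only learners assessed both times.
assessed = [l for l in learners if l["post_score"] is not None]
avg_gain = sum(l["post_score"] - l["pre_score"] for l in assessed) / len(assessed)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Average knowledge gain (pre vs post): {avg_gain:.1f} points")

# Qualitative metrics: free-text feedback collected via forms and surveys.
for l in learners:
    if l["feedback"]:
        print(f"Learner {l['name']}: {l['feedback']}")
```

In practice, the same roll-up would also pull in attendance, time on course and scenario-based assessment results, alongside the qualitative survey responses described above.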
Seema Haider
Yeah. So as we move forward, you know, we begin thinking about impact, and there’s a variety of techniques that can be used to measure impact. As Jessie mentioned, there are before-and-after assessments, which can be on knowledge, or direct questions on the value of the training and interest in future trainings. In terms of measuring things like confidence, it’s important, where possible, to have access to platforms during training that allow, for instance, rolling questions, so questions are coming in as you’re training and you have the option of answering them as they arrive, and also to implement polling questions to measure knowledge, competencies and confidence. I’ll give you an example. I have experience in training study coordinators and medical science liaisons on HEOR data and quality-of-life questionnaires. And what we’ve seen is that as we moved deeper into the training, as knowledge of the subject increased, the confidence of our participants also increased: the number of participants answering polling questions went up, and the number of rolling questions coming in started to go up as well.
Briony Frost
I’m just going to jump in here and pick up two points, actually, from Seema and from Jessie. So firstly, the kind of rolling questions that Seema just talked about are great because they do double duty. They work as what’s called formative feedback, giving the facilitator and the learners a chance to check their knowledge, understanding and confidence as the course progresses, which means that trainers and learning peers can supplement materials or signpost back and forward to other aspects of the course to shore up learning development as it happens. And then, of course, they act as interim measures of knowledge, competencies and confidence: when you review your training provision, they give you a sense of what the final metrics might look like. I think the other thing that’s worth pointing out is that we need to be very careful in this space with engagement metrics. They can be really, really useful, but they also have the potential to be exclusionary. You don’t want to make it difficult for people to engage in training in ways that could prevent them achieving success. For instance, you might want to have an approximate minimum time to complete a course so that individuals don’t just skip to the end, but you don’t really want to have a maximum time to complete the course unless there’s a time-dependent element assessing a time-dependent skill. In the scenario where you’ve got a maximum time but no need for it, all you’re really doing is risking penalizing individuals who have things like dependent family members, who might be neurodiverse themselves, might use assistive technologies or have sight, hearing or motor impairments, might be working in a second, third, fourth or even more language, or might simply be trying to work around a dodgy internet connection. None of these things reflect on their abilities, but they may affect the time it takes them to complete the task. Likewise, participation metrics shouldn’t be relied on too heavily, or should be taken across a whole program rather than, say, just attendance or speaking up, and for the same reasons as before. So when we’re going to employ metrics like this, we need to use them really carefully: taking more than one at a time, comparing and contrasting them, reading them in the context of the qualitative feedback, and reflecting on the ways they can help us improve our programs, rather than just treating them as measures of whether staff have done their due diligence with training. And we also need to keep this in mind when we’re measuring confidence, because people can lack confidence for reasons that are actually outside the remit of the training program, and they may need support in other ways, things that go beyond knowledge and competencies development. Which is why it’s so important, as Jessie said, to get that qualitative feedback for context when measuring learning and development training success.
Jess Ingram
Thanks, Briony. And you mentioned there one of the potential challenges of measuring confidence, and certainly we think about that a lot. It’s quite an ethereal thing, isn’t it, and it can be quite difficult to measure, or at least sometimes I think we think it is. So what do we think as a group: can we really get an accurate measure of confidence?
Jessie Lowe
It’s definitely not an easy task. I mean, as you’ve both said, it’s something that we need to acknowledge is problematic, because it’s often self-assessed. So you need to use qualitative measures alongside quantitative measures to get enough information about what’s changed and why.
Briony Frost
And there are things you can do with quantitative results to check the significance of change pre- and post-training. None of these approaches is ideal, but they can certainly help you get more workable results. The paired-sample t-test is quite good here. If anyone isn’t familiar with it, it’s a parametric test which compares two means from the same individuals, in this case self-assessment values taken pre- and post-training, to see whether there is statistical evidence that the difference between the two observations on a particular outcome, here confidence, is significantly different from zero. We need to be aware, though, that when we’re dealing with confidence, we’re not necessarily looking for a straightforward upward trajectory in either an individual or a cohort. We need to keep in mind the Dunning-Kruger effect: you’re always going to get individuals and groups who cannot objectively assess their own competence or incompetence. When you’re dealing with people who are potentially overestimating their own competence and confidence at the start, if they come out of a training with a noted decline in self-assessed competence and confidence, then actually that’s a really positive outcome, as they’ve been forced to re-evaluate their abilities as lower than they previously understood. As Jessie mentioned, qualitative data is really powerful in this space, because it contextualizes any dip or rise in confidence for the training provider, and, for the learners, you’re also encouraging them to reflect on why their confidence assessment has changed, how the training has affected this and what this means for their future development. And hopefully it means they’ll be more open to training in the future. The confidence metric really comes into its own in training and development, as long as you’re not using it in isolation. What you really want to be able to do is to triangulate between knowledge, competencies and confidence to get a comprehensive picture of how your learners, as both individuals and cohorts, have developed during the training.
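As a rough illustration of the paired-sample t-test Briony describes, here is a minimal sketch in Python using scipy.stats.ttest_rel, assuming self-assessed confidence ratings were collected from the same learners before and after a training session; the ratings below are hypothetical.

```python
# Minimal sketch: paired-sample t-test on self-assessed confidence ratings
# (e.g. on a 1-10 scale) collected from the same learners before and after
# a training session. The ratings below are hypothetical.
from scipy import stats

pre_confidence  = [4, 5, 3, 6, 4, 5, 7, 3, 5, 4]
post_confidence = [6, 7, 5, 6, 5, 8, 7, 5, 7, 6]

# ttest_rel tests whether the mean of the paired differences is
# significantly different from zero.
t_stat, p_value = stats.ttest_rel(post_confidence, pre_confidence)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant change in self-assessed confidence.")
else:
    print("No statistically significant change detected.")
```

Picking up Briony’s point about the Dunning-Kruger effect, a significant change in either direction can be informative: a drop in a cohort that started out overconfident may reflect a healthy recalibration rather than a failed training, which is why the quantitative result still needs the qualitative context she and Jessie describe.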
Seema Haider
Yeah, so I agree with Briony and with Jessie that measuring confidence can be challenging. But we have to keep in mind that it’s still an important metric, because changes in confidence can lead to changes in performance in areas like calculated risk-taking, interactions and credibility, and can help in determining for which groups or functions uptake of the training was better or more challenging. Changes in confidence can also be measured indirectly. For example, we can look at things like requests coming in for more training, requests by new functional groups for training, including groups outside of Medical Affairs like Corporate Affairs or other study teams, requests for increased training budgets, seats at the table for learning and development initiatives such as new or updated standard operating procedures, and trainings being rolled out more widely across the organization or provided to a much wider audience. And then finally, you know, recognition for the job that’s being done. This is all evidence of increased confidence as a result of effective and efficient internal training programs.
Jess Ingram
Thanks, some great ideas there. And I really love that picture of looking across different measures and building up a more complete picture. I think that’s the key: collecting data wherever you can, so that you can start understanding not just what’s happening, but why. And, as I said, I think the key is then to get the balance right, so we don’t get overloaded and almost paralyzed by our metrics. For me, the way to address that is staying focused on the true objectives of our training and designing in those metrics right at the beginning of the program. I’m conscious of time, so let’s finish off with the $64,000 question: do we think it’s possible to measure the direct impact of internal training on patient outcomes?
Jessie Lowe
That really is the big question, isn’t it? Direct impact? I would say no. But indirectly, certainly. Internal training really enables staff to better explain, analyze and discuss the clinical landscape, products, devices and lifestyle management strategies. It also improves their knowledge of the totality of the relevant disease, and improves their ability to communicate succinctly and clearly with HCPs, which then enables the HCPs themselves to make the right choices for their patients.
Briony Frost
So as Jessie said, what we know about internal training is that if you design it really well, then you are always creating the potential for learners, if they’ve met the learning outcomes successfully, to be better at the job that they do. And we’ve already discussed in previous podcasts how the roles of the Medical Affairs professionals we train plug into the overall landscape that contributes ultimately to better patient outcomes. From a theoretical perspective as well, if I pick up again the idea of measures of engagement, we can use these as an indication that learners are seeking to improve their performance, and they obviously work alongside pass marks and the other metrics that we have already talked about to really demonstrate whether a training program has improved their professional practice. We also know that instructional design approaches mean that we’re more likely to be designing programs with clearer goals and more applied learning activities, making sure that learners are developing knowledge, competencies and confidence together, so they’re not trying to bridge the gaps that are created by training them separately. And that produces a more consistent performance within and across teams who’ve undertaken the same training. We can also make sure that in our training we create reusable resources, so that we’re helping provide teams who undertake the training with better tools for working with their HCPs, better technologies and better information. And again, all of these aspects contribute to that wider picture of how we help to ultimately improve patient outcomes.
Seema Haider
Yeah, so I also don’t think that you can directly measure patient impact. But, you know, there are lots of indirect ways of measuring patient impact, through things like increased knowledge, capabilities and confidence, and better staff performance. You can have people who are better positioned in terms of their capabilities and confidence, and better informed in terms of both the depth and breadth of their knowledge of new treatments, diseases and outcomes, to interact with stakeholders: those who prescribe, those who pay and those who manage treatment or well-being. And this obviously leads to more meaningful interactions with these stakeholders, who can then in turn impact patient outcomes. You can also have people who are better equipped to develop relevant tools that can then be used to impact the knowledge of stakeholders, who once again, in turn, impact patients. So I’ll give you an example. We held multiple trainings, across the globe, for all functions within the organization; we were working on an antibiotic, and we wanted everyone to have similar knowledge and insight in order to get the buy-in and budgets needed to conduct extensive studies and other initiatives looking at the impact of this antibiotic on patient outcomes and the cost of treatment. The training really helped inform and anchor the team in committing to a plan together, because everybody was on the same page afterwards. So it’s critical to have very good training programs, with relevant metrics, that can anchor a team together in making commitments on budgets, resources and so on. And, you know, at the end of the day, the goal is really to impact colleagues, who impact stakeholders, who in turn impact patients.
Jess Ingram
Great, thank you Seema, Briony and Jessie. I think we’ve made some really important points there, and there’s a really clear call to action to use metrics wisely, but not be put off by the challenges that exist in this area. So I’ll try to wrap up by thinking about our take-home message from today and from our previous podcasts together. For me, that is that internal training absolutely supports better patient outcomes. It’s an absolutely crucial contributor to the complex web of interactions, information and the performance of the individuals that make up our teams. So if your internal training program lives and breathes the patient experience and keeps its eyes locked on that ultimate goal of improving patient outcomes, it’s so much easier for your team to embody that vision in their day-to-day activities. Garth, thank you so much for hosting us. I hope everyone’s found that useful. Back to you.
Garth Sundem
Thanks, Jess, Briony, Jessie and Seema. To learn more about how your organization can partner with Open Health to drive positive change in healthcare communications and market access, visit OpenHealthGroup.com. MAPS members, continue the conversation at our community portal. And don’t forget to tune in on December 10, when Open Health will be back with a webinar that takes a fresh look at the topics they’ve covered in this podcast series. We hope you’ve enjoyed this episode of the Medical Affairs Professional Society podcast series, Elevate.