Drug Safety Matters
Uppsala Reports Long Reads – Ensuring trust in AI/ML when used in pharmacovigilance
June 27, 2024
Uppsala Monitoring Centre

Ensuring trust in AI is vital to fully reap the benefits of the technology in pharmacovigilance. Yet, how do we do so while grappling with its ever-growing complexity?

This episode is part of the Uppsala Reports Long Reads series – the most topical stories from UMC’s pharmacovigilance news site, brought to you in audio format. Find the original article here.

After the read, we speak to one of the authors of the article, Michael Glaser, to learn more about how AI and ML have been used in pharmacovigilance so far, and what needs to happen to ensure their continued use in the field.

Tune in to find out:

• How AI and ML are being used today in pharmacovigilance processes
• Why a mindset change is necessary to make full use of AI/ML in pharmacovigilance
• How we may best move forward to implement AI/ML into healthcare

Want to know more?

To learn more about how AI and ML are currently being used in pharmacovigilance, read this scoping review.

To learn more about future trends in the use of AI in biopharma, read this Accenture survey.

  • Despite major interest in using ML and AI for more than task automation, there are a number of barriers to their implementation in healthcare. Check out this future-focused paper on the use of AI/ML in pharmacovigilance that details how to utilise it to its fullest potential.
  • A mindset shift is necessary in how we think about data – how it is shared, and how to generate the data required to effectively train AI/ML models.
  • A validation framework must be developed for AI-based pharmacovigilance systems. One suggestion is to do so using a risk-based approach.
  • While there is much interest in using recently developed AI technologies such as ChatGPT, preliminary studies like this one suggest that the technology still has some way to go before it is useful in pharmacovigilance.
  • The World Health Organization has published an extensive guideline on the ethics and governance of AI for health.

Join the conversation on social media
Follow us on X, LinkedIn, or Facebook and share your thoughts about the show with the hashtag #DrugSafetyMatters.

Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!

About UMC
Read more about Uppsala Monitoring Centre and how we work to advance medicines safety.

Transcript

Alexandra Coutinho:

Ensuring trust in AI is paramount to fully reap the benefits of the technology in pharmacovigilance. Yet how do we do so while grappling with this ever-growing complexity? My name is Alexandra Coutinho and this is Drug Safety Matters, a podcast by Uppsala Monitoring Centre, where we explore current issues in pharmacovigilance and patient safety. This episode is part of the Uppsala Reports Long Reads series, where we select the most topical stories from our news site, Uppsala Reports, and bring them to you in audio format. Today's article is "How do we ensure trust in artificial intelligence and machine learning when used in pharmacovigilance?", written by Michael Glaser, Rory Littlebury, and Andrew Bate, and published online in February 2024. After the read, I sit down with one of the authors, Michael, to learn more about the challenges in implementing artificial intelligence in pharmacovigilance practice. So make sure you stay tuned till the end. But first, let's hear the article. [The original article is available at: https://uppsalareports.org/articles/how-do-we-ensure-trust-in-aiml-when-used-in-pharmacovigilance/] Artificial intelligence and machine learning seem to be presented as the solution to absolutely every problem in our lives lately. How do we get from today to this projected future state? It is worth remembering that the routine use of robotic process automation, AI, and machine learning across multiple industries, including pharmacovigilance, is not new. Indeed, Uppsala Monitoring Centre has led, and continues to lead, pioneering machine learning work, dating back to the first routine use of machine learning in pharmacovigilance for duplicate report detection. There is always promise and excitement within pharmacovigilance to be able to leverage technological advances. Currently, technologies are being increasingly trusted to reduce the cost of current activities and to potentially improve how patient benefit-risk is assessed. It is important, however, to note that pharmacovigilance activities in a pharmaceutical company must adhere to a diverse set of legal regulations that dictate how activities are conducted across the entire pharmacovigilance cycle. These regulatory frameworks are themselves complex and made further complicated by the many variations that exist worldwide. Pharmaceutical company processes align around these diverse regulations to ensure the integrity of pharmacovigilance activities. It is these activities that are essential to ensuring patient safety, maintaining public trust in medicines and vaccines, and maintaining trust in AI and machine learning tools among all stakeholders, which is why global harmonisation is important to reduce unnecessary effort and maintain focus on tasks benefiting patient safety. It is accepted practice to examine incoming and outgoing data sets, examine the automation software, potentially through a line-by-line code review, monitor automation metrics, and utilise risk management and control plans to identify potential areas for failure and their resolutions. This robotic process automation documentation set, along with the code itself, is made available for internal audit and external inspection. This approach essentially treats robotic process automation as a white box entity where every component of the technology is open and available for review, giving full transparency. Does such a management model fit given recent exciting advances with AI?
Currently, one industry-wide struggle being grappled with in regulated areas is how to responsibly govern AI and machine learning software and algorithms. A recent Accenture industry survey found that 92% of biopharma executives note the need to have more systematic ways to manage emerging technologies responsibly. Should there be an expectation that lines of algorithm code are inspected or reviewed, as with robotic process automation technologies, or should we ignore the algorithm details and only inspect the results? Should every dataset used to validate an algorithm be retained as evidence? What role should pre-trained large language models or generative AI play within pharmacovigilance when the training datasets and algorithm specifics are unknown? How do we handle possible AI hallucinations and the variability of generated results? Some current thinking on AI and machine learning implementation appears to be headed in the same direction as for robotic process automation and other technologies. However, this may not be sustainable in the future. Already, the algorithm software and pre-trained models for these technologies are rapidly increasing in complexity. The ability to store ever-larger test datasets and results is becoming more difficult, and the move to federated networks like Sentinel in the US and Darwin in Europe, where data is available for analysis but not centralised, makes it difficult to retrieve that data. To ensure appropriate use of the technology, assessment of AI and machine learning outputs and related processes is arguably growing in importance, given the increasing complexity and the potential use of large language models. We suggest that ensuring trust in AI and machine learning technology can make use of existing risk-based pharmacovigilance processes as a framework. The future of trust in AI and machine learning could focus on monitoring the outcomes of a process for safety, reliability, and effectiveness. Using a risk-based approach could help ensure that any change in or impact to business processes is fully understood and can be successfully managed. Moreover, it should involve all stakeholders working closely together to harmonise on a process for implementing and monitoring AI and machine learning. This way, responsible data access for all partners in the pharmacovigilance ecosystem will be assured. By utilising a modified and simplified version of existing pharmacovigilance system frameworks and focusing on the safety, reliability, and effectiveness of the output of AI and machine learning systems, rather than the data set and algorithm itself, we may build trust in these technologies, much like we do for other existing technologies. Building trust means that when a pharmaceutical company takes a systematic approach to assess and monitor the quality of data inputs and outputs, with targeted spot checking that is proportional to risk, a pharmacovigilance organisation can successfully balance the goal of ensuring patient safety with making the most of the advantages of an AI or machine learning system. This approach does not require access to algorithms or data sets. Ensuring trust, rather, can rely on a black box approach to AI and machine learning technologies that focuses more on outcome validation and access to high-level metadata. Ensuring trust in AI and machine learning systems is one example of an area in which all participants must engage each other and constructively come to a set of agreements.
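
To make the article's idea of targeted spot checking proportional to risk a little more concrete, here is a minimal sketch in Python. The risk tiers, sampling rates, and case identifiers are purely illustrative assumptions and are not drawn from the article or from any regulatory guidance.

```python
import random

# Illustrative sketch only: hypothetical risk tiers and sampling rates.
SPOT_CHECK_RATES = {
    "low": 0.02,     # e.g. routine literature screening hits
    "medium": 0.10,  # e.g. automated case coding suggestions
    "high": 0.50,    # e.g. seriousness or causality assessments
}

def select_for_spot_check(outputs, risk_tier, seed=42):
    """Sample a fraction of AI/ML outputs for human review,
    at a rate proportional to the risk tier of the process step."""
    rng = random.Random(seed)
    rate = SPOT_CHECK_RATES[risk_tier]
    return [output for output in outputs if rng.random() < rate]

if __name__ == "__main__":
    # Pretend these are identifiers of cases processed by an AI/ML step.
    case_ids = [f"CASE-{i:04d}" for i in range(1, 501)]
    to_review = select_for_spot_check(case_ids, "high")
    print(f"{len(to_review)} of {len(case_ids)} outputs flagged for human spot check")
```
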
Continuing the current white box mindset will stifle innovation and limit advances that may benefit patients, particularly as the volume of pharmacovigilance data and the myriad of data sources continue to increase at a rapid pace. The conversation must result in a unified set of global regulations around AI and machine learning systems. A scenario like the disparity of adverse event reporting rules between countries must not be allowed to propagate into the regulation of these technologies. If global regulations remain diverse and unaligned, delays or increased risk may result. This will present a barrier to pharmaceutical companies utilising, implementing, and reaping the benefits from new technological advances. Studies have shown that AI and machine learning innovations are starting to make progress within pharmacovigilance, so it is important to be nimble, react quickly, and strive not to stifle innovation. As part of this conversation, global forums should be organised to discuss and collaborate. TransCelerate's important work as a cross-industry consortium on intelligent automation, validation, and guidance documents is an example of such a forum. Making sure that recommendations for the use of AI and machine learning in pharmacovigilance are suitable for all stakeholders is critical. In this regard, the CIOMS Working Group on AI is a progressive step forward. Patient safety remains the unwavering focus of all pharmacovigilance activities. AI and machine learning systems offer great promise in improving operational processes, freeing human resources for higher-value activities, and providing insights that might not be possible otherwise. By working together to rethink and harmonise the global regulatory framework, and by focusing on technological outputs rather than the technology itself, the potential of AI and machine learning can be unlocked to enable advances in better understanding the benefit-risk of medicines and vaccines for patients, without compromising on patient safety. That was the article, but I couldn't resist contacting the author for more. Michael Glaser is the Safety Innovation Technology Director of the Global Safety and Pharmacovigilance Systems Organisation at GSK. He joins me virtually from Philadelphia to answer a few more questions on the article. Welcome to the show, Michael.

Michael Glaser:

Thank you. Thank you, good to be here.

Alexandra Coutinho:

So, to get us started, a key point that stood out to me in the article is this need for a mindset change in how we think about AI and machine learning when deciding how they will be used in our systems and processes. How much of the pharmacovigilance cycle currently involves artificial intelligence and machine learning?

Michael Glaser:

So today, AI and ML are utilised routinely in PV processes. Some examples include automated monitoring and review of social media, natural language processing for screening scientific literature, identifying medication errors in unstructured text, and language translation. There are lots of different use cases today. And I think these examples are really interesting because AI and ML are being used to solve very specific, targeted problems for PV. Another PV use to think about and mention is really the use of machine learning in particular to find insights in data, and recently, over the last year, there has been a tremendous surge in experimentation with generative AI. I firmly believe that the routine use of AI and ML exists today within PV and is growing.

Alexandra Coutinho:

Wow, you can kind of sense that happening in a lot of different areas, not just pharmacovigilance. So, even though some form of AI and machine learning is already being used in the pharmacovigilance cycle, with the rapid progress in AI and machine learning, it becomes important to change the way we think about these technologies. What mindset change needs to happen to ensure proper implementation of AI into pharmacovigilance?

Michael Glaser:

I think there's a couple of things to talk about here. First is really to be able to move past a fear of the unknown, and you know, I work for GSK and we are a science company. Asking questions and exploring potential solutions is a key part of our corporate DNA. We really try and apply that same type of philosophy around asking questions and exploring answers to exploring AI and machine learning for uses within PV. At GSK, we have a philosophy that we refer to as small bets. The idea is to make a small investment of time, resources, or even funding into exploring a particular solution for a problem. So we take this mindset and it really prompts us to ask questions such as, you know, can AI or ML solve a problem? And we then are driven to really explore potential ideas and solutions for that problem. So I think that's important. The second is really something that we referenced in the article. We want to openly contemplate moving past what we've called a white box mindset for AI and ML algorithms, and what we mean by that is we want to stop deeply scrutinising the algorithm itself, and really start thinking about the algorithm as a black box, something that none of us get to look inside, really focusing our attention and our efforts on understanding what goes on outside that box. Understanding the inputs, making sure that the inputs are a representative sample of the problems that we're looking at. Making sure that those inputs are unbiased. Understanding the outputs of the algorithm. Thinking about when the outputs might go wrong and how to identify and correct mistakes. Focusing on the risks involved and assessing the benefits of implementing AI and ML into PV processes while really thinking and working through those risks. Within PV, we have a broad set of processes that are already defined. We refer to them as a quality management system, or QMS, where we manage these risks and take appropriate actions. So we're thinking about, again, reimagining our view of the algorithm and leveraging these existing processes together.
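
As a rough illustration of this black box mindset – checking what goes into and comes out of the algorithm rather than inspecting the algorithm itself – here is a minimal Python sketch. The attribute names, tolerance, stand-in model, and labelled sample are all hypothetical.

```python
from collections import Counter

def input_representativeness(batch, reference_shares, attribute, tolerance=0.10):
    """Compare the share of each attribute value in the incoming batch
    against a reference distribution, flagging large deviations."""
    counts = Counter(record[attribute] for record in batch)
    total = sum(counts.values())
    flags = {}
    for value, expected in reference_shares.items():
        observed = counts.get(value, 0) / total
        flags[value] = abs(observed - expected) > tolerance
    return flags

def output_accuracy(black_box_model, labelled_sample):
    """Score the model on a small labelled reference sample,
    without inspecting how it arrives at its answers."""
    correct = sum(1 for text, label in labelled_sample if black_box_model(text) == label)
    return correct / len(labelled_sample)

if __name__ == "__main__":
    batch = [{"sex": "F"}, {"sex": "M"}, {"sex": "F"}, {"sex": "F"}]
    print(input_representativeness(batch, {"F": 0.5, "M": 0.5}, "sex"))

    # A stand-in "model" that tags any text mentioning a reaction term.
    model = lambda text: "ADR" if "rash" in text.lower() else "no ADR"
    sample = [("Patient developed a rash", "ADR"), ("No issues reported", "no ADR")]
    print(f"accuracy on reference sample: {output_accuracy(model, sample):.2f}")
```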

Alexandra Coutinho:

With your focus on the data that goes into the AI model, it seems that not only do we need to focus on the nature and reliability of the input, but there is also a problem that comes with working with data with regards to the privacy and security of user data. So, knowing that AI requires training on this data to ensure its usefulness for our purposes, to my mind it seems like we need to strike a balance: making data available to train AI models on, while also meeting our responsibility to protect patients' data. So how may we strike this balance?

Michael Glaser:

I think this is really an important topic, to think about it in terms of striking the balance, and when I think about data, let me first say that everything we do within pharmacovigilance has to be focused on patient safety. That really is paramount and at the forefront of everything we do. So if I think about that focus, I really believe that we can protect patient privacy while making patient safety data available for analysis as part of training the algorithm. It really becomes critical to appropriately govern what I think of as the "reuse" of safety data, or using safety data outside of the routine PV activities, you know, like ICSR reporting or signal detection. So keeping that decision about reusing safety data laser focused on patient safety, and not using the information, the data, to go and explore other things like commercial proposals or monetising that data. Again, laser focused on how and under what circumstances do we use this patient safety data to improve patient safety. That focus is important. There are steps you can take to make sure that patient privacy is protected, that data is adequately protected. Anonymisation, for example, I think, is one mechanism that we can use as a tool to strike that balance between patient privacy and having a robust data set for algorithm training.
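
As a very simplified illustration of the kind of anonymisation Michael mentions, here is a minimal Python sketch that pseudonymises a flat case record before it is used for training. The field names are hypothetical, and real-world anonymisation of safety data involves far more than this – dates, free text, and rare combinations of fields can all re-identify patients.

```python
import hashlib

# Hypothetical direct identifiers to strip before model training.
DIRECT_IDENTIFIERS = {"patient_name", "date_of_birth", "contact_details"}

def pseudonymise(record, salt="local-secret-salt"):
    """Replace the patient identifier with a salted hash and drop
    direct identifiers, keeping safety-relevant fields for training."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    cleaned["patient_id"] = token
    return cleaned

if __name__ == "__main__":
    case = {
        "patient_id": "P-001",
        "patient_name": "Jane Doe",
        "date_of_birth": "1980-01-01",
        "contact_details": "jane@example.com",
        "drug": "drug X",
        "reaction": "rash",
        "seriousness": "non-serious",
    }
    print(pseudonymise(case))
```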

Alexandra Coutinho:

A lot to consider with regards to this input data. The article also talks about how hard it is to implement AI into pharmacovigilance activities because it relies so much on legal regulations that govern how activities are conducted across the entire pharmacovigilance cycle, and these frameworks are complex and complicated by variations existing worldwide. So you write that global harmonisation can address this. How do we achieve this harmonisation when there are different legal regulations affecting the healthcare industry as opposed to healthcare organisations such as National Pharmacovigilance Centres?

Michael Glaser:

It's complicated, I think, is really the short answer. So let me say I'm not a legal expert and I'm not a regulatory expert. Okay, when I think of how we work towards harmonising these regulations and making things easy for all of us, I think this conversation and the article that we're talking about hopefully work to bring the conversation and the problem around harmonisation, and why harmonisation is good for us all, into the open. I think that's really a critical first step. One of the things we wrote in the article was advocating bringing all of the parties together for an open dialogue centred around how do we work together, how do we find common positions? And as a piece of that, you start to think about collaboration, and collaboration is already being pursued by international working groups such as the Council for International Organisations of Medical Sciences, the CIOMS group. They have a working group on AI in pharmacovigilance. So, again, bringing together the conversation, being open about it, driving forward with forums where we can all just collaboratively talk and work together on the best ways to utilise AI and ML, with that laser focus on patient safety. I really think that's the important thing – the dialogue to get us going.

Alexandra Coutinho:

Hopefully we can get started with that dialogue with this interview.

Michael Glaser:

Right, absolutely.

Alexandra Coutinho:

So, to ensure trust in AI and machine learning technology, you also suggest making use of existing risk-based pharmacovigilance processes. What does that mean?

Michael Glaser:

So, when we think about risk-based PV processes, you step back. You identify the risks that are involved, you put steps in place to minimise those risks, you correctly monitor the risks as they may or may not happen, and hopefully they don't materialise. You develop an action plan to respond to the risks in advance and you're transparent about it. It sounds simple. It's a lot harder to do than that, but these processes already exist within PV. So one of the things we think about is that, by utilising that same set of processes around risk ideation and mitigation and planning, we can use those processes to ensure and drive trust in emerging technologies like AI and ML.
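
For readers who like to see the idea written down, here is a toy sketch of a risk-register entry reflecting the steps Michael describes: identify the risk, put mitigation in place, monitor a metric, and trigger a pre-agreed action plan. The field names, metric, and threshold are assumptions for illustration only, not an actual QMS specification.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str        # the identified risk
    mitigation: str         # steps put in place to minimise it
    monitoring_metric: str  # what is measured to see if it materialises
    threshold: float        # level at which the action plan is triggered
    action_plan: str        # the pre-agreed response

    def needs_action(self, observed_value: float) -> bool:
        """Return True when the monitored metric breaches the threshold."""
        return observed_value > self.threshold

if __name__ == "__main__":
    risk = RiskEntry(
        description="ML literature screener misses relevant safety articles",
        mitigation="Periodic human re-screening of a random sample",
        monitoring_metric="missed-article rate in the re-screened sample",
        threshold=0.05,
        action_plan="Pause automation and revert to full manual screening",
    )
    observed = 0.08  # pretend this month's re-screening found 8% misses
    print("Trigger action plan:", risk.needs_action(observed))
```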

Alexandra Coutinho:

Can you give any other examples of how this could work? Does such an example already exist, for example, in either GSK or other organisations that you have worked with in the past?

Michael Glaser:

Sure, let me use language translation as an example and I'll preface the example that I'm a native English speaker. I have a very, very small amount of French, a few dozen words, that's it. So yeah, at GSK, the organisation is around the world, so I think language translation for me becomes an important and useful example. In the first use case, maybe I want to think about using AI or ML to translate my conversational English into French to better connect with one of my colleagues who is a native French speaker. Again, it's conversational. So when I think about this particular use case, the quality and the accuracy of the translation that I'm doing with the technology, I really want to think about it as low risk. If a few words are mistranslated or an improper word tense is used, the likelihood that my native French-speaking colleague will misunderstand me or the impact really of that incorrect translation on our conversation, it's just small. So, again, low risk. And if, in this situation, I think I would wholly rely on the technology to do the translation, I wouldn't need, you know, it would be myself, the technology and my French speaking colleague. We wouldn't have an independent or third party bilingual individual to help with that translation. Again, it's a low risk situation. So the flip side to that is thinking about language translation. What if I wanted to translate an incoming adverse event report from French to English? Well, and let me also assume for a moment that I've got some history with the particular AI or ML tool that I'm using to do this French to English translation and based on that experience, I know that sometimes, just occasionally, that the translation tool will make a mistake around patient sex. That is really very impactful to the PV process, mistranslating a patient's sex. So in this case, that's a higher risk to something occurring incorrectly in the process, more impactful to me. So in this case, what I really may want to do is have a human reviewer double checking the translation. Again, I think I still want to use technology in that process, assisting with the process, gaining efficiency, really thinking of it as the human review works in tandem with the technology. But keeping the human in the loop is critically important because, again, as I said, it's a high-risk situation. The cost and the impact of the mistake is critical. So I think about it in that kind of way is that I've taken the situation, I've assessed the risk, developed the plan to manage the risk, particularly when it's impactful, and really this just fully aligns with the set of long established processes within PV to utilise a QMS or that quality management system. So hopefully that long-winded example starts to make a little bit of sense.
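
To spell out the routing logic in this example, here is a minimal Python sketch: the translation engine is treated as an opaque function, and whether a human reviewer double-checks the output depends only on the risk of the use case. The use-case categories and the translate() stub are hypothetical.

```python
# High-risk uses keep a human in the loop; low-risk uses rely on the tool alone.
HIGH_RISK_USES = {"adverse_event_report"}   # mistakes here are costly
LOW_RISK_USES = {"casual_conversation"}     # mistakes here matter little

def translate(text, source="fr", target="en"):
    # Stand-in for a real machine-translation call.
    return f"[{source}->{target}] {text}"

def translate_with_risk_routing(text, use_case):
    """Always use the technology, but route the output to human review
    when the use case is high risk."""
    machine_output = translate(text)
    needs_human_review = use_case in HIGH_RISK_USES
    return {"translation": machine_output, "route_to_human_review": needs_human_review}

if __name__ == "__main__":
    print(translate_with_risk_routing("Bonjour, comment ça va ?", "casual_conversation"))
    print(translate_with_risk_routing(
        "Patiente de 34 ans, éruption cutanée après la première dose.",
        "adverse_event_report",
    ))
```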

Alexandra Coutinho:

Yeah, I think your example already touches on quite a lot of what is being said about using AI in our work – you know, the fear of AI replacing a lot of our roles and responsibilities. That is actually not the case: we will still be needed to check that it's doing its job properly. And considering that we don't know how it works, that it is this black box, it becomes even more important for us to be diligent about checking its output.

Michael Glaser:

Absolutely, absolutely.

Alexandra Coutinho:

Right, just one final question. So a lot of our listeners might be working for regulators or healthcare providers as opposed to companies. How would this AI assessment process differ for companies compared to nonprofit or healthcare organisations?

Michael Glaser:

So, as we've been talking, I think that fundamentally, a risk-based approach to implementation and a strong governance model are needed to make sure that we can trust these amazing advances being brought forward with AI and machine learning, and have confidence that the technologies are doing what we expect them to do. As pharmaceutical companies, we have explicit regulations that we have to adhere to and, as such, pharmaceutical companies and, as we've talked about, regulators need to have this open dialogue, and having that dialogue and working together is really key to being able to recognise the benefits of the technology. Nonprofits and healthcare organisations can utilise the same collaborative framework and the same QMS/risk-based approach that we talked about. I think maybe a difference might be around the risk of inspection, but it really is up to those organisations, the nonprofits and other healthcare organisations, to join the dialogue, and I welcome that. I really want all of us who have an interest in being able to utilise technology for patient safety to sit down and have these conversations. I think that, again, as a broad collaboration – that's how we can move forward together really successfully.

Alexandra Coutinho:

Yeah, as I said, I hope that this interview and the article encourage, or at least start, some of these conversations, especially within the organisations that already exist to collaborate and to create spaces for these discussions to happen.

Michael Glaser:

I do too, I do too, thank you.

Alexandra Coutinho:

Well, that's all the questions that I have for you, Michael. Before we go, is there any last thing that you want to put out there that you weren't able to put into your article or speak about yet?

Michael Glaser:

Just to kind of repeat myself, I think collaboration, open dialogue, and a risk-based approach – these are the keys. I think of them as pillars, or maybe the legs of a three-legged stool, if I can make a wild analogy. I think they're all important and they have to work together, and that really just extends to all of us with an interest in patient safety: continuing to collaborate, continuing to talk and work together, with that laser focus on how do we benefit patients. I think it all comes together and it's really critical.

Alexandra Coutinho:

An optimistic look into the future and very well summed up. Thank you, Michael, for agreeing to do this interview and for making the time to speak to me. It was such a great article, so it was really exciting to be able to get you in to talk about it a little further.

Michael Glaser:

Thank you very much. Thank you for having me. I really appreciate the opportunity to again have the dialogue and, you know, I'm always open to the discussion around it. To continue to repeat myself, I think the conversation – having the conversation – is just important. So thank you.

Alexandra Coutinho:

That's all for now, but we'll be back soon with more long reads, as well as our usual in-depth conversations with medicines safety experts. If you'd like to know more about the use of artificial intelligence in pharmacovigilance, check out the episode show notes for useful links. For more stories like this one delivered straight to your inbox every month, sign up for our free newsletter at uppsalareports.org/subscribe. If you like our podcast, subscribe to it in your favourite player so you won't miss an episode, and spread the word on social media so other listeners can find us. Uppsala Monitoring Centre is on Facebook, LinkedIn, and X, and we'd love to hear from you. Send us comments or suggestions for the show, or send in questions for our guests next time we open up for that. For Drug Safety Matters, I'm Alexandra Coutinho. I'd like to thank Michael Glaser for his time, Matthew Barwick for post-production support, and of course, you for tuning in. Till next time.

Chapter markers

Intro
Article read
Welcome, Michael!
Where AI and ML are being used in pharmacovigilance processes today
We need to change the way we think about AI to harness its full potential
How to handle patient data to ensure patient safety while making data available for training AI models
Achieving harmonisation in AI regulations across industry and organisations
What is a risk-based framework and how can it be used to ensure trust in AI when used for pharmacovigilance
How healthcare organisations may utilise these risk-based frameworks
Outro