AI in drug discovery has arrived but how can pharma use it as a force for good?
For the pharmaceutical industry, the rapid growth in AI-enabling technologies has opened up swathes of opportunities, but it has also created challenges.
While ‘moving fast and breaking things’ may be an acceptable strategy for Mark Zuckerberg in the technology sector, the nature of healthcare puts significant constraints and obligations on the industry.
AI drug discovery may be considered similar to ‘data mining’. Data mining in clinical trials is when one retrospectively searches for correlations in subsets of data from an investigation.
Adopting this approach can lead to spurious conclusions with respect to an agent’s efficacy. To justify conclusions drawn from unexpected correlations, you often need to undertake an experiment designed specifically to test the hypotheses emerging from the initial work.
AI can draw on massive amounts of data from prior experiments, run countless simulations, and select a patient cohort in which an otherwise ineffective drug appears effective. It is essentially data mining in the reverse order to how it has been done in the past. Let’s call it ‘a priori data mining’.
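To see why subgroup searching can make an ineffective drug look effective, consider a minimal, purely illustrative simulation. All numbers and variable names below are hypothetical: the simulated ‘drug’ has no real effect, yet searching enough patient subgroups still surfaces one with an apparent benefit.

```python
import random

random.seed(0)

# Hypothetical null trial: the drug has NO real effect.
# Each patient has 10 binary covariates and a binary outcome,
# with the same 50% response rate in both arms.
N = 400
patients = [
    {
        "treated": random.random() < 0.5,
        "covars": [random.random() < 0.5 for _ in range(10)],
        "responded": random.random() < 0.5,  # independent of treatment
    }
    for _ in range(N)
]

def response_rate(group):
    """Fraction of patients in the group who responded."""
    return sum(p["responded"] for p in group) / len(group) if group else 0.0

# Search every single-covariate subgroup for an apparent treatment effect.
best_gap, best_subgroup = 0.0, None
for i in range(10):
    for value in (True, False):
        sub = [p for p in patients if p["covars"][i] == value]
        treated = [p for p in sub if p["treated"]]
        control = [p for p in sub if not p["treated"]]
        gap = response_rate(treated) - response_rate(control)
        if gap > best_gap:
            best_gap, best_subgroup = gap, (i, value)

print(f"best subgroup {best_subgroup}: apparent effect {best_gap:.1%}")
```

Run forwards, this is classic retrospective data mining; an AI system doing it ‘a priori’ simply performs the same search before the confirmatory trial is designed, which is why the resulting cohort choice demands independent validation.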
The question arising concerns the legitimacy of AI-generated study designs. Are we gaming the system or just being smart? We think of it like counting cards: technically you are not breaking any rules, but you will still be thrown out of the casino.
By stacking the deck in your favour, you are more likely to see your drug being prescribed, but you must hope that regulators and doctors don’t catch on if you are selling snake oil.
Occasions do exist when data mining is a good first step in drug discovery. AI is very effective when going back through ‘failed’ trials and clinical experience to consider whether a drug could be repurposed. It must, however, be followed up with further investigation.
Moral maze
Since AI can’t yet make decisions with moral or intellectual integrity, it is the managers and scientists of the pharmaceutical companies who need to remain true to the ethos of medical research.
Removing the human from the drug discovery loop raises regulatory quandaries.
Research scientists must state why they did what they did, how they did it, and what they found. With AI, this will become harder to communicate as the software takes on a ‘life’ of its own. How can we explain what an AI programme has been up to in the absence of human supervision?
The ghost in the machine is elusive and looking under the hood is a hugely complicated task. Regulators may evolve to become experts in understanding AI research, but those regulators also have a responsibility to ensure their approval decisions are communicable.
It’s therefore essential that there is an improved understanding of the strengths and challenges in AI research approaches.
We’re comparing two immensely powerful forms of intelligence, neither of which is fully understood by the other: human brains versus computational brawn. The answer today seems to be to utilise a combination of both.
AI can generate ideas and humans can test these concepts against their own understanding of biology, before applying them practically across pre-clinical or clinical trials.
Dr Joe Taylor is Principal at Candesic and Dr Leonid Shapiro is Managing Partner at Candesic. Go to candesic.com