Research organisations are sitting on mountains of hidden data that could be used to improve research efforts across the sector – if they only shared it

Preclinical data isn’t commonly shared. It is the data that precedes and supports clinical trials, including the in vivo and in vitro studies that determine safe doses and safety profiles before a compound can be tested in humans. Sharing this data would accelerate drug discovery in several ways: through improved prediction of in vivo toxicity, by raising the quality of drug candidates, and by reducing both drug attrition during development and the number of animal experiments. 

In addition to boosting efficiency and cutting costs in the early stage of drug discovery, shared preclinical data would provide a repository of data that could be integrated with other data sources to deliver superior, cheaper drugs for patients.

There are few barriers to sharing preclinical data. There are no patient confidentiality issues, and the data does not require peer review, copy editors or lengthy write-ups. Furthermore, as 84 percent of preclinical work is publicly funded, there is an ethical requirement for the data to be shared in the public domain, particularly when governments have also contributed funds to facilitate data sharing. 

One example of government funding for preclinical data sharing is the eTOX project, which has been jointly funded by the Innovative Medicines Initiative and EFPIA partners. The database currently contains more than 6,300 preclinical studies and has supported more than 50 publications. Perhaps the most notable feature for the future of data sharing, however, is that eTOX has received a report formatted in SEND (Standard for Exchange of Nonclinical Data) and is developing a SEND format converter. 

SEND – the common model for presenting data from nonclinical studies – is significant because it addresses the biggest obstacle to preclinical data sharing: the lack of standardisation. Here, the US Food and Drug Administration (FDA) is leading the way, mandating that preclinical data for new drug applications, biologics licence applications and investigational new drug (IND) applications be SEND-formatted from December 2016. This mandate could be a game changer for data sharing; although the FDA does not currently plan to share the data, and will only collect a small fraction of preclinical data (roughly 1 in 250 compounds is presented for clinical trials), SEND will enforce standardisation. 

Companies such as GSK, Pfizer, Johnson & Johnson, and more recently AstraZeneca have also made welcome moves to improve data transparency. GSK has contributed £1 million to the public-private research initiative Centre for Therapeutic Target Validation, which aims to harness a variety of information, including genomics, proteomics, chemistry and disease biology, to accelerate drug discovery.

Shared preclinical data also partly addresses the need for targeted, cheaper drugs, for consideration of comorbidities and drug interactions, for the repurposing of drugs and for efficient drug pipelines. The FDA has acknowledged its value by endorsing a less stringent route to programme design for biosimilar INDs. 

Considering these initiatives, is there any reason to doubt that preclinical data sharing will succeed? History suggests there is. Although the registration of clinical trials was mandated 18 years ago, estimates today suggest that fewer than half of clinical trial results are published. The rationale for sharing clinical trial data is the same as for preclinical data, with one exception: in clinical trials it is humans, not animals, who bear the burden of experimentation. Given this, if we fail to exploit human data successfully, what hope is there for in vitro and in vivo data?

Sophia Turner is executive advisor in healthcare data and analytics at KPMG