'Lightspeed' was the name of the project that brought a COVID-19 vaccine to market at record speed. Exceptional projects of this kind, however, should not obscure the fact that the development of new drugs generally progresses at a snail's pace: it takes an average of 13 years from the initial idea to first approval. One of the challenges is the huge volume and variety of decentralised data that has to be analysed in clinical drug studies – all strictly regulated by laws and industry standards.
The challenge of data analysis
The growing volume of data at research-based pharmaceutical companies over the past few years has created demand for greater storage capacity that supports fast and secure data management. Beyond sheer volume, pharmaceutical companies also manage a diverse pool of structured data, such as dosages, blood values or liver values, alongside unstructured data such as photos, X-rays or computed tomography scans. The information gathered from test results often includes personal data about subjects and patients, organisational data and comparisons with existing research results – and, depending on the study programme, can cover hundreds or even thousands of subjects.
Data management is even more complex in Phase IV clinical trials, which aim to detect the longer-term side effects of a newly launched drug and may include hundreds of thousands of participants. In such trials, pharmaceutical companies often cooperate with partners such as universities and contract research organisations, meaning the data is generated across several organisations. This creates specific requirements for data storage, access, distribution and analysis, so that research companies and institutions can work with the data collectively – sometimes across national borders – and in compliance with data protection regulations.
Moreover, to accelerate time to market, pharmaceutical companies must be able to evaluate the large amounts of data generated by various studies as quickly as possible. This is often a challenge, because no single computing platform is equally suited to every large and complex workload. A high-performance computing (HPC) system, or 'supercomputer', for example, can be ideal for pharmacokinetic modelling with certain types of software, while a machine-learning analysis can run significantly faster on other platforms.
New data architecture required
Research organisations therefore need a data architecture that works with different software and computing capacities, so that large amounts of information can be analysed and managed quickly and efficiently on the most suitable platform – whether in the data centre of a pharmaceutical company, at a partner company, or in the cloud.
The data architecture must be scalable and able to logically combine different locations and databases into a single data pool [see image 1]. This allows research teams from different organisations to access the information, while restricting access to data that requires special approvals. This multi-tenant data approach must be accompanied by a holistic security model that ensures the company follows the standards and good-practice guidelines established by the various regulatory agencies.
Image 1 - A so-called data fabric unifies distributed and heterogeneous data sources into a single namespace. It is the hub through which research locations, cloud services and partner companies are integrated into a data cycle – either as suppliers or as recipients of data and analyses. (Source: HPE)
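To make the multi-tenancy idea concrete, the approval-based restrictions described above can be sketched as a catalogue that tags each dataset in the shared pool with the approvals a tenant must hold to read it. This is a minimal illustration only; the dataset paths, approval names and `can_read` helper are hypothetical, not any vendor's API.

```python
# Minimal sketch of multi-tenant access control over a shared data pool.
# All names here are hypothetical illustrations, not a product interface.

DATASETS = {
    "trial_0042/dosage": {"required_approvals": set()},          # structured data, open to all tenants
    "trial_0042/ct_scans": {"required_approvals": {"imaging"}},  # unstructured scans, needs approval
    "trial_0042/patient_ids": {"required_approvals": {"pii"}},   # personal data, tightly restricted
}

def can_read(tenant_approvals: set, dataset: str) -> bool:
    """A tenant may read a dataset only if it holds every approval the dataset requires."""
    required = DATASETS[dataset]["required_approvals"]
    return required <= tenant_approvals  # subset check: all required approvals present

# A partner laboratory holding only the 'imaging' approval:
lab = {"imaging"}
print(can_read(lab, "trial_0042/dosage"))       # True  - no approval required
print(can_read(lab, "trial_0042/ct_scans"))     # True  - 'imaging' approval held
print(can_read(lab, "trial_0042/patient_ids"))  # False - 'pii' approval missing
```

In a real deployment this policy check would sit in the data fabric layer itself, so that every tenant – research site, cloud service or partner company – sees one namespace but only the datasets it is approved for.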
The ability to move applications between different computing environments – to absorb peak loads, for example – can also bring enormous advantages to pharmaceutical companies. It allows applications, together with their libraries and dependencies, to be migrated seamlessly between environments, for example to the cloud and back again. As a result, all partners involved in the pharmaceutical research can access those libraries and dependencies without having to manage each platform individually.
On the cost side, another development represents a great opportunity for pharmaceutical companies: the majority of IT providers are switching to offering hardware resources, such as compute, storage and networking, as a service – not only via the cloud, but also on premises. On-demand licensing models enable companies to obtain resources at short notice and pay for them based on actual usage [see image 2]. This essentially replicates the benefits and agility of the public cloud in the corporate data centre.
Image 2 - IT providers are moving towards providing IT infrastructure as a service not only via the cloud, but also locally at the customer's premises. The resources can be obtained at short notice and are paid for based on usage. This extends the agility of the public cloud into the corporate data centre. (Source: HPE)
In a Phase III trial, for example, in which the new drug is compared with the standard-of-care drug in a double-blind study with 2,000 test subjects, a university clinic can use an in-house high-performance computing system specifically optimised for its internal analysis applications. The system allows several specialised laboratories to enrich certain data pools with their own laboratory results via remote access.
For an AI-based evaluation of tomography scans, the clinic can use temporarily activated capacity provided by the system, while for the statistical comparison of results and the search for abnormalities, it can run a machine-learning application hosted in a private cloud at a local service provider.
In this scenario, two key factors save valuable time in bringing the new drug to market: firstly, the use of the most suitable computing environment in each case, and secondly, the ability to run workloads externally, in parallel, quickly and in accordance with all rules and regulations. At the same time, thanks to on-demand licensing of the required storage and computing power, the company retains full control over costs – without having to make heavy one-off investments.
Faster to market readiness
The right data architecture can significantly shorten the time to market launch – ideally in combination with three other factors: firstly, the flexibility to make applications available at the required location and to avoid overloading local computing systems; secondly, a multi-tenant security architecture for compliant collaboration across the research community; and thirdly, consumption-based payment for computing, storage and network resources. With this approach, a pharmaceutical company can monetise the results of its research work sooner – and capitalise on this model in the long run. The faster market launch of new drugs is not rocket science, but simply the result of innovative sourcing concepts and an intelligent data architecture.
Mike Douse is a Digital Transformation Advisor at Hewlett Packard Enterprise.