What’s up, AI doc?

23rd Apr 2018

Published in PharmaTimes magazine - May 2018

When it comes to diagnosis, AI offers hard-pressed clinicians a way of coping with patient demand – as long as accuracy carries on improving

In early April Babylon Healthcare, a UK company that uses AI to advise patients on the care they need, signed a deal with internet giant Tencent to develop a WeChat app that will allow it to move into the huge Chinese healthcare market. WeChat has 1 billion users, and some of them are already getting used to using AI to diagnose their symptoms, whether at home or within hospitals. China is not alone: AI diagnosis tools are developing rapidly worldwide, helped by regulatory changes in many countries that have turned them into recognised medical devices.

In the UK, for example, Babylon already works with the 111 service in some regions, as a first port of call for people worried about their health. It also operates in Ireland and Rwanda, and is moving into Saudi Arabia. Ada Health, a UK-German company that started by offering AI tools for clinicians, is also moving into the patient interface market, offering AI-based diagnosis and primary care services via its apps.

In China, meanwhile, Tencent, like its rival Alibaba and IBM’s Watson, has already developed several health apps, often operating via chat or social media interfaces. It is also working with Chinese hospitals such as Guangzhou Second Provincial Central Hospital to create AI tools. In all, China has some 131 companies working on applying AI to the country’s healthcare sector, according to a 2017 report by Beijing consultancy Yiou Intelligence. They include names such as PereDoc and iFlytek.

At a hospital level, too, AI-based tools are increasingly being used to help clinicians and laboratory staff analyse symptoms or vet scans and samples. Indeed, around one-third of AI health companies are estimated to focus on diagnostics, particularly in areas such as oncology and rare diseases. In 2017, the US Food and Drug Administration issued its first-ever approval for a machine learning app: Arterys Cardio DL, which uses machine learning to analyse MRI images of the heart. Ophthalmology is another area where AI is making strides, with companies such as Google and IDx offering tools to diagnose glaucoma, cataracts and other problems.

Getting it right

Indeed, in many ways the hard data provided in diagnostics is more suitable for machine learning than the soft data provided by anxious patients. Most patient apps are currently only a step away from online symptom-checking sites or helpline questionnaires, offering a first port of call. But AI’s promise lies in the accuracy of its diagnoses, and in how rapidly that accuracy can improve. Building AI tools generally involves the programmes scanning thousands of medical records to learn the patterns linking symptoms to diagnoses, so that they can question patients thoroughly. Many apps can also analyse patient photographs as well as written or spoken answers, to increase their chances of offering good advice.
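At its simplest, the pattern-learning described above can be pictured as counting how often symptoms and diagnoses co-occur across past records, then scoring candidate diagnoses for a new patient. This is only an illustrative sketch with invented toy data, not how any of the companies named here actually build their systems:

```python
from collections import Counter, defaultdict

# Toy "medical records": each pairs a set of symptoms with a known diagnosis.
# All symptoms and diagnoses here are invented for illustration.
records = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"sneezing", "runny nose"}, "cold"),
    ({"cough", "sneezing"}, "cold"),
    ({"headache", "nausea"}, "migraine"),
]

# "Training": count how often each symptom co-occurs with each diagnosis.
pattern = defaultdict(Counter)
for symptoms, diagnosis in records:
    for s in symptoms:
        pattern[s][diagnosis] += 1

def suggest(symptoms):
    """Score each diagnosis by its co-occurrence with the reported symptoms."""
    scores = Counter()
    for s in symptoms:
        scores.update(pattern.get(s, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(suggest({"fever", "cough"}))  # flu
```

Real diagnostic systems replace these raw counts with statistical models trained on vastly larger datasets, but the underlying idea — inferring likely diagnoses from patterns in past records — is the same.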

The rate of progress does appear to be impressive. A 2015 study of symptom checkers in the British Medical Journal found that they came up with the correct diagnosis only 34 percent of the time – though they were better at advising on follow-up care. Now, many app developers are claiming accuracy of over 90 percent – in line with the best doctors and nurses, and arguably higher than the non-clinical staff often used for 111-style helplines. There are very few independent studies to confirm these claims, however – and still plenty of scepticism about whether AI diagnosis is really safe and reliable.

Augmented humans

As a result, most regulators (including the MHRA) still see such devices as offering initial guidance that needs to be confirmed by a human doctor. Most app providers, including Ada and Babylon, also offer patients the opportunity to book consultations with doctors, and use clinicians to oversee their apps. In hospitals, meanwhile, AI technology is generally used to augment, rather than replace, clinicians, who can correct mistakes – and in the process offer the AI a further opportunity to improve. Indeed, such feedback loops are essential if the AI is to improve as needed.

Given this, fears that AI will soon replace clinicians appear to be overblown. Nevertheless, if improved accuracy is one promise of AI, the other is that it will help hard-pushed healthcare systems to cope with surging demand. A government report leaked in January suggests that policy makers are expecting AI-driven technology to be used throughout the 111 service within the next two years, to help deal with the increasing volume of calls. China, meanwhile, sees diagnostic AI as a way to compensate for a lack of well-qualified medical staff as it expands its healthcare system.

There also remains a big question over who will be responsible if the AI makes the wrong diagnosis. For now, the onus lies with the human overseeing the diagnosis process. In future, though, AI processes may well outstrip the human capacity to understand or monitor them. In those cases the legal and moral situation will become muddier, and programmers and AI companies may find themselves liable. There will undoubtedly be cases where a mistaken AI diagnosis causes deaths – as occasionally happens with human doctors too.

Ana Nicholls is director, industry operations at the Economist Intelligence Unit
