AI retinal scanning offers vision of the future for care of diabetics

Technology is poised to revolutionise eye screening and the role of ophthalmologists


In Ireland, 6.5 per cent of adults aged between 20 and 79 live with diabetes. Unfortunately, the condition is the leading cause of blindness among working-age adults worldwide.

Diabetics can register for free retinal scans to catch the early signs of retinopathy, and they are encouraged to attend regularly. There are no publicly available figures for the Republic but, in the United States, compliance with annual ophthalmologist screening is quite low: between 33 and 50 per cent.

What has this got to do with technology? Microsoft's chief medical officer Dr Simon Kos says artificial intelligence (AI) has the potential to improve both compliance and the accuracy of retinal imaging: "[In the US,] patients have to turn up to the ophthalmologist's office and it takes two to three hours because they dilate their eyes with these drops and you can't drive afterwards. From a patient experience perspective, it's a real inconvenience, hence the poor compliance.

“I’ve been working with a business partner here in the US called Iris [Intelligent Retinal Imaging System] and they have created an ophthalmic visit in a box. It’s actually a combined hardware and software appliance; you pop your chin into a chin strap and a little voice guides you through taking a perfect picture of the back of your eye in a few minutes.”

Dataset of imagery

Instead of being examined by a single ophthalmologist, the image is sent to the cloud, where a pool of ophthalmologists can carry out a quick triage and return it with a report within hours. For the past few years this has been happening in conjunction with AI: the dataset of images interpreted by ophthalmologists has been used to train a machine-learning model.
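
What does "training a machine-learning model" on such a dataset look like in practice? Below is a minimal sketch, assuming a hypothetical folder of fundus photographs already labelled by ophthalmologists: a network pretrained on everyday images is fine-tuned to sort scans into "retinopathy" and "no retinopathy". The file layout, model choice and training details are illustrative assumptions, not the workings of Iris's or DeepMind's actual systems.

```python
# Minimal sketch of training a retinal-image classifier, assuming a
# hypothetical folder fundus_images/{no_retinopathy,retinopathy}/*.jpg
# labelled by ophthalmologists. Illustrative only -- not Iris's system.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("fundus_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained network and swap in a two-class output layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```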

“This is a different model of care: no one is making an appointment with a single ophthalmologist. [In the near future] you could turn up to this appliance, which might be in a GP’s office, a pharmacy or even a supermarket, and an interpreted image is sent directly back to your GP and to you, for you to discuss and monitor progress,” explains Kos.

When Iris started out last year, it correctly interpreted these retinal scans about 85 per cent of the time. An ophthalmologist, by comparison, has an accuracy level of 92 per cent. Aha, I hear you say: AI will never best humans, so why bother using machines when people are better?

Iris revised its algorithm in February and accuracy is now up to 97 per cent. That means it’s faster, cheaper and more accurate to get a computer to read a retinal image than it is to get a human to do it.

“The FDA [US Food and Drug Administration] in April just cleared the first device that reports diabetic retinal images without a doctor in the loop,” adds Kos.

Similarly, Google DeepMind announced earlier this week that it has developed an AI system capable of diagnosing, and recommending referral for, more than 50 retinal diseases. Like Iris, DeepMind’s deep-learning model was trained on retinal images that had already been diagnosed, and it now has a high level of accuracy: 94 per cent. It is set to be rolled out to 30 hospitals across Britain thanks to DeepMind’s partnership with Moorfields Eye Hospital in London.

The time it takes for an ophthalmologist to analyse these kinds of scans combined with the sheer workload on a daily basis can “lead to lengthy delays between scan and treatment – even when someone needs urgent care”, explained DeepMind’s co-founder and head of applied AI, Mustafa Suleyman.

With both Microsoft and Google in the game, artificial intelligence is set to have a dramatic effect on the detection and treatment of retinal disease worldwide.

And so we have it – the beginning of AI-enabled healthcare that, in some cases, takes the medical professional out of the loop. Surely there will be some pushback from ophthalmologists?

“The first reaction was: ‘Computers are going to take my job; what about me?’,” says Kos. “I think that’s evolving into a more mature realisation that ‘AI is not going to take my job: it is the doctors working with AI that are going to take my job if I don’t start using it too’.”

Repetitive tasks

Similarly, Microsoft Research in Cambridge has a project called InnerEye that uses artificial intelligence to do in seconds a job that typically takes a radiographer or oncologist hours to get through. As Kos points out, this is taking what is essentially administration – manually creating a 3D model of a patient’s tumour by marking up hundreds of 2D scans – and freeing up the medical professional to spend more time with their patients.
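
To get a feel for the job being automated here, consider a minimal sketch: run a segmentation step over every 2D slice of a scan and stack the per-slice masks into a 3D tumour volume. The intensity threshold below is a stand-in assumption for a trained segmentation network; none of this is Microsoft's actual InnerEye code.

```python
# Sketch of automated tumour delineation: segment each 2D slice, then
# stack the masks into a 3D volume. The "model" is a crude threshold
# standing in for a trained network -- an assumption, not InnerEye.
import numpy as np

def segment_slice(slice_2d: np.ndarray) -> np.ndarray:
    # Placeholder: flag unusually bright voxels as "tumour"
    threshold = slice_2d.mean() + 2 * slice_2d.std()
    return (slice_2d > threshold).astype(np.uint8)

def build_tumour_volume(slices: list[np.ndarray]) -> np.ndarray:
    # Marking up hundreds of slices by hand takes hours;
    # stacking model outputs takes seconds
    return np.stack([segment_slice(s) for s in slices], axis=0)

# Hypothetical usage, with random data standing in for a 300-slice scan
scan = [np.random.rand(512, 512) for _ in range(300)]
volume = build_tumour_volume(scan)
print(volume.shape)  # (300, 512, 512): a 3D model of the marked region
```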

However, while AI can eliminate time-consuming and repetitive tasks in the healthcare industry, it may also leave some specialists rethinking their day-to-day job, says Kos, who is not without empathy: he trained in emergency medicine, and his parents are both radiographers.

“I think there are whole diagnostic areas – radiography, pathology, dermatology, ophthalmology – those areas of diagnostics are where artificial intelligence is going to really shine and those practitioners are going to have to figure out what their jobs look like if they’re no longer on the diagnostic side of things.”

While these are easily demonstrable examples of AI providing high-tech solutions for treating chronic and acute illness, hospitals still face the on-the-ground task of dealing with paperwork and admissions.

This is where technology can change what Kos refers to as “a last-generation medical system that was organised to deal with people with infectious disease and battlefield trauma”.

“Now we’ve got this epidemic of chronic disease and we’re even turning acute diseases like cancer into chronic disease as well. We’re poorly geared to deal with it. It’s a high-volume mode, as opposed to the more acute, intense mode our health system is geared around.”

Fast-tracking

The areas in which Kos sees machine learning and neural-net algorithms making the most difference right now are reducing admissions and re-admissions for chronic disease, and flagging deteriorating patients so they can be helped before their condition takes a turn for the worse.

AI can also play a role in fast-tracking newly discovered treatments and interventions. A study in the British Medical Journal found a delay of 14 years between the point when evidence is established in a journal paper and when it translates into actual medical treatment.

It took 44 years for aspirin to come into accepted use in treating patients who have suffered a heart attack. “Now it just seems so basic and almost negligent not to do it,” says Kos.

One reason it takes so long to evaluate interventions outlined in medical journals is the sheer amount of medical data published – a couple of million articles a year. To really stay current, Kos says, a person would need to read non-stop for 28 hours a day.

Microsoft's Project Hanover is hoping to help with this problem: "If you're a medical researcher working in, let's say, the field of genomics, in order to get current with the corpus of medical literature, you will need to do a meta-analysis, which can take weeks, months, sometimes even years."

"Hanover is a natural language processing engine that you can point at [medical libraries such as] PubMed or the Cochrane database and it will go looking for a keyword, for example 'brca1 gene' (a gene mutation linked to breast cancer risk).

"You can then ask it to perform a sentiment analysis for a query such as: 'Does brca1 reduce or inhibit Tamoxifen [medication used to prevent/treat breast cancer]?' and it will carry out an analysis for you and return that information."

All of this sounds genuinely paradigm-shifting and it will hopefully help save lives. But the question many medical professionals will ask is: if I’m not obsolete, do I need to retrain?

“I do think we’ll need to retrain,” says Kos. “As we progress down the track, models of care will and should change with artificial intelligence as an adjunct.

“In general our medical systems, despite being evidence-based, rely heavily on memory-based care. I think this needs to adapt to a more problem-solving type approach [to diagnosis and treatment] where you start to examine the latest evidence in really contextual ways as you’re looking after that patient – I think that’s the model,” says Kos.

Sitting down with patients

One fear patients have is that AI systems will replace doctors, leaving them to interact with machines rather than people. Kos says this is exactly where humans should be focusing their efforts: sitting down with patients, explaining the process and talking them through their treatment options while the technology oils the gears in the background.

“There used to be a bit of pushback once upon a time about cookbook medicine [automated care in place of personalised care]; ‘I’m a human, I need to make decisions and no computer can know all of the facts’.

“And while that’s true, as a society, we’ve gotten used to computer assistance. You drive with the GPS in your car. You’re still the driver and you make decisions but it helps you understand where you’re going, or if you’ve veered off track. We use spellcheckers routinely when we’re typing things up.”

We don’t really think about these everyday things as AI agents guiding us but that is what they are.

"And this new partnership is where medicine is headed if we are to make it easier for patients to attend appointments, improve diagnostic accuracy, reduce unnecessary admissions and re-admissions, and generally tackle the epidemic of chronic diseases which the World Health Organisation says currently account for almost 60 per cent of all deaths globally."