“I’m sorry, all our phone operatives are busy at the moment. If you want to listen to some irritating music while you wait, please press one. If you would prefer a Chopin nocturne please press two, but be advised that it might be dawn by the time we pick up. If you are annoyed at being asked to decide between so many options and want to leave some loud expletives, please press three. Your message will be recorded for training and monitoring purposes so it is probably wise not to leave any contact details. If you have an emergency, please press four, but you might as well know that you will go into the same queue as all the rest. On the other hand, you might prefer to ring back later and chance your luck that we will pick up straight away then.”
OK, that’s a bit unfair: I know that our colleagues across the country are all ultra-busy and that if they could answer the call immediately, they would. If you ring my own number, you’ll get my voice asking you to leave a message. I have to say that my automated response is only five seconds long, so I don’t keep you hanging on for long, but it might take me ten thousand times as long to answer. If you think that is outrageous, do remember that 50,000 seconds is the equivalent of only about 14 hours, so I do hope to get back to you on the same day.
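For the sceptical, the arithmetic above checks out – a throwaway sketch, nothing more:

```python
# A five-second outgoing message, and a reply that takes
# ten thousand times as long, as the text jokes.
message_seconds = 5
reply_seconds = message_seconds * 10_000   # 50,000 seconds
reply_hours = reply_seconds / 3600         # roughly 13.9 hours
print(reply_seconds, round(reply_hours, 1))
```

So "only 14 hours" is, if anything, a slight round-up: 50,000 seconds is a shade under 13.9 hours.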
But it is jolly annoying to be left in a queue, isn’t it? Especially when you’re told you are at position 11! Maybe we need to start using AI to deal with these calls.
Could a device ask what clinical signs are evident and start to arrive at a diagnosis? Or at least prioritise the case which needs to be seen most rapidly while clients are in the phone queue?
I’ve been so impressed with the use of AI in ophthalmology. In human medicine, a retinal specialist at Moorfields Eye Hospital, Pearse Keane, collaborated with AI firm DeepMind to create algorithms for the earlier detection of retinal disease, using hundreds of retinal images and associated information provided by the Moorfields specialists. “Develop a machine that’s better than any of them,” they reckoned. So, they did that very thing.
The Moorfields–DeepMind collaboration said their machine could tell the sex of the patient (Korot et al., 2021). “No! Surely not? Nobody can tell that from looking at the retina.” But as well as the diagnosis, the computer had been given the sex and age of the patients, so it worked out who was male and who was female from some aspect of the retinal image. Nobody can tell exactly what it is recognising that differs between them, but it can – and the patient’s age too, to within three years.
The AI system was better than any of the individual ophthalmologists at diagnosing conditions affecting the retina (De Fauw et al., 2018) – not better than all of them in a room together discussing the tricky cases between them, but it was a tremendous advance in technology. Could we do the same in veterinary medicine? Imaging, I guess, would be a perfect place to start.
Emilie Boissady and colleagues developed an AI system that evaluated thoracic radiographs which, after training on over 22,000 radiographs, was better than general practice veterinarians at identifying conditions such as cardiomegaly, left atrial enlargement, and so on (Boissady et al., 2020), although the group did not ask the AI system to come up with specific diagnoses of conditions.
Monitoring the surface temperature of the udder, claw and eye can give valuable information about mastitis, lameness and general stress and welfare in a herd of dairy cattle. But there is simply too much data for human analysis – here an AI system can be really valuable. The same is true of cervical smear diagnosis in human pathology. Machine learning is revolutionising such routine tasks – all of which could make one nervous about machines taking over our skill sets.
“Machine learning and veterinary pathology: be not afraid!” writes Krista La Perle in a 2019 issue of Veterinary Pathology (La Perle, 2019). Even though computer algorithms can extrapolate patterns and expose correlations, they always need to be trained and they are unable to identify causal links. What they can do is take away the tedium of routine analysis.
What can you think of in your clinic that could be done better by a computer? Just think of tasks such as medication ordering, stock control and calculating client bills that are now all done by computer. What’s next?
Having said that, though, I work with some practices still relying on hard copy, which is great when I want to draw a picture of an eye and its lesions, though not so good when they have to decipher my handwriting – there are pluses and minuses to everything!