The AI will see you now! Powerful Google tool is almost as good as human doctors at answering basic ailment questions
- The tech giant’s software answered medical questions with 92.6% accuracy
- Researchers said it could be used for medical helplines like NHS 111 in the future
Family doctors already have patients turning to ‘Dr Google’ for a diagnosis.
But Google has now developed AI which could perform as well as a doctor when answering questions about ailments.
The tech giant reports in the journal Nature that its latest model, which processes language in a similar way to ChatGPT, can answer a range of medical questions with 92.6 per cent accuracy.
That is on a par with the answers provided by nine doctors from the UK, US and India, who were asked to respond to the same 80 questions.
Researchers at Google say the technology does not threaten the jobs of GPs.
But it does provide detailed and accurate answers to questions such as ‘can incontinence be cured?’ and which foods to avoid if you have rosacea.
That could lead to it being used for medical helplines like NHS 111 in the future, the researchers suggest.
Dr Vivek Natarajan, senior author of a study on the AI programme, called Med-PaLM, said: ‘This programme is something that we want doctors to be able to trust.
‘When people turn to the internet for medical information, they are met with information overload, so they can choose the worst scenario out of 10 possible diagnoses and go through a lot of unnecessary stress.
‘This language model will provide a short expert opinion, which is without bias, which cites its sources and expresses any uncertainty.
‘It could be used for triage, to understand how urgent people’s condition is, and to bump them up the queue for medical treatment.
‘We need this to help when we have a lack of expert physicians, and it will free them up to do their job.’
The Med-PaLM artificial intelligence programme was adapted from PaLM, an earlier Google model that was expert at processing language but had not been specifically trained on health.
Researchers carefully trained the AI further to give it more high-quality medical information and to teach it to communicate uncertainty where its knowledge had gaps.
The programme was trained on doctors’ answers to questions, so it could reason properly and avoid giving information which might cause a patient harm.
It was evaluated against a benchmark called MultiMedQA, which combines six datasets of questions on medical subjects, scientific research and consumer medical queries with HealthSearchQA – a dataset of 3,173 medical questions people had searched for online.
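For readers curious how headline figures like these are arrived at, here is a minimal sketch, in Python, of the kind of tally involved: expert raters judge each model answer, and the percentages are computed across all rated answers. The Judgement class, score function and example ratings are illustrative assumptions, not the study's actual code or data.

```python
# A minimal sketch, not Google's code: tallying rater judgements of model
# answers on a combined benchmark like MultiMedQA. The fields and example
# ratings below are hypothetical placeholders, not real study data.

from dataclasses import dataclass

@dataclass
class Judgement:
    accurate: bool        # raters judged the answer consistent with scientific consensus
    potential_harm: bool  # raters judged that acting on the answer could harm a patient

def score(judgements: list[Judgement]) -> tuple[float, float]:
    """Return (accuracy %, potentially-harmful %) across all rated answers."""
    n = len(judgements)
    accuracy = 100 * sum(j.accurate for j in judgements) / n
    harm = 100 * sum(j.potential_harm for j in judgements) / n
    return round(accuracy, 1), round(harm, 1)

# Three made-up ratings; the paper reports 92.6 per cent accuracy and 5.8 per cent harm.
ratings = [Judgement(True, False), Judgement(True, False), Judgement(False, True)]
print(score(ratings))  # (66.7, 33.3)
```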
Med-PaLM gave answers that risked potential harm to a patient on only 5.8 per cent of occasions, the study reports.
That is also comparable to the rate of potentially harmful answers given by the nine doctors surveyed, which was 6.5 per cent.
There is still a risk of ‘hallucinations’ within the AI – meaning it could make up answers with no data behind them, for reasons engineers do not fully understand – and the technology is still being tested.
But Dr Natarajan said: ‘This technology can answer questions doctors are given in medical exams, which are really hard.
‘It is really exciting and doctors do not need to fear AI is going to take their jobs, as it will instead simply give them more time to spend with patients.’
However James Davenport, Hebron and Medlock Professor of Information Technology at the University of Bath, said: ‘The press release is accurate as far as it goes, describing how this paper advances our knowledge of using Large Language Models (LLMs) to answer medical questions.
‘But there is an elephant in the room, which is the difference between ‘medical questions’ and actual medicine.
‘Practising medicine does not consist of answering medical questions – if it were purely about medical questions, we wouldn’t need teaching hospitals and doctors wouldn’t need years of training after their academic courses.’