
AI chatbot ‘could be better at assessing eye problems than medics’

Artificial intelligence (AI) model ChatGPT could be better at assessing eye problems than doctors, a study has suggested (John Walton/PA)

Artificial intelligence (AI) model ChatGPT could be better at assessing eye problems than doctors, a study has suggested.

The technology could be deployed to triage patients and determine who needs specialist care and who can wait to see a GP, researchers said.

Academics from the University of Cambridge tested the ability of ChatGPT 4 against the knowledge of medics at various stages of their careers, including junior doctors and eye specialists in training.

Some 374 ophthalmology questions were used to train the language model, with its accuracy then tested in a mock exam of 87 questions.

Its answers were compared to those from five expert ophthalmologists, three trainee ophthalmologists, and two unspecialised junior doctors, as well as an earlier version of ChatGPT and other language models Llama and Palm2.

Researchers said language models like ChatGPT “are approaching expert-level performance in advanced ophthalmology questions”.

ChatGPT 4 scored 69%, higher than ChatGPT 3.5 (48%), Llama (32%) and Palm2 (56%).

The expert ophthalmologists achieved a median score of 76%, while trainees scored 59% and junior doctors scored 43%.

Lead author of the study Dr Arun Thirunavukarasu, who carried out the work while studying at the University of Cambridge’s School of Clinical Medicine, added: “We could realistically deploy AI in triaging patients with eye issues to decide which cases are emergencies that need to be seen by a specialist immediately, which can be seen by a GP, and which don’t need treatment.

“The models could follow clear algorithms already in use, and we’ve found that GPT-4 is as good as expert clinicians at processing eye symptoms and signs to answer more complicated questions.

“With further development, large language models could also advise GPs who are struggling to get prompt advice from eye doctors. People in the UK are waiting longer than ever for eye care.”

Researchers said that while language models “do not appear capable” of replacing eye doctors, they could “provide useful advice and assistance to non-specialists”.

Dr Thirunavukarasu, who now works at Oxford University Hospitals NHS Foundation Trust, added: “Even taking the future use of AI into account, I think doctors will continue to be in charge of patient care.

“The most important thing is to empower patients to decide whether they want computer systems to be involved or not. That will be an individual decision for each patient to make.”

The findings of the study have been published in PLOS Digital Health.