Large language models (LLMs) show great promise in assisting clinicians in general, and in ophthalmology in particular, through knowledge synthesis, decision support, accelerated research, enhanced education, and improved patient interactions. Specifically, LLMs can rapidly summarize the latest literature to keep clinicians up to date. They can also analyze patient data to highlight crucial insights and recommend appropriate tests or referrals. LLMs can automate tedious research tasks such as data cleaning and literature reviews. As AI tutors, they can fill knowledge gaps and assess competency in trainees. As chatbots, they can provide empathetic, personalized responses to patient inquiries and improve satisfaction. The visual capabilities of multimodal LLMs such as GPT-4 allow them to assist the visually impaired by describing their surroundings. However, significant ethical, technical, and legal challenges surrounding the use of LLMs remain to be addressed, including privacy, fairness, robustness, attribution, and regulation. Ongoing oversight and refinement of models are critical to realizing benefits while minimizing risks and upholding responsible AI principles. If carefully implemented, LLMs hold immense potential to push the boundaries of care, discovery, and quality of life for ophthalmology patients.
Keywords: Clinical decision-making; Large language models (LLMs); Ophthalmology; Ethical concerns; Legal concerns; Patient care; Visually impaired.