Can AI Bridge the Patient Education Gap in Age-Related Macular Degeneration?

AI can provide clear, consistent explanations of age-related macular degeneration, but patients value the empathy and personalized interaction that only doctors offer. Combining AI with human care enhances patient education.

Categorized in: AI News, Education
Published on: Aug 15, 2025

Can AI Explain Age-Related Macular Degeneration to Patients?

Patient education is a vital yet challenging part of managing age-related macular degeneration (AMD), a progressive retinal disease affecting about 200 million people worldwide, a figure expected to reach nearly 288 million by 2040. AMD appears in two forms: dry (atrophic) and wet (neovascular). Many patients experience both forms simultaneously, which can be confusing and makes clear, straightforward explanation of the disease's nature and progression essential.

Challenges in Educating Patients with AMD

Most AMD patients are older and often have low vision, making traditional printed materials less effective. Central vision loss complicates reading and understanding medical information. Studies show that online AMD materials typically require a ninth-grade reading level, while health materials are ideally written at or below a sixth-grade level. This gap makes comprehension harder and can widen the disconnect between diagnosis and patient understanding.

Clinicians face additional pressure: with only about 10 minutes per patient, it’s tough to deliver thorough education and check if the patient has truly understood their condition. This is where large language models like ChatGPT offer potential. These AI tools can generate simplified explanations quickly, but how patients feel about and understand AI-generated content compared to a doctor’s explanation is less clear.

Study Overview

A small pilot study was conducted with two patients actively treated for wet AMD, both diagnosed with concurrent dry and wet forms. They were chosen to represent different education and health literacy levels: one with limited formal education and the other holding a doctoral degree.

During their clinic visits, a retina specialist first explained their condition verbally, covering causes, disease mechanisms, and outlook. Then, each patient listened to a one-page AI-generated explanation crafted to be clear and jargon-free. This text was read aloud to avoid issues with low vision. Afterward, the patients shared their thoughts through interviews that explored how well they understood each explanation, their trust in the source, emotional response, and overall preference between human and AI education.

Key Findings

  • Lower literacy patient: Found the AI explanation “fairly clear” and appreciated its structure and clarity on the coexistence of dry and wet AMD. However, he preferred the physician’s explanation, valuing the human connection, trust, and personalized interaction. His partner suggested simplifying AI content further and adding FAQs to help patients with follow-up questions. They also favored interactive AI formats like videos over text alone.
  • Highly educated patient: Praised the AI explanation for its thoroughness and detail, which surpassed the doctor’s summary. He liked having time to absorb the information without feeling rushed. Both he and his wife, also treated for AMD, stressed that human interaction must remain central, with AI serving as a helpful supplement. They noted that AI explanations alone lack the dynamic, responsive dialogue essential for full understanding.

Across both cases, the common message was clear: AI can improve the consistency and depth of information, but empathy, trust, and real-time engagement from healthcare professionals remain crucial.

Implications for Patient Education

The study highlights how education and health literacy shape patient responses to AI-generated content. In this pilot, the patient with lower health literacy preferred concise, personal communication, while the more educated patient appreciated detailed, structured information. This underlines the need to adapt educational approaches to each patient's background and needs.

Delivery methods matter just as much. Patients with visual impairment struggle with written materials, even if simplified. Audio formats or interactive voice tools might be better suited for them.

Despite simplification efforts, AI explanations were still dense and needed real-time clarification. This suggests that current AI language models might overestimate patients’ familiarity with medical concepts. Clinicians should be ready to support and clarify AI content rather than rely on it alone.

Future research should explore larger, more diverse groups and test different AI content formats, like bullet-point FAQs versus narratives, to find what works best for different literacy levels. Customizing AI prompts and incorporating screening tools to gauge patient literacy could improve effectiveness. Expanding this work to other eye diseases could also be valuable.

Conclusion

AI-generated explanations for dry and wet AMD offer a promising supplement to patient education but must be used carefully. They add depth and consistency but cannot replace the human touch that builds trust and empathy. To maximize patient understanding and engagement, educational materials should match individual literacy and vision needs, and physicians should remain actively involved.

Integrating AI into retina care should focus on enriching the physician-patient relationship, empowering patients with knowledge while preserving the personal connection essential for effective medical care.
