Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for Ophthalmic Images in Glaucoma

Min Shi, Anagha Lokhande, Mojtaba S. Fazli, Vishal Sharma, Yu Tian, Yan Luo, Louis R. Pasquale, Tobias Elze, Michael V. Boland, Nazlee Zebardast, David S. Friedman, Lucy Q. Shen, Mengyu Wang

Research output: Contribution to journal › Article › peer-review

3 Scopus citations


Ophthalmic images, along with their derivatives like retinal nerve fiber layer (RNFL) thickness maps, play a crucial role in detecting and monitoring eye diseases such as glaucoma. For computer-aided diagnosis of eye diseases, the key technique is to automatically extract meaningful features from ophthalmic images that can reveal the biomarkers (e.g., RNFL thinning patterns) associated with functional vision loss. However, representation learning from ophthalmic images that links structural retinal damage with human vision loss is non-trivial, mostly due to large anatomical variations between patients. This challenge is further amplified by the presence of image artifacts, commonly resulting from image acquisition and automated segmentation issues. In this paper, we present an artifact-tolerant unsupervised learning framework called EyeLearn for learning ophthalmic image representations in glaucoma cases. EyeLearn includes an artifact correction module to learn representations that optimally predict artifact-free images. In addition, EyeLearn adopts a clustering-guided contrastive learning strategy to explicitly capture the affinities within and between images. During training, images are dynamically organized into clusters to form contrastive samples, which encourage learning similar or dissimilar representations for images in the same or different clusters, respectively. To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection with a real-world dataset of ophthalmic images from glaucoma patients. Extensive experiments and comparisons with state-of-the-art methods confirm the effectiveness of EyeLearn in learning optimal feature representations from ophthalmic images.
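The abstract does not give the exact loss formulation, but the clustering-guided contrastive strategy it describes — treating cluster assignments as pseudo-labels so that same-cluster images attract and different-cluster images repel — can be sketched along the lines of a supervised-contrastive objective. The function name and the NumPy-based formulation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cluster_contrastive_loss(emb, clusters, temperature=0.1):
    """Contrastive loss using cluster ids as pseudo-labels (illustrative sketch).

    emb:      (n, d) array of image embeddings
    clusters: (n,) integer array of dynamically assigned cluster ids
    """
    # L2-normalize so similarities are cosine similarities
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature

    n = len(clusters)
    not_self = ~np.eye(n, dtype=bool)

    # Log-softmax over all other samples, with a max-shift for stability
    logits = sim - sim.max(axis=1, keepdims=True)
    exp = np.exp(logits) * not_self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))

    # Positives: other samples in the same cluster as the anchor
    same_cluster = (clusters[:, None] == clusters[None, :]) & not_self
    pos_counts = same_cluster.sum(axis=1)
    valid = pos_counts > 0  # skip anchors whose cluster is a singleton

    per_anchor = (log_prob * same_cluster).sum(axis=1)[valid] / pos_counts[valid]
    return -per_anchor.mean()
```

Under this sketch, embeddings whose geometry agrees with the cluster assignments yield a lower loss than embeddings at odds with them, which is the signal that drives same-cluster representations together during training.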

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Journal of Biomedical and Health Informatics
State: Accepted/In press - 2023


Keywords

  • Artifact correction
  • Feature extraction
  • Glaucoma
  • Image segmentation
  • Ophthalmic image
  • Optical distortion
  • Representation learning
  • Retina
  • RNFLT map
  • Task analysis
  • Visualization


