A dearth of ophthalmologists, high equipment costs, and time-consuming manual image-scoring methods restrict access to sophisticated diagnosis. In response, we host automated tools for accurate, high-throughput image analysis.

We will also host an online repository of image datasets for running standardized tests on competing algorithms. In particular, anonymized patient records will be annotated and used as standard datasets.


Automated tools

Noting the limitations of generic image-processing tools, we have been developing problem-specific ones. In particular, we have already developed analytic tools for recent imaging modalities, including OCT, fundus photography, and AOSLO, aimed at an accurate understanding of structural changes in the various structures of the eye and the diseases associated with them.

Standardized Evaluation

In current practice, automated algorithms are developed on disparate datasets, which makes comparing their performance a major difficulty. To address this, we propose standardized evaluation criteria. In particular, we have developed a methodology for thorough statistical analysis based on a variety of performance metrics.
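As an illustration of such an analysis, a binary segmentation mask produced by an algorithm can be scored against a manually annotated ground-truth mask on several standard metrics. The sketch below is illustrative only; the specific metric set (sensitivity, specificity, Dice) is a common choice, not necessarily the one fixed by our methodology:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Score a binary algorithm mask against a manual ground-truth mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # pixels correctly marked positive
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # misses
    tn = np.sum(~pred & ~truth)  # correctly marked background
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Toy 4x4 example: algorithm mask vs. manual annotation
pred  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
m = segmentation_metrics(pred, truth)
```

Reporting several metrics together matters because, for example, a mask that over-segments can score perfect sensitivity while its Dice overlap degrades.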

Encouraging Contributors

We encourage other clinicians and image-processing researchers to contribute data and software to the repository and to allow users to use their contributions freely. This would help in (i) covering the majority of diseases, (ii) providing tools for as-yet-unaddressed imaging modalities, and (iii) replacing existing tools with improved ones.

Building Synergy

Automated tools

We have made significant progress in developing image-analysis algorithms, including tools for:

  • Choroid layer (thickness, volume & light/dark) quantification using OCT images
  • Retinal IS/OS damage quantification using en face OCT images
  • 3D visualization and quantification of detachment post DSAEK using AS-OCT images
  • Iris volume quantification using AS-OCT images
  • Automated cone counting in AOSLO images
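To give a flavor of the last item, cone photoreceptors in an AOSLO frame appear as bright spots, so counting them is commonly approached as bright local-maximum detection. The sketch below is a minimal hypothetical illustration of that idea, not our deployed algorithm; practical pipelines add smoothing and minimum-spacing constraints:

```python
import numpy as np

def count_cones(img, threshold=0.5):
    """Count strict local maxima brighter than threshold * max intensity."""
    a = img.astype(float)
    # Pad with -inf so border pixels can be compared with 8 neighbors
    p = np.pad(a, 1, mode="constant", constant_values=-np.inf)
    center = p[1:-1, 1:-1]
    neighbors = np.stack([
        p[:-2, :-2], p[:-2, 1:-1], p[:-2, 2:],
        p[1:-1, :-2],              p[1:-1, 2:],
        p[2:, :-2],  p[2:, 1:-1],  p[2:, 2:],
    ])
    # A cone candidate: strictly brighter than all 8 neighbors and
    # above an intensity cutoff relative to the brightest pixel
    peaks = (center > neighbors.max(axis=0)) & (center > threshold * a.max())
    return int(peaks.sum())
```

On a synthetic frame with four isolated bright pixels on a dark background, this returns 4; real AOSLO data would first be denoised so noise speckle does not register as spurious maxima.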


It is crucial to have standard metrics for evaluating algorithmic performance. To this end, we introduce statistical quotient measures based on various metrics, which help in comparing (i) algorithmic performance against manual performance and (ii) algorithms tested on different datasets. Accordingly, only algorithms that meet the desired performance criteria will be recommended for clinical deployment.
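The text does not define the quotient measures, so the following is purely an illustrative sketch of one plausible form: the algorithm's agreement with a human grader, normalized by the agreement between two human graders on the same image. Under that hypothetical formulation, a quotient near or above 1 indicates the algorithm performs on par with manual annotation:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

def performance_quotient(algo_mask, grader1, grader2):
    """Hypothetical quotient measure: agreement of the algorithm with
    grader 1, normalized by inter-grader agreement. Values >= 1 suggest
    the algorithm matches manual performance."""
    return dice(algo_mask, grader1) / dice(grader2, grader1)
```

Because the quotient is a ratio of like metrics, it is dimensionless, which is what allows comparison across algorithms evaluated on different datasets.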