Abstract Number: 10

Automated photodamage assessment from 3D total body photography for an objective assessment of melanoma risk

Sam Kahler, Chantal Rutjes, Siyuan Yan, Adam Mothershaw, Zhen Yu, Dilki Jayasinghe, Zongyuan Ge, Mitchell Stark, Monika Janda, Peter Soyer, Brigid Betz-Stablein

Meeting: 2023 Dermcoll

Session Information

Date: -

Session Title: AI in Dermatology

Session Time: -

Aims: Rigorous selection is essential to identify specific cohorts at high risk of melanoma that may benefit from intensive early detection protocols. However, current risk stratification relies on subjective, costly, and experience-dependent clinical assessment and self-report. The integration of novel 3D total body photography systems with data-driven machine learning offers the opportunity to report objectively on phenotypic risk factors. We developed a convolutional neural network (CNN) from 3D total body photography images that objectively reports site-specific photodamage to stratify patients into targeted early detection protocols.

Methods: 3D total body photography of 82 participants at moderate to high risk of melanoma was subdivided into 19,831 image tiles (approximately 242 per participant), each annotated as mild, moderate, or severe for both photodamage and pigmentation using a photonumeric scale. Annotation was conducted by two trained students and two trained laypeople, with agreement validated using Cohen's Kappa. The CNN was developed with a multi-task learning architecture, using pigmentation as a discriminator and uncertainty weighting for the tiered annotator training.
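The abstract does not specify the exact form of the uncertainty weighting used to combine the photodamage and pigmentation tasks. A common formulation is the learned homoscedastic uncertainty weighting of Kendall et al. (2018), sketched below as a minimal, framework-agnostic function; the parameter names (`task_losses`, `log_vars`) are illustrative, not from the original work.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses via learned homoscedastic uncertainty
    (Kendall et al., 2018), one formulation of multi-task uncertainty
    weighting -- an assumption here, not the authors' confirmed loss.

    total = sum_i( exp(-s_i) * L_i + s_i ),
    where s_i = log(sigma_i^2) is a learnable scalar per task;
    a task with high predicted uncertainty is down-weighted.
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Example: equal uncertainty (s_i = 0) reduces to a plain sum of losses.
total = uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])  # -> 3.0
```

In a training loop, the `log_vars` would be registered as trainable parameters alongside the CNN weights, so the balance between the photodamage and pigmentation objectives is learned rather than hand-tuned.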

Results: Annotation agreement on the photonumeric scale was validated with Cohen's Kappa, indicating substantial agreement for all annotator pairings: Student-Student (Kappa 0.68), Student-Layperson (Kappa 0.65), and Layperson-Layperson (Kappa 0.76). Initial CNN results demonstrate an overall classification accuracy of 78%, including specificities of 82.7% for moderate and 81.5% for severe photodamage image tiles.
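The inter-annotator agreement above uses the standard two-rater Cohen's Kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch of the computation (the rater labels below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters labelling the same items:
    kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement
    derived from each rater's marginal label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum(counts_a[lab] * counts_b[lab] for lab in labels) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings on four tiles using the study's three-level scale:
a = ["mild", "mild", "moderate", "severe"]
b = ["mild", "moderate", "moderate", "severe"]
kappa = cohens_kappa(a, b)  # ~0.636, "substantial" on the Landis-Koch scale
```

By the commonly used Landis and Koch benchmarks, values of 0.61-0.80, such as the 0.65-0.76 reported here, are interpreted as substantial agreement.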

Conclusions: Preliminary outcomes suggest the photodamage phenotype can be objectively quantified from 3D total body photography. Images can be linked to the corresponding body region to develop a photodamage body map, and the CNN can be improved with additional training data. Integration of this novel algorithm with 3D total body photography provides an objective assessment of phenotypic risk to assist in the identification of high-risk individuals.