A multimodal transformer to fuse images and metadata for skin disease classification

Gan Cai, Yu Zhu, Yue Wu, Xiaoben Jiang, Jiongyao Ye, Dawei Yang

Research output: Contribution to journal › Article › peer-review

45 Scopus citations

Abstract

Skin disease cases are rising in prevalence, and diagnosing skin diseases remains a challenging task in the clinic. Deep learning could help to meet these challenges. In this study, a novel neural network is proposed for the classification of skin diseases. Since the datasets for this research consist of skin disease images and clinical metadata, we propose a novel multimodal Transformer, which consists of two encoders, one for images and one for metadata, and one decoder to fuse the multimodal information. In the proposed network, a suitable Vision Transformer (ViT) model serves as the backbone to extract deep image features. The metadata are regarded as labels, and a new Soft Label Encoder (SLE) is designed to embed them. Furthermore, in the decoder, a novel Mutual Attention (MA) block is proposed to better fuse image features and metadata features. To evaluate the model’s effectiveness, extensive experiments were conducted on a private skin disease dataset and the benchmark ISIC 2018 dataset. Compared with state-of-the-art methods, the proposed model shows better performance and represents an advancement in skin disease diagnosis.
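To make the two-encoder / fusion-decoder idea in the abstract concrete, below is a minimal PyTorch sketch. It is not the paper's implementation: the authors' Soft Label Encoder (SLE) and Mutual Attention (MA) block are not specified here, so a plain embedding table and standard cross-attention stand in for them, and a linear projection stands in for the ViT backbone. All class and parameter names are hypothetical.

```python
# Hypothetical sketch of the abstract's architecture: an image encoder,
# a metadata encoder, and a decoder that fuses the two token streams.
# Generic components stand in for the paper's SLE and MA designs.
import torch
import torch.nn as nn


class MultimodalFusionSketch(nn.Module):
    def __init__(self, num_classes: int, num_meta_labels: int, dim: int = 768):
        super().__init__()
        # Image encoder: stand-in for a ViT backbone that returns patch
        # tokens of shape (batch, num_tokens, dim).
        self.image_encoder = nn.Linear(dim, dim)
        # Metadata encoder: the paper treats metadata as labels (SLE);
        # a plain embedding table is used here instead.
        self.meta_embed = nn.Embedding(num_meta_labels, dim)
        # Fusion decoder: standard cross-attention as a stand-in for the
        # paper's Mutual Attention (MA) block.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, image_tokens: torch.Tensor, meta_ids: torch.Tensor):
        img = self.image_encoder(image_tokens)   # (B, N, dim)
        meta = self.meta_embed(meta_ids)         # (B, M, dim)
        # Metadata queries attend to image tokens (one fusion direction).
        fused, _ = self.cross_attn(query=meta, key=img, value=img)
        return self.head(fused.mean(dim=1))      # (B, num_classes)


# Usage with dummy tensors: batch of 2, 197 ViT tokens, 5 metadata fields.
model = MultimodalFusionSketch(num_classes=7, num_meta_labels=32)
logits = model(torch.randn(2, 197, 768), torch.randint(0, 32, (2, 5)))
print(logits.shape)  # torch.Size([2, 7])
```

The single cross-attention call shows only one fusion direction (metadata attending to image tokens); a mutual scheme like the paper's MA block would presumably also attend in the reverse direction.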

Original language: English
Pages (from-to): 2781-2793
Number of pages: 13
Journal: Visual Computer
Volume: 39
Issue number: 7
DOIs
State: Published - Jul 2023
Externally published: Yes

Keywords

  • Attention
  • Deep learning
  • Multimodal fusion
  • Skin disease
  • Transformer
