Hierarchical refined attention for scene text recognition

Min Zhang, Meng Ma, Ping Wang

Research output: Contribution to journal › Conference article › peer-review

4 Scopus citations

Abstract

Recent years have witnessed increased interest in scene text recognition (STR). Current state-of-the-art (SOTA) approaches adopt a sequence-to-sequence (Seq2Seq) structure to leverage the mutual interaction between images and textual information. However, these methods still struggle to recognize text in arbitrary shapes. The leading cause is the information loss and noise introduced when two-dimensional image features are directly compressed into one-dimensional vectors. This paper proposes a novel framework named hierarchical refined attention network (HRAN) for STR. HRAN obtains refined representations with hierarchical attention, which localizes the precise region of the current character from a two-dimensional perspective. Two novel co-attention mechanisms, stacked and guided co-attention, explicitly leverage the dependency between spatial-aware contextual features and region-aware visual features without extra character annotations. Experiments show that, on both regular and irregular text, HRAN achieves highly competitive performance compared to SOTA models.
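To make the core idea concrete, the sketch below shows a generic 2-D additive attention operator of the kind such decoders build on: instead of flattening the feature map into a 1-D sequence, attention scores are computed per spatial location and conditioned on the decoder's contextual state. This is a minimal, hypothetical illustration, not the authors' HRAN implementation; the module name, dimensions, and the additive scoring function are assumptions, and the paper's stacked and guided co-attention layers would be composed on top of an operator like this.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoDAttention(nn.Module):
    """Additive 2-D attention over an H x W visual feature map (illustrative sketch).

    Scores are computed per spatial location so the decoder can focus on the
    region of the current character without first collapsing the feature map
    into a 1-D vector.
    """

    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.proj_feat = nn.Conv2d(feat_dim, attn_dim, kernel_size=1)   # project visual features
        self.proj_hidden = nn.Linear(hidden_dim, attn_dim)               # project contextual state
        self.score = nn.Conv2d(attn_dim, 1, kernel_size=1)               # scalar score per location

    def forward(self, feat_map, hidden):
        # feat_map: (B, C, H, W) region-aware visual features
        # hidden:   (B, D)       spatial-aware contextual state of the decoder
        B, C, H, W = feat_map.shape
        e = torch.tanh(self.proj_feat(feat_map)
                       + self.proj_hidden(hidden)[:, :, None, None])     # (B, A, H, W)
        alpha = F.softmax(self.score(e).view(B, -1), dim=-1).view(B, 1, H, W)
        glimpse = (alpha * feat_map).sum(dim=(2, 3))                     # (B, C) attended vector
        return glimpse, alpha


# Usage (shapes only, hypothetical sizes): a 512-channel 8x25 feature map
# and a 256-d decoder state produce one attended glimpse per image.
attn = TwoDAttention(feat_dim=512, hidden_dim=256, attn_dim=256)
glimpse, alpha = attn(torch.randn(2, 512, 8, 25), torch.randn(2, 256))
```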

Original language: English
Pages (from-to): 4175-4179
Number of pages: 5
Journal: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Volume: 2021-June
DOIs
State: Published - 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 - 11 Jun 2021

Keywords

  • Co-attention mechanism
  • Contextual features
  • Hierarchical refined attention network
  • Scene text recognition
  • Visual
