Abstract
Recent years have witnessed increasing interest in scene text recognition (STR). Current state-of-the-art (SOTA) approaches adopt a sequence-to-sequence (Seq2Seq) structure to leverage the mutual interaction between images and textual information. However, these methods still struggle to recognize text of arbitrary shapes. The leading cause is the information loss and noise introduced when two-dimensional image features are directly compressed into one-dimensional vectors. This paper proposes a novel framework named hierarchical refined attention network (HRAN) for STR. HRAN obtains refined representations through hierarchical attention, which localizes the precise region of the current character from a two-dimensional perspective. Two novel co-attention mechanisms, stacked and guided co-attention, explicitly leverage the dependency between spatial-aware contextual features and region-aware visual features without extra character annotations. Experiments show that HRAN achieves highly competitive performance compared with SOTA models on both regular and irregular text.
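The abstract's central point, attending directly over a two-dimensional feature map instead of first collapsing it into a one-dimensional sequence, can be illustrated with a minimal sketch. This is not the authors' HRAN code: the additive-attention form, module names, dimensions, and the choice of PyTorch are all our assumptions for illustration.

```python
# Minimal sketch (assumed, not from the paper): additive 2-D attention over a CNN
# feature map, so the decoder attends to a character region without flattening
# the (H, W) grid into a 1-D vector first.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Attention2D(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int = 256):
        super().__init__()
        self.proj_feat = nn.Conv2d(feat_dim, attn_dim, kernel_size=1)
        self.proj_hidden = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Conv2d(attn_dim, 1, kernel_size=1)

    def forward(self, feat_map: torch.Tensor, hidden: torch.Tensor):
        # feat_map: (B, C, H, W) visual features; hidden: (B, hidden_dim) decoder state
        b, c, h, w = feat_map.shape
        f = self.proj_feat(feat_map)                        # (B, A, H, W)
        q = self.proj_hidden(hidden).view(b, -1, 1, 1)      # (B, A, 1, 1), broadcast over H, W
        e = self.score(torch.tanh(f + q))                   # (B, 1, H, W) attention energies
        alpha = F.softmax(e.view(b, -1), dim=1).view(b, 1, h, w)
        glimpse = (alpha * feat_map).flatten(2).sum(dim=2)  # (B, C) region-aware glimpse
        return glimpse, alpha


# Example usage with illustrative shapes (batch of 2, 512-channel 8x32 feature map):
glimpse, alpha = Attention2D(512, 256)(torch.randn(2, 512, 8, 32), torch.randn(2, 256))
print(glimpse.shape, alpha.shape)  # torch.Size([2, 512]) torch.Size([2, 1, 8, 32])
```

Keeping the attention weights as a full (H, W) map is what lets the region-aware glimpse pick out a character's spatial extent; the stacked and guided co-attention between these glimpses and the contextual features is specific to the paper and is not reproduced here.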
| Original language | English |
|---|---|
| Pages (from-to) | 4175-4179 |
| Number of pages | 5 |
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| Volume | 2021-June |
| DOIs | |
| State | Published - 2021 |
| Externally published | Yes |
| Event | 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021, Virtual, Toronto, Canada, 6 Jun 2021 → 11 Jun 2021 |
Keywords
- Co-attention mechanism
- Contextual features
- Hierarchical refined attention network
- Scene text recognition
- Visual