
Low Jian He
Research interests
My research interests lie within the domains of Computer Vision and Natural Language Processing.
Publications
Gloss-free Sign Language Translation (SLT) has advanced rapidly, achieving strong performance without relying on gloss annotations. However, these gains have often come with increased model complexity and high computational demands, raising concerns about scalability, especially as large-scale sign language datasets become more common. We propose a segment-aware visual tokenization framework that leverages sign segmentation to convert continuous video into discrete, sign-informed visual tokens. This reduces input sequence length by up to 50% compared to prior methods, resulting in up to 2.67× lower memory usage and better scalability on larger datasets. To bridge the visual and linguistic modalities, we introduce a token-to-token contrastive alignment objective, along with dual-level supervision that aligns both language embeddings and intermediate hidden states. This improves fine-grained cross-modal alignment without relying on gloss-level supervision. Our approach notably exceeds the performance of state-of-the-art methods on the PHOENIX14T benchmark, while significantly reducing sequence length. Further experiments also demonstrate our improved performance over prior work under comparable sequence lengths, validating the potential of our tokenization and alignment strategies.
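To give a feel for the token-to-token contrastive alignment described above, here is a minimal InfoNCE-style sketch in PyTorch. The tensor shapes, the temperature value, and the symmetric two-way formulation are assumptions for illustration, not the paper's exact objective or dual-level supervision.

import torch
import torch.nn.functional as F

def token_contrastive_loss(visual_tokens, text_tokens, temperature=0.07):
    # visual_tokens: (N, D) visual token embeddings
    # text_tokens:   (N, D) paired language token embeddings
    # Each visual token is pulled toward its paired text token and
    # pushed away from the other text tokens in the batch.
    v = F.normalize(visual_tokens, dim=-1)
    t = F.normalize(text_tokens, dim=-1)
    logits = v @ t.T / temperature                      # (N, N) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)  # matching pairs on the diagonal
    # Symmetric loss: visual-to-text and text-to-visual directions
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Usage with random embeddings (hypothetical batch of 32 tokens, dim 256)
loss = token_contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))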
This work tackles the challenge of continuous sign language segmentation, a key task with huge implications for sign language translation and data annotation. We propose a transformer-based architecture that models the temporal dynamics of signing and frames segmentation as a sequence labeling problem using the Begin-In-Out (BIO) tagging scheme. Our method leverages HaMeR hand features, complemented with 3D angles. Extensive experiments show that our model achieves state-of-the-art results on the DGS Corpus, while our features surpass prior benchmarks on the BSL Corpus.
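As a rough illustration of framing segmentation as BIO sequence labeling, the sketch below runs a small transformer encoder over per-frame features and classifies each frame as Begin, In, or Out. The feature dimension, layer sizes, and class ordering are illustrative assumptions and do not reflect the paper's actual configuration or features.

import torch
import torch.nn as nn

class BIOSegmenter(nn.Module):
    # Hypothetical frame-level BIO tagger: a transformer encoder over
    # per-frame features, followed by a 3-way classifier (B, I, O).
    def __init__(self, feat_dim=256, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, 3)  # logits for B, I, O

    def forward(self, frames):                   # frames: (batch, T, feat_dim)
        h = self.encoder(self.proj(frames))      # temporal context per frame
        return self.classifier(h)                # (batch, T, 3)

# Usage: tag a clip of 100 frames with random per-frame features
model = BIOSegmenter()
logits = model(torch.randn(1, 100, 256))
tags = logits.argmax(-1)                         # per-frame labels: 0=B, 1=I, 2=O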