DOI: 10.1109/TPAMI.2024.3402746 · Corpus ID: 269740862
@article{Xing2024HC2LHA,
  title={HC2L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding},
  author={Bowen Xing and Ivor Tsang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  volume={PP},
  url={https://api.semanticscholar.org/CorpusID:269740862}
}
- Bowen Xing, Ivor Tsang
- Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, 10 May 2024
- Computer Science, Linguistics
This paper proposes Hybrid and Cooperative Contrastive Learning, a holistic approach that exploits source-language supervised contrastive learning, cross-lingual supervised contrastive learning, and multilingual supervised contrastive learning to perform label-aware semantic alignment in a comprehensive manner.
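All three objectives are variants of supervised contrastive learning, which pulls together representations that share a label (here, utterances with the same intent across languages) and pushes apart those that do not. A minimal sketch of the underlying supervised contrastive (SupCon) loss follows; the function name and toy vectors are illustrative assumptions, not the paper's exact formulation:

```python
import math

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch.

    embeddings: list of L2-normalised vectors (lists of floats),
                e.g. encoder outputs for source- and target-language utterances
    labels:     list of class labels (e.g. intent labels)
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        # temperature-scaled similarities to every other sample (the denominator)
        denom = sum(math.exp(dot(embeddings[i], embeddings[j]) / temperature)
                    for j in range(n) if j != i)
        # positives: other samples that share the anchor's label
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # an anchor with no positives contributes nothing
        for j in positives:
            sim_ij = math.exp(dot(embeddings[i], embeddings[j]) / temperature)
            total += -math.log(sim_ij / denom)
            count += 1
    return total / max(count, 1)
```

When same-label pairs (e.g. an English utterance and its translation) already sit close in embedding space, the loss is near zero; when positives are far apart, it grows, which is the gradient signal that drives the cross-lingual alignment described above.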
Figures and Tables from this paper: 11 figures (figures 1-11) and 5 tables (tables 1-5).
44 References
- Libo Qin, Qiguang Chen, Min-Yen Kan
- 2022
Computer Science, Linguistics
ACL
The Global-Local Contrastive Learning Framework (GL-CLeF) employs contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance and encouraging their representations to be more similar than those of negative pairs, thereby explicitly aligning representations of similar sentences across languages.
- Cited by 22 · Highly Influential [PDF]
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
- 2019
Computer Science
NAACL
BERT is a new language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
- Cited by 74,467 · Highly Influential [PDF]
- Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che
- 2020
Computer Science
IJCAI
A data augmentation framework that generates multilingual code-switching data to fine-tune mBERT, encouraging the model to align representations from the source and multiple target languages at once by mixing their context information.
- Cited by 126 · Highly Influential [PDF]
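The code-switching augmentation summarized in the entry above can be sketched roughly as follows; the dictionaries, tokens, and function name are hypothetical toy examples, not the authors' implementation:

```python
import random

# Toy bilingual dictionaries (hypothetical entries for illustration only).
BILINGUAL_DICTS = {
    "de": {"play": "spielen", "music": "Musik", "please": "bitte"},
    "es": {"play": "reproducir", "music": "música", "please": "por favor"},
}

def code_switch(tokens, dicts, ratio=0.5, seed=None):
    """Randomly replace a fraction of source-language tokens with
    translations drawn from the target-language bilingual dictionaries,
    producing multilingual code-switched training data."""
    rng = random.Random(seed)
    switched = []
    for tok in tokens:
        # languages whose dictionary can translate this token
        langs = [lang for lang, d in dicts.items() if tok in d]
        if langs and rng.random() < ratio:
            lang = rng.choice(langs)          # pick a target language at random
            switched.append(dicts[lang][tok])  # substitute the translation
        else:
            switched.append(tok)               # keep the source token
    return switched
```

For example, `code_switch(["play", "music", "please"], BILINGUAL_DICTS, ratio=1.0)` yields an utterance mixing German and Spanish tokens; fine-tuning on such mixtures pushes the encoder to map translations of the same word into nearby regions of representation space.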
- Tianjun Mao, Chenghong Zhang
- 2023
Computer Science, Linguistics
INTERSPEECH 2023
This paper proposes a novel cross-lingual SLU framework termed DiffSLU, which leverages a powerful diffusion model to enhance mutual guidance and utilizes knowledge distillation to facilitate knowledge transfer.
- Cited by 2 · [PDF]
- Xuxin Cheng, Wanshi Xu, Yuexian Zou
- 2023
Computer Science, Linguistics
INTERSPEECH 2023
A novel framework termed FC-MTLF applies multi-task learning, introducing an auxiliary multilingual neural machine translation (NMT) task to compensate for the shortcomings of code-switching.
- Cited by 7 · [PDF]
- Bowen Xing, I. Tsang
- 2023
Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence
A speaker-aware temporal graph (SATG) and a dual-task relational temporal graph (DRTG) are proposed to facilitate relational temporal modeling in dialog understanding and dual-task reasoning, outperforming state-of-the-art models by a large margin.
- Bowen Xing, I. Tsang
- 2022
Computer Science
EMNLP
A novel model termed ReLa-Net surpasses the previous best model by over 20% in overall accuracy on the MixATIS dataset and introduces a label-aware inter-dependent decoding mechanism to further exploit label correlations during decoding.
- Cited by 14 · [PDF]
- Bowen Xing, I. Tsang
- 2022
Computer Science
EMNLP
A novel model termed Co-guiding Net implements a two-stage framework that achieves mutual guidance between the two tasks and outperforms existing models by a large margin.
- Cited by 25 · [PDF]
- Shining Liang, Linjun Shou, Daxin Jiang
- 2022
Computer Science, Linguistics
EMNLP
This paper models the utterance-slot-word structure with a multi-level contrastive learning framework at the utterance, slot, and word levels to facilitate explicit alignment, and develops a label-aware joint model that leverages label semantics to enhance the implicit alignment fed into contrastive learning.
- Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, Houfeng Wang
- 2022
Computer Science
ACL
Hierarchical text classification is a challenging subtask of multi-label classification due to its complex label hierarchy; Hierarchy-guided Contrastive Learning (HGCLR) is proposed to directly embed the hierarchy into a text encoder, dispensing with the redundant hierarchy.
- Cited by 58 · [PDF]
…