HC2L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding | Semantic Scholar (2024)

@article{Xing2024HC2LHA,
  title={HC2L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding},
  author={Bowen Xing and Ivor Tsang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  volume={PP},
  url={https://api.semanticscholar.org/CorpusID:269740862}
}
  • Bowen Xing, Ivor Tsang
  • Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, 10 May 2024
  • Computer Science, Linguistics

This paper proposes Hybrid and Cooperative Contrastive Learning (HC2L), a holistic approach that exploits source-language supervised contrastive learning, cross-lingual supervised contrastive learning, and multilingual supervised contrastive learning to perform label-aware semantic alignment in a comprehensive manner.
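All three schemes are variants of supervised contrastive learning, which pulls together representations that share a label and pushes apart those that do not. Below is a minimal PyTorch sketch of such a label-aware objective; this is a generic SupCon-style loss, not HC2L's exact formulation, and the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Label-aware contrastive loss: utterances that share a label
    (regardless of language) act as positives for each other.

    features: (N, D) sentence embeddings
    labels:   (N,)   intent labels
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature          # (N, N) cosine similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # log-softmax per anchor
    # Mean log-likelihood of the positives for each anchor that has any.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()
```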

Figures and Tables from this paper

The paper includes 11 figures and 5 tables.

44 References

GL-CLeF: A Global–Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding
    Libo Qin, Qiguang Chen, Min-Yen Kan · Computer Science, Linguistics · ACL 2022

The Global-Local Contrastive Learning Framework (GL-CLeF) leverages bilingual dictionaries to construct multilingual views of the same utterance, then encourages their representations to be more similar than those of negative example pairs, explicitly aligning representations of similar sentences across languages.

  • 22 citations · Highly Influential · [PDF]
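GL-CLeF's view alignment can be illustrated with a standard InfoNCE objective in which the multilingual view of an utterance is the positive and other utterances in the batch are negatives. A minimal sketch (names are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def view_alignment_loss(src_emb, cs_emb, temperature=0.1):
    """InfoNCE over in-batch negatives: each source utterance should be
    closer to its own code-switched view than to any other utterance.

    src_emb, cs_emb: (N, D) embeddings of the original and
    code-switched versions of the same N utterances.
    """
    src = F.normalize(src_emb, dim=1)
    cs = F.normalize(cs_emb, dim=1)
    logits = src @ cs.t() / temperature                  # diagonal = positive pairs
    targets = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, targets)
```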
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · Computer Science · NAACL 2019

BERT is a new language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.

  • 74,467 citations · Highly Influential · [PDF]
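The "one additional output layer" recipe is straightforward in practice. A minimal sketch using the Hugging Face transformers library (the multilingual checkpoint and label count are illustrative choices, not prescribed by the paper):

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertClassifier(nn.Module):
    """Pre-trained BERT encoder plus a single linear output layer."""

    def __init__(self, num_labels=10, name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).last_hidden_state   # (B, T, H)
        return self.head(hidden[:, 0])                      # classify from [CLS]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertClassifier()
batch = tokenizer(["set an alarm for seven am"], return_tensors="pt")
logits = model(**batch)                                     # (1, num_labels)
```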
CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP
    Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che · Computer Science · IJCAI 2020

A data augmentation framework that generates multilingual code-switched data to fine-tune mBERT, encouraging the model to align representations of the source and multiple target languages at once by mixing their context information.

  • 126 citations · Highly Influential · [PDF]
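The augmentation itself is dictionary-based word replacement. A rough, self-contained sketch (the toy dictionary, replacement rate, and language set are illustrative; CoSDA-ML's actual dictionaries and hyperparameters differ):

```python
import random

# Toy bilingual dictionaries: source word -> {language: translation}.
DICT = {
    "alarm": {"de": "Wecker", "es": "alarma"},
    "seven": {"de": "sieben", "es": "siete"},
}

def code_switch(tokens, langs=("de", "es"), rate=0.5, seed=None):
    """Randomly replace source-language tokens with translations drawn
    from randomly chosen target languages, yielding mixed-language text."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        entry = DICT.get(tok.lower())
        choices = [l for l in langs if entry and l in entry]
        if choices and rng.random() < rate:
            out.append(entry[rng.choice(choices)])
        else:
            out.append(tok)
    return out

print(code_switch("set an alarm for seven am".split(), seed=0))
```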
DiffSLU: Knowledge Distillation Based Diffusion Model for Cross-Lingual Spoken Language Understanding
    Tianjun Mao, Chenghong Zhang · Computer Science, Linguistics · INTERSPEECH 2023

This paper proposes a novel cross-lingual SLU framework termed DiffSLU, which leverages a powerful diffusion model to enhance mutual guidance and utilizes knowledge distillation to facilitate knowledge transfer.

  • 2 citations · [PDF]
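The distillation component can be illustrated with the standard temperature-scaled objective of Hinton et al. (2015); this generic sketch is not DiffSLU's exact formulation:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soften both output distributions with temperature T and train the
    student to match the teacher via KL divergence."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```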
FC-MTLF: A Fine- and Coarse-grained Multi-Task Learning Framework for Cross-Lingual Spoken Language Understanding
    Xuxin Cheng, Wanshi Xu, Yuexian Zou · Computer Science, Linguistics · INTERSPEECH 2023

A novel framework termed FC-MTLF is proposed, which applies multi-task learning, introducing an auxiliary multilingual neural machine translation (NMT) task to compensate for the shortcomings of code-switching.

  • 7 citations · [PDF]
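Auxiliary-task setups of this kind are usually trained on a weighted sum of the main and auxiliary losses; a schematic sketch (the weight and loss names are illustrative, not FC-MTLF's actual configuration):

```python
def multitask_loss(intent_loss, slot_loss, nmt_loss, nmt_weight=0.3):
    """Joint objective: the main SLU losses (intent detection and slot
    filling) plus a down-weighted auxiliary translation loss."""
    return intent_loss + slot_loss + nmt_weight * nmt_loss
```

Tuning the auxiliary weight trades off transfer benefit against interference from the auxiliary task.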
Relational Temporal Graph Reasoning for Dual-Task Dialogue Language Understanding
    Bowen Xing, Ivor Tsang · Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence · 2023

A speaker-aware temporal graph (SATG) and a dual-task relational temporal graph (DRTG) are proposed to facilitate relational temporal modeling in dialogue understanding and dual-task reasoning, outperforming state-of-the-art models by a large margin.
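The paper's SATG is not reproduced here, but the notion of a speaker-aware temporal graph can be sketched as adjacency construction over dialogue turns; the window size and the two edge types below are assumptions for illustration:

```python
import torch

def speaker_temporal_adjacency(speakers, window=2):
    """Toy speaker-aware temporal graph: connect each utterance to its
    temporal neighbors and to all utterances by the same speaker.

    speakers: speaker id per utterance, in dialogue order.
    """
    n = len(speakers)
    adj = torch.zeros(n, n)
    for i in range(n):
        for j in range(max(0, i - window), min(n, i + window + 1)):
            adj[i, j] = 1.0                  # temporal edge
        for j in range(n):
            if speakers[j] == speakers[i]:
                adj[i, j] = 1.0              # same-speaker edge
    return adj

print(speaker_temporal_adjacency(["A", "B", "A", "B"]))
```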

Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling
    Bowen Xing, Ivor Tsang · Computer Science · EMNLP 2022

A novel model termed ReLa-Net surpasses the previous best model by over 20% in overall accuracy on the MixATIS dataset and introduces a label-aware inter-dependent decoding mechanism to further exploit label correlations during decoding.

Co-guiding Net: Achieving Mutual Guidances between Multiple Intent Detection and Slot Filling via Heterogeneous Semantics-Label Graphs
    Bowen Xing, Ivor Tsang · Computer Science · EMNLP 2022

A novel model termed Co-guiding Net implements a two-stage framework that achieves mutual guidance between the two tasks, outperforming existing models by a large margin.

Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding
    Shining Liang, Linjun Shou, Daxin Jiang · Computer Science, Linguistics · EMNLP 2022

This paper models the utterance-slot-word structure with a multi-level contrastive learning framework at the utterance, slot, and word levels to facilitate explicit alignment, and develops a label-aware joint model that leverages label semantics to enhance implicit alignment and feed it into contrastive learning.
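One way to read the multi-level design: compute a contrastive loss at each granularity and optimize their weighted sum. A schematic sketch (the level weights are illustrative, not the paper's values):

```python
def multilevel_contrastive_loss(utt_loss, slot_loss, word_loss,
                                weights=(1.0, 1.0, 1.0)):
    """Combine contrastive losses computed at the utterance, slot, and
    word levels into one training objective."""
    w_u, w_s, w_w = weights
    return w_u * utt_loss + w_s * slot_loss + w_w * word_loss
```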

Incorporating Hierarchy into Text Encoder: a Contrastive Learning Approach for Hierarchical Text Classification
    Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, Houfeng Wang · Computer Science · ACL 2022

Hierarchical text classification is a challenging subtask of multi-label classification due to its complex label hierarchy; Hierarchy-guided Contrastive Learning (HGCLR) is proposed to directly embed the hierarchy into a text encoder, dispensing with a separate hierarchy encoder.

...
