
Taolin Zhang

Motto: 路漫漫其修远兮,吾将上下而求索 (The way was long and wrapped in gloom did seem, as I urged on to seek my vanished dream.)

Hi there 😄 I am currently an algorithm researcher at the Alibaba Artificial Intelligence Governance Laboratory (AAIG). I obtained my Ph.D. from East China Normal University in July 2023, supervised by Prof. Xiaofeng He and Alibaba Algorithm Expert Chengyu Wang. I have broad interests in natural language processing and machine learning, with a focus on knowledge-enhanced neural models and their applications in information extraction and question answering. I received my bachelor's degree from the School of Computer Science, Shanxi University, advised by Prof. Hongye Tan.

Email  /  Google Scholar  /  Zhihu  /  DBLP

News

  • 9/2024, 1 paper have been accepted by EMNLP 2024.
  • 7/2024, 1 paper have been accepted by ECAI 2024.
Work Experience

  • July 2023 – Present, Senior Algorithm Engineer at Alibaba Group
  • March 2021 – July 2023, Research Intern at Alibaba Group
  • July 2020 – September 2020, Research Intern at iFLYTEK
Recent Projects

  • EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit
  • Project Page

    EasyNLP is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of algorithms for various NLP applications. EasyNLP integrates knowledge distillation and few-shot learning for deploying large pre-trained models in production (the teacher-student idea is sketched below), together with various popular multi-modality pre-trained models. It provides a unified framework for model training, inference, and deployment in real-world applications. It has powered more than 10 business units (BUs) and more than 20 business scenarios within the Alibaba Group. It is seamlessly integrated with Platform for AI (PAI) products, including PAI-DSW for development, PAI-DLC for cloud-native training, PAI-EAS for serving, and PAI-Designer for zero-code model training.
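    To make the toolkit's distillation support concrete, here is a minimal PyTorch sketch of the standard teacher-student objective (Hinton et al., 2015). It illustrates the technique only; the function and parameter names are my own and are not EasyNLP's actual API.

        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
            # Soft targets: the student matches the teacher's temperature-softened
            # output distribution; the T^2 factor keeps gradient magnitudes stable.
            soft = F.kl_div(
                F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * (T * T)
            # Hard targets: ordinary cross-entropy against the gold labels.
            hard = F.cross_entropy(student_logits, labels)
            return alpha * soft + (1 - alpha) * hard

    In a training loop, the large teacher runs in eval mode under torch.no_grad() and only the smaller student is updated, which is what makes the compressed model cheap enough to serve.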

  • HugNLP: A Unified and Comprehensive Library for Natural Language Processing
  • Project Page

    In HugNLP, we provide popular transformer-based models as backbones, such as BERT, RoBERTa, and GPT-2. We also release our pre-built KP-PLM, a novel knowledge-enhanced pre-training paradigm that injects factual knowledge and can be easily applied to arbitrary PLMs. Apart from basic PLMs, we also implement task-specific models covering sequence classification, matching, labeling, span extraction, multiple choice, and text generation. Notably, we develop standard fine-tuning (based on a CLS head) and prompt-tuning models that enable PLM tuning on classification tasks. For few-shot learning settings, HugNLP provides a prototypical network for both few-shot text classification and named entity recognition (NER); the core idea is sketched below.
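    As a rough illustration of the prototypical-network component (in the sense of Snell et al., 2017), here is a generic PyTorch sketch. The names and toy data are mine and do not reflect HugNLP's actual code.

        import torch

        def prototypical_logits(support_emb, support_labels, query_emb, n_classes):
            # Each class prototype is the mean embedding of its support examples.
            prototypes = torch.stack([
                support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
            ])
            # Score queries by negative squared Euclidean distance to each prototype.
            return -torch.cdist(query_emb, prototypes).pow(2)

        # Toy usage with random "sentence embeddings" (e.g., BERT [CLS] vectors):
        # a 5-way 2-shot support set and 3 queries.
        support = torch.randn(10, 768)
        labels = torch.arange(5).repeat_interleave(2)
        query = torch.randn(3, 768)
        probs = prototypical_logits(support, labels, query, n_classes=5).softmax(dim=-1)

    Because classification reduces to nearest-prototype lookup, new classes can be handled at inference time simply by supplying a few support examples, which is what makes the approach attractive in few-shot settings.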

    Research

    I'm interested in developing knowledge-enhanced neural models for natural language processing (e.g., pre-trained language models, information extraction, and question answering).

    Conference Papers (*: equal contribution)

    1. Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning
      Qizhou Chen*, Taolin Zhang*, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang and Hui Xue
      EMNLP 2024 | paper

    2. R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models
      Taolin Zhang*, Dongyang Li*, Qizhou Chen, Chengyu Wang, Longtao Huang, Hui Xue, Xiaofeng He and Jun Huang
      ECAI 2024 | paper

    3. DAFNet: Dynamic Auxiliary Fusion for Sequential Model Editing in Large Language Models
      Taolin Zhang*, Qizhou Chen*, Dongyang Li, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue and Jun Huang
      ACL findings 2024 | paper

    4. On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models
      Dongyang Li*, Junbing Yan*, Taolin Zhang*, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue and Jun Huang
      ACL 2024 | paper

    5. KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning
      Dongyang Li*, Taolin Zhang*, Longtao Huang, Chengyu Wang, Xiaofeng He and Hui Xue
      COLING 2024 | paper

    6. UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding
      Dongyang Li*, Taolin Zhang*, Jiali Deng, Longtao Huang, Chengyu Wang, Xiaofeng He and Hui Xue
      COLING 2024 | paper

    7. TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models
      Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang, Longtao Huang and Hui Xue
      COLING 2024 | paper

    8. CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem
      Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He
      AAAI 2024 | paper

    9. Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding
      Taolin Zhang, Ruyao Xu, Chengyu Wang, Zhongjie Duan, Cen Chen, Minghui Qiu, Dawei Cheng, Xiaofeng He, Weining Qian
      EMNLP 2023 | paper

    10. From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models
      Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang
      EMNLP 2023 | paper

    11. OnMKD: An Online Mutual Knowledge Distillation Framework for Passage Retrieval
      Jiali Deng, Dongyang Li, Taolin Zhang, Xiaofeng He
      NLPCC 2023 | paper

    12. Knowledge-Enhanced Prototypical Network with Structural Semantics for Few-Shot Relation Classification
      Yanghu Li, Taolin Zhang, Dongyang Li, Xiaofeng He
      PAKDD 2023 | paper

    13. Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training
      Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He
      EMNLP 2022 | paper

    14. EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing
      Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
      EMNLP 2022 | paper

    15. HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction
      Dongyang Li*, Taolin Zhang*, Nan Hu, Chengyu Wang, Xiaofeng He
      ACL findings 2022 | paper

    16. DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding
      Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
      AAAI 2022 | paper

    17. HfGCN: Hierarchical fused GCN for Joint Entity and Relation Extraction
      Wei Nong, Taolin Zhang, Shuangji Yang, Nan Hu, Xiaofeng He
      ICBK 2021 | paper

    18. HORNET: Enriching Pre-trained Language Representations with Heterogeneous Knowledge Sources
      Taolin Zhang, Zerui Cai, Chengyu Wang, Peng Li, Yang Li, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
      CIKM 2021 | paper

    19. HAIN: Hierarchical Aggregation and Inference Network for Document-Level Relation Extraction
      Nan Hu, Taolin Zhang, Shuangji Yang, Wei Nong, Xiaofeng He
      NLPCC 2021 | paper

    20. EMBERT: A Pre-trained Language Model for Chinese Medical Text Mining
      Zerui Cai, Taolin Zhang, Chengyu Wang, Xiaofeng He
      APWeb 2021 | paper

    21. SaGCN: Structure-Aware Graph Convolution Network for Document-Level Relation Extraction
      Shuangji Yang, Taolin Zhang, Danning Su, Nan Hu, Wei Nong, Xiaofeng He
      PAKDD 2021 | paper

    22. SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining
      Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He
      ACL 2021 | paper

    23. Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources
      Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang
      ACL findings 2021 | paper


    Services

  • Program Committee (PC) Member: CIKM 2020, CIKM 2021, AAAI 2021, ICDM 2022, AAAI 2023, AAAI 2024, ECAI 2024, ICLR 2025, and ACL Rolling Review (for ACL, EMNLP, and NAACL).

  • Area Chair: EMNLP 2023

Awards

  • 2022, National Scholarship

  • 2022, 8th place in the Chinese Language Understanding Evaluation (CLUE) benchmark contest (8 out of 2000+ teams)

  • 2021, Alibaba Outstanding Research Intern

  • 2021, 1st place in the Knowledge Probing of Pre-trained Language Models contest (1 out of 256 teams)

  • 2021, 1st place in the FewCLUE contest

  • 2018, Shanxi University Outstanding Graduate

  • Feel free to use this website's source code; just add a link back to this page.
    Powered by w3.css