Publications
(* indicates equal contribution, and 📧 indicates corresponding author)
- Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation
Yi Lu, Wanxu Zhao, Xin Zhou, Chenxin An, Chenglong Wang, Shuo Li, Yuming Yang, Jun Zhao, Tao Ji📧, Tao Gui📧, Qi Zhang📧, Xuanjing Huang
arXiv preprint, 2025. [paper]
- Mitigating Object Hallucinations in MLLMs via Multi-Frequency Perturbations
Shuo Li*, Jiajun Sun*, Guodong Zheng, Xiaoran Fan, Yujiong Shen, Yi Lu, Zhiheng Xi, Yuming Yang, Wenming Tan, Tao Ji📧, Tao Gui📧, Qi Zhang, Xuanjing Huang
arXiv preprint, 2025. [paper]
- The Role of Visual Modality in Multimodal Mathematical Reasoning: Challenges and Insights
Yufang Liu*, Yao Du*, Tao Ji📧, Jianing Wang, Yang Liu, Yuanbin Wu📧, Aimin Zhou📧, Mengdi Zhang, Xunliang Cai
In ACL 2025. [paper]
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
Tao Ji, Bin Guo, Yuanbin Wu, Qipeng Guo, Lixing Shen, Zhan Chen, Xipeng Qiu, Qi Zhang, Tao Gui📧
In ACL 2025. [paper]
- AntLM: Bridging Causal and Masked Language Models
Xinru Yu*, Bin Guo*, Shiwei Luo*, Jie Wang, Tao Ji📧, Yuanbin Wu📧
In BabyLM Shared Task 2024. [paper]
- Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs
Shuo Li*, Tao Ji*, Xiaoran Fan*, Linsheng Lu, Leyi Yang, Yuming Yang, Zhiheng Xi, Rui Zheng, Yuran Wang, Xiaohui Zhao, Tao Gui, Qi Zhang, Xuanjing Huang
In ICLR 2025. [paper]
- Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models
Yufang Liu*, Tao Ji*, Changzhi Sun, Yuanbin Wu, Aimin Zhou
- Generation with Dynamic Vocabulary
Yanting Liu, Tao Ji📧, Changzhi Sun, Yuanbin Wu📧, Xiaoling Wang📧
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor
Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji📧, Tao Gui📧, Qi Zhang📧, Xuanjing Huang
- Poly-Visual-Expert Vision-Language Models
Xiaoran Fan*, Tao Ji*, Changhao Jiang*, Shuo Li*, Senjie Jin*, Sirui Song, Junke Wang, Boyang Hong, Lu Chen, Guodong Zheng, Ming Zhang, Caishuang Huang, Rui Zheng, Zhiheng Xi, Yuhao Zhou, Shihan Dou, Junjie Ye, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang
In COLM 2024. [paper]
- Length Generalization of Causal Transformers without Position Encoding
Jie Wang*, Tao Ji*, Yuanbin Wu, Hang Yan, Tao Gui, Qi Zhang, Xuanjing Huang, Xiaoling Wang
- Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
Zhicheng Wang, Yufang Liu, Tao Ji, Xiaoling Wang, Yuanbin Wu, Congcong Jiang, Ye Chao, Zhencong Han, Ling Wang, Xu Shao, Wenqiu Zeng
- Typology Guided Multilingual Position Representations: Case on Dependency Parsing
Tao Ji, Yuanbin Wu, Xiaoling Wang
- Zero-Shot Event Detection Based on Ordered Contrastive Learning and Prompt-Based Prediction
Senhui Zhang, Tao Ji, Wendi Ji, Xiaoling Wang
- Explore Unsupervised Structures in Pretrained Models for Relation Extraction
Xi Yang*, Tao Ji*, Yuanbin Wu
- Word Reordering for Zero-shot Cross-lingual Structured Prediction
Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
- A Unified Encoding of Structures in Transition Systems
Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
- Graph-based Dependency Parsing with Graph Neural Networks
Tao Ji, Yuanbin Wu, Man Lan
- AntNLP at CoNLL 2018 Shared Task: A Graph-based Parser for Universal Dependency Parsing
Tao Ji, Yufang Liu, Yijun Wang, Yuanbin Wu, Man Lan
- ECNU at EPE 2017: Universal Dependencies Representations Parser
Tao Ji, Yuekun Yao, Qi Zheng, Yuanbin Wu, Man Lan