AI Research

Explore cutting-edge research in language models and our latest results in large-model training, optimization, and applications. Our work pushes the boundaries of artificial intelligence.

2025

May 2025

Efficient Fine-Tuning of Large Language Models

Chuanyou Li, Chengyou Xin, Wenxin Zhang, Yu Chen

We propose a new parameter-efficient fine-tuning method that preserves pre-trained knowledge while adapting to new tasks by updating only a small number of parameters. This method achieves performance comparable to full-parameter fine-tuning on multiple benchmarks.

Model Optimization · Parameter Efficiency · Transfer Learning
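The abstract does not specify the form of the method; as a rough illustration of fine-tuning only a small number of parameters, the sketch below wraps a frozen pre-trained linear layer with a LoRA-style low-rank adapter in PyTorch. The class name, rank, and scaling are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch only: a LoRA-style low-rank adapter, one common way to
# train a small number of parameters while freezing the pre-trained weights.
# Names, rank, and scaling are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class LowRankAdapterLinear(nn.Module):
    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)        # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # trainable
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))        # trainable, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base projection plus a small trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale
```

Because the correction is rank `r`, only about `r * (in_features + out_features)` parameters per wrapped layer are trained, which is what makes the approach parameter-efficient.
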
April 2025

Long-Context Language Model Research

Kehua Chen, Chengyou Xin, Wenxin Zhang

We developed a model architecture capable of processing ultra-long contexts (128K tokens), addressing the limitations of existing models in long-document understanding. This model excels in long-document summarization and question-answering tasks.

Model Architecture · Long Context · Attention Mechanism
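The architecture itself is not described in this summary; as one common ingredient of long-context models, the sketch below builds a sliding-window attention mask so each token attends only to a bounded local window, keeping attention cost roughly linear in sequence length. The window size and function name are illustrative assumptions, not the paper's mechanism.

```python
# Illustrative sketch only: a causal sliding-window attention mask, one common
# building block for long-context models. Whether the paper uses this mechanism
# is not stated in the abstract; the window size here is hypothetical.
import torch

def sliding_window_mask(seq_len: int, window: int = 4096) -> torch.Tensor:
    """Boolean mask where True marks key positions a query may attend to
    (causal, restricted to the previous `window` tokens)."""
    pos = torch.arange(seq_len)
    dist = pos.unsqueeze(1) - pos.unsqueeze(0)   # query index minus key index
    return (dist >= 0) & (dist < window)

# With a fixed window, per-layer attention memory grows linearly with context
# length instead of quadratically, which is what makes 128K-token inputs feasible.
mask = sliding_window_mask(seq_len=8, window=4)
print(mask.int())
```
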
February 2025

Large Model Safety Alignment Research

Chengyou Xin, Guoxiong Tang, Wenxin Zhang, Sida Xing

We propose a new alignment technique that, through multi-stage training and reinforcement-learning constraints, significantly reduces the probability of the model generating harmful content while preserving its creativity and utility.

AI Safety · Alignment · Reinforcement Learning
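As a rough sketch of what a reinforcement-learning constraint can look like in this setting (not the paper's objective), the function below combines a task reward with a harmfulness penalty and a KL term to a reference policy; all names and coefficients are hypothetical.

```python
# Illustrative sketch only: a task reward combined with a safety penalty and a
# KL constraint to the reference model, the usual shape of constrained RLHF-style
# objectives. Coefficients and function names are hypothetical, not the paper's.
def constrained_reward(task_reward: float,
                       harm_score: float,
                       kl_to_reference: float,
                       harm_weight: float = 5.0,
                       kl_weight: float = 0.1) -> float:
    """Reward used for policy optimisation: encourage helpful answers while
    penalising harmful content and large drifts from the reference policy."""
    return task_reward - harm_weight * harm_score - kl_weight * kl_to_reference

# Example: a helpful but slightly risky answer scores worse than a safe answer
# with the same task reward, while the KL term keeps the model close to its
# pre-trained behaviour (preserving creativity and utility).
print(constrained_reward(task_reward=1.0, harm_score=0.3, kl_to_reference=0.2))
```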

2024

October 2024

Multilingual Large Model Pretraining Research

Ningyu Liang, Chuanyou Li, Na Li

We built a large-scale pretrained model supporting 50+ languages. Through an innovative language representation sharing mechanism, we significantly improved performance for low-resource languages while maintaining quality for high-resource languages.

Multilingual Model · Pretraining · Cross-Language Learning
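The sharing mechanism is not detailed in this summary; as an illustration of one common design, the sketch below pairs a shared embedding table with small per-language adapters, so low-resource languages can reuse representations learned from high-resource ones. Class names, sizes, and language codes are hypothetical.

```python
# Illustrative sketch only: a shared embedding table plus tiny per-language
# adapters, one common way to share representations across languages.
# The paper's actual sharing mechanism is not described in the abstract.
import torch
import torch.nn as nn

class SharedMultilingualEmbedding(nn.Module):
    def __init__(self, vocab_size: int, dim: int, languages: list[str]):
        super().__init__()
        self.shared = nn.Embedding(vocab_size, dim)           # shared across all languages
        self.adapters = nn.ModuleDict({
            lang: nn.Linear(dim, dim) for lang in languages   # small language-specific layer
        })

    def forward(self, token_ids: torch.Tensor, lang: str) -> torch.Tensor:
        h = self.shared(token_ids)
        return h + self.adapters[lang](h)   # shared representation plus a small correction

# Example: a low-resource language ("sw") reuses the shared table trained
# mostly on high-resource data, adding only a lightweight adapter of its own.
emb = SharedMultilingualEmbedding(vocab_size=32000, dim=256, languages=["en", "sw", "zh"])
out = emb(torch.tensor([[1, 5, 42]]), lang="sw")
```
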
May 2024

Large Model Inference Optimization

Fan Yang, Lei Xu, Wenxin Zhang, Chengyou Xin

We developed a new inference optimization framework that improves the inference speed of large language models by over 3x through dynamic computation allocation and caching strategies, while maintaining output quality.

Inference Optimization · Computational Efficiency · System Optimization
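The framework itself is not reproduced here; as an illustration of one standard caching strategy for autoregressive decoding, the sketch below maintains a key/value cache so each generation step reuses the attention state of the prefix instead of recomputing it. Shapes and names are hypothetical.

```python
# Illustrative sketch only: a per-layer key/value cache for autoregressive
# decoding, one standard caching strategy. The paper's full framework
# (dynamic computation allocation etc.) is not reproduced here.
import torch

class KVCache:
    def __init__(self):
        self.keys = None     # (batch, heads, seq, head_dim)
        self.values = None

    def append(self, k: torch.Tensor, v: torch.Tensor):
        """Store the newest token's keys/values and return the full cached prefix."""
        if self.keys is None:
            self.keys, self.values = k, v
        else:
            self.keys = torch.cat([self.keys, k], dim=2)
            self.values = torch.cat([self.values, v], dim=2)
        return self.keys, self.values

# Each decoding step computes K/V only for the newest token; attention then
# reads the cached prefix, avoiding recomputation over the whole sequence.
cache = KVCache()
k_new = torch.randn(1, 8, 1, 64)
v_new = torch.randn(1, 8, 1, 64)
keys, values = cache.append(k_new, v_new)
```
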
January 2024

Large Model Knowledge Editing

Chengyou Xin, Hui Liu, Wei Zhang

We propose a method for updating a model's knowledge without retraining: specific facts can be modified precisely without affecting the model's other capabilities, addressing a key challenge in keeping large models up to date.

Knowledge Editing · Model Maintenance · Continual Learning
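As a rough illustration of editing a specific association without retraining (in the spirit of locate-and-edit approaches, not necessarily the paper's method), the sketch below applies a rank-one correction to a single weight matrix so that a chosen key direction maps to a new target value while other directions are left largely untouched.

```python
# Illustrative sketch only: a rank-one update to one weight matrix so that a
# chosen input direction maps to a new target output, the rough shape of
# locate-and-edit knowledge-editing methods. Not the method of the paper.
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return an edited weight W' with W' @ k == v_new, changing W only along
    the direction of k so other stored associations are minimally disturbed."""
    v_old = W @ k
    delta = torch.outer(v_new - v_old, k) / (k @ k)   # rank-one correction
    return W + delta

W = torch.randn(4, 3)
k = torch.randn(3)          # key direction encoding the fact to edit
v_new = torch.randn(4)      # desired new association
W_edit = rank_one_edit(W, k, v_new)
assert torch.allclose(W_edit @ k, v_new, atol=1e-4)
```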