Core Research Team

The DY-LLM team consists of interdisciplinary experts dedicated to building and optimizing large-scale generative AI models. It brings together natural language processing specialists, machine learning engineers, data scientists, and algorithm researchers, all committed to applying cutting-edge technology to real-world scenarios such as text generation and image synthesis.

Team Member 1

Chuanyou Li (Team Lead)

Southeast University · PhD/Professor

Chief Scientist

Research Focus: Artificial Intelligence & Machine Learning

Key Contributions: Published numerous papers in top-tier journals and led fundamental algorithm research for large AI models, specializing in AIGC, high-performance computing acceleration, combinatorial optimization, and LLM system architecture.

Team Member 2

Kehua Chen

University of Electronic Science and Technology of China · PhD

AI Agent Development Engineer

Research Focus: Government Information Systems

Key Contributions: Contributed to national key research projects, driving AI-powered innovation in the optimization and implementation of government information systems.

Team Member 3

Guoxiong Tang

University of Pennsylvania · PhD

Deep Learning Engineer

Research Focus: Machine Learning

Key Contributions: Developed machine learning models and algorithms that significantly improved computational efficiency and accuracy in complex data-processing tasks.

Team Member 4

Wenxin Zhang

University of Rennes · PhD

LLM Product Director

Research Focus: Modern Management Theory

Key Contributions: Pioneered streaming engineering design management methodologies that became a driving force for regional economic development.

Team Member 5

Ningyu Liang

University of Glasgow · MSc

Data Analysis Engineer

Research Focus: Data Mining

Key Contributions: Developed a high-performance data processing system for efficient handling of large-scale heterogeneous data sources.

Team Member 6

Sida Xing

Deakin University · MSc

LLM Algorithm Engineer

Research Focus: Recurrent Neural Networks (RNN)

Key Contributions: Developed an Enhanced LSTM (E-LSTM) to address the vanishing and exploding gradient problems of traditional RNNs, improving training efficiency across multiple domains.

Supported Countries & Regions
  • China
  • United States
  • Canada
  • Australia
  • Mexico
  • Colombia
  • United Kingdom
  • France
  • Germany
  • Spain
  • Greece
  • Portugal
  • Netherlands
  • Russia
  • Italy
  • Indonesia
  • Saudi Arabia
  • Turkey
  • Japan
  • Thailand
  • Vietnam
  • Singapore
  • Malaysia
  • Philippines