Hi, welcome to my GitHub 👋

I am Xiao Liu, a third-year PhD student at Tsinghua University (since 2021).

  • 🔭 Interested in Machine Learning, Data Mining, NLP, and Knowledge Graphs.

  • 🌱 Find my up-to-date publication list on Google Scholar! Some of the works I am proud to have led:

    Large Language Model (LLM) Training and Prompt Learning
    • P-tuning and P-tuning v2 (ACL'22): pioneering works on prompt tuning
    • GLM-130B (ICLR'23): an open bilingual (English & Chinese) pre-trained model with 130 billion parameters, based on GLM (ACL'22); outperforms GPT-3 175B on LAMBADA and MMLU.
    • ChatGLM-6B & ChatGLM2-6B & ChatGLM3-6B: an open bilingual dialogue language model that can run on as little as 6 GB of GPU memory.
    • WebGLM (KDD'23): an efficient web-enhanced question answering system based on GLM-10B, outperforming WebGPT-13B and approaching WebGPT-175B performance in human evaluation.
    • ChatGLM-Math: employing self-critique with RFT and DPO to achieve SOTA mathematical capabilities without compromising language abilities.
    Foundational Agents for Challenging Real-world Missions
    • AgentBench (ICLR'24): the first systematic multi-dimensional benchmark to evaluate LLMs as Agents in 8 distinct environments derived from real-world practical missions. Find LLM-as-Agent demos at llmbench.ai/agent!
    Alignment and Scalable Oversight over LLMs and Diffusers
    • ImageReward (NeurIPS'23): the first general-purpose text-to-image human preference reward model (RM) for RLHF, outperforming CLIP/BLIP/Aesthetic by 30% in terms of human preference prediction.
    • BPO (Black-box Prompt Optimization): a novel direction to align LLMs via preference-aware prompt optimization. Improving the human-preference win rates of ChatGPT, Claude, and LLaMA by 20%+ without training them.
    • AlignBench: the first comprehensive benchmark for evaluating LLMs' Chinese alignment, derived from real online ChatGLM scenarios. Submit your LLMs to receive CritiqueLLM's judgment on AlignBench at llmbench.ai/align!
    • CritiqueLLM: scaling LLM-as-Critic for scalable oversight of LLM alignment. A series of strong critique LLMs ranging from 6B to 66B.
    Self-supervised Learning and Reasoning
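As a rough illustration of the prompt-tuning idea behind P-tuning (learning continuous prompt embeddings while the backbone model stays frozen), here is a minimal self-contained sketch. The toy "model", the dimensions, and the gradient step are all illustrative assumptions, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen language model: a fixed linear scorer over the
# mean of the input embedding sequence. Names and sizes are hypothetical.
EMB_DIM, N_CLASSES, PROMPT_LEN, SEQ_LEN = 8, 2, 4, 5
W_frozen = rng.normal(size=(EMB_DIM, N_CLASSES))  # frozen model weights

def model_logits(token_embs, prompt_embs):
    # Prepend the trainable continuous prompt to the token embeddings,
    # then run the frozen "model".
    seq = np.concatenate([prompt_embs, token_embs], axis=0)
    return seq.mean(axis=0) @ W_frozen

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One synthetic example: token embeddings plus a gold label.
x = rng.normal(size=(SEQ_LEN, EMB_DIM))
y = 1

# Train ONLY the prompt embeddings; W_frozen is never updated.
prompt = 0.1 * rng.normal(size=(PROMPT_LEN, EMB_DIM))
lr = 0.5
for _ in range(200):
    probs = softmax(model_logits(x, prompt))
    grad_logits = probs.copy()
    grad_logits[y] -= 1.0                  # d(cross-entropy)/d(logits)
    # Each prompt row contributes 1/(PROMPT_LEN + SEQ_LEN) of the mean.
    row_grad = (W_frozen @ grad_logits) / (PROMPT_LEN + SEQ_LEN)
    prompt -= lr * np.tile(row_grad, (PROMPT_LEN, 1))

# After tuning only the prompt, the frozen model fits the gold label.
final_prob = softmax(model_logits(x, prompt))[y]
```

The point of the sketch is the parameter split: gradients flow only into the prompt embeddings, so the (typically huge) backbone never changes, which is what makes prompt tuning cheap.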
  • 🤔 Dedicated to building the next generation of AI systems via both Large Pre-trained Models and Symbolic Agent Reasoning.

  • 💬 Feel free to drop me an email about:

    • Any form of collaboration
    • Any issues with my work or code
    • Interesting ideas to discuss or just chatting
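Several of the alignment works above (ImageReward's reward model, BPO's preference signal) build on pairwise human-preference modeling. A minimal sketch of the Bradley-Terry-style objective commonly used to train such reward models, with a hypothetical linear scorer and synthetic preference pairs standing in for a real model and real human labels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reward model: a linear scorer over fixed feature vectors. The real
# ImageReward model is a large trained network; this is only illustrative.
DIM = 6
w = np.zeros(DIM)                       # trainable reward-model parameters

def reward(feats, w):
    return feats @ w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic preference pairs: "winner" features are shifted along a hidden
# direction so that preference is learnable but noisy.
hidden = rng.normal(size=DIM)
winners = rng.normal(size=(64, DIM)) + 0.5 * hidden
losers = rng.normal(size=(64, DIM)) - 0.5 * hidden

# Pairwise logistic (Bradley-Terry) loss: -log sigmoid(r_winner - r_loser).
lr = 0.1
for _ in range(300):
    margin = reward(winners, w) - reward(losers, w)
    grad = -((1 - sigmoid(margin))[:, None] * (winners - losers)).mean(axis=0)
    w -= lr * grad

# The trained scorer should rank winners above losers most of the time.
acc = (reward(winners, w) > reward(losers, w)).mean()
```

Once such a reward model predicts human preference reliably, it can supply the training signal for RLHF or, as in BPO, guide prompt rewriting without touching the model weights.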
