Large Language Models
A topic-organized track. Seven sub-sections move from foundational ideas to current research.
Reading path
- LLM Basics — word embeddings → the Transformer → pre-training → scaling laws → instruction tuning.
- Reasoning & Post-training — chain-of-thought, latent-space reasoning, RLHF, RLVR.
- Efficient Methods — parameter-efficient fine-tuning, efficient RLVR, efficient inference, long-context.
- Factuality — hallucination and calibration.
- Applications — RAG, agents, agentic RAG, multi-modal LLMs.
- Evaluation — evaluating LLMs and detecting LLM-generated text.
- Other Topics — alternative architectures (MoE / SSM / RWKV), bias, safety.
For a year-by-year view of the same models, see The Transformer Era →.
Each topic page covers
A short introduction followed by the canonical paper list for the topic. Where a paper has a widely used name (LoRA, DPO, Medusa, …), that name is used as the heading; otherwise the full title is listed with its venue and year.