AI: A Living Reference
A research-oriented, continuously-updated reference for modern AI.
This book is a working reference on the state of artificial intelligence as of 2026: what the dominant model families are, how they are trained, how they are deployed, and what remains open. It is organised around the paradigms that have come to define the field over the last decade: foundation models, large language models, generative modelling, reinforcement learning, agentic systems, alignment, interpretability, and causality.
The audience is graduate students and research practitioners. We assume undergraduate machine learning, linear algebra, multivariable calculus, and probability; we do not assume familiarity with current frontier-model literature - we cite and explain it.
This is a living reference. The field moves faster than print can keep up with; chapters are revised in place as methods stabilise or shift, and new chapters appear as new threads coalesce. Each chapter carries its own changelog, and the project as a whole is versioned via dated releases. The reference list (≈500 verified citations across the chapters) is part of that revision pipeline.
Table of contents
Part I: Learning Foundations
The substrate. How modern deep networks are built and trained, the self-supervised objectives that unlocked foundation-scale learning, reinforcement learning as both a classical paradigm and the engine behind preference alignment, and the theoretical lenses (PAC, PAC-Bayes, information-theoretic bounds, the deep-learning generalisation puzzle) that try to explain why any of it works.
Part II: Foundation Models
The organising concept of modern AI: large pre-trained models that serve as a substrate for many downstream tasks. Foundation models in general, large language models specifically, the unified generative-modelling toolkit (diffusion, flow matching, autoregressive, VAEs), and the multimodal extension into vision, audio, and beyond.
Part III: Behaviours, Interpretation, Alignment
What sits on top of the substrate. AI systems that act autonomously through tool use, reasoning models with explicit deliberation, retrieval-augmented generation as a workaround for context-window limits, mechanistic interpretability as a maturing subfield, the modern alignment programme (RLHF, RLAIF, scalable oversight, debate, safety evaluations), and the evaluation methodology that ties it all together.
Part IV: Connections and Systems
AI's connections to adjacent fields and to the engineering that makes frontier-scale systems possible. Causal inference as a parallel critique of correlation-based ML, AI for science across protein folding, materials, mathematics, and climate, robotics as foundation-model deployment in the physical world, and the distributed-training engineering that makes all of the above tractable at scale.