
How Machines Learn (and Why Most Don’t)

Webinar delivered 29/10/2025 by Dr. Nadine Kroher, Chief Scientific Officer at Passion Labs

Artificial Intelligence has been around far longer than the current hype might suggest. The idea of “intelligent machines” dates back to the 1950s, but what we now call AI or machine learning (ML) has evolved into something far more precise — and, at the same time, far more misunderstood.

From Rules to Learning

Early “AI systems” weren’t really intelligent at all. They were rule-based programs: long lists of if–then statements that told computers exactly what to do. True machine learning began when we stopped telling computers what to do and instead taught them to learn from data.
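To make the contrast concrete, here is a small sketch (invented for this recap; the webinar itself showed no code) of what a rule-based “spam detector” looks like. Every behaviour has to be written by hand:

```python
# A rule-based "spam detector": intelligence by exhaustive if-then rules.
# The keywords are invented for illustration.
def is_spam(subject: str) -> bool:
    subject = subject.lower()
    if "free money" in subject:
        return True
    if "winner" in subject and "claim" in subject:
        return True
    # Anything the rule author did not anticipate slips through.
    return False

print(is_spam("FREE MONEY inside!"))    # True
print(is_spam("Meeting moved to 3pm"))  # False
```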

A neural network, the building block of modern AI, is essentially a mathematical structure that adjusts itself until it can tell one thing from another: whether that’s a cat from a dog or spam from genuine email. Nadine illustrated that machines learn by drawing invisible “lines” that separate categories, adjusting those lines over and over until they get them right. That process — “analyse the error, adjust, repeat” — is the essence of learning.
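Here is a minimal sketch of that loop in code, assuming a perceptron (a single artificial neuron) and toy 2-D points invented for this recap. It draws a line, checks each point, and nudges the line whenever it gets one wrong:

```python
# "Analyse the error, adjust, repeat": a one-neuron network (perceptron)
# nudges the separating line w0 + w1*x + w2*y = 0 after every mistake.
# The 2-D points and labels are toy data invented for this sketch.
points = [(1.0, 1.0), (2.0, 1.5), (-1.0, -0.5), (-2.0, -1.0)]
labels = [1, 1, -1, -1]  # +1 = one category, -1 = the other

w = [0.0, 0.0, 0.0]  # bias, weight_x, weight_y
lr = 0.1             # learning rate: how far to adjust per error

for epoch in range(20):
    for (x, y), target in zip(points, labels):
        prediction = 1 if (w[0] + w[1] * x + w[2] * y) >= 0 else -1
        error = target - prediction          # analyse the error...
        w[0] += lr * error                   # ...adjust the line...
        w[1] += lr * error * x
        w[2] += lr * error * y               # ...and repeat.

print("learned line:", w)  # a boundary that separates the two groups
```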

Large Language Models: Predicting the Next Word

So how do systems like ChatGPT learn? The answer lies in something deceptively simple: next-token prediction.

Instead of learning to classify images or detect fraud, Large Language Models (LLMs) learn to guess the next word in a sentence. Given “Nadine needs a...”, the model might predict “pizza”. It then compares its guess with the word that actually came next and adjusts its internal parameters to do better next time.
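As a toy stand-in for that idea, the sketch below “trains” by counting which word follows which in a tiny invented corpus, then predicts the most frequent follower. Real LLMs replace these counts with billions of learned parameters, but the training signal is the same: guess the next word, compare, adjust.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: tally which word follows which in a tiny
# made-up corpus, then predict the most frequent follower.
corpus = "nadine needs a pizza . nadine needs a break . nadine wants a pizza".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # "training": record what actually came next

def predict_next(word: str) -> str:
    return follows[word].most_common(1)[0][0]

print(predict_next("a"))       # 'pizza' (seen twice after 'a', vs 'break' once)
print(predict_next("nadine"))  # 'needs'
```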

This training happens on massive amounts of data. GPT-3, for instance, was trained on about 570 GB of text from the internet. Through this process, LLMs learn syntax, semantics and even bits of factual knowledge. Fine-tuning and reinforcement learning with human feedback (RLHF) then shape them into tools that can follow instructions, answer questions, or write code.

You (Probably) Don’t Need to Train a Model

A key takeaway from the session: most organisations don’t need to train their own models. Training a model like GPT-3 from scratch would take an estimated 355 years and over $4.6 million of compute on a single GPU, and that is before counting the vast expertise required.

Instead, Nadine outlined three realistic options for companies:

  1. Train from scratch – prohibitively expensive and almost never necessary.
  2. Fine-tune an existing model – useful for highly specialised domains but still costly and complex.
  3. Customise with smarter techniques – using retrieval-augmented generation (RAG), in-context learning, or multi-agent frameworks. These approaches are far more efficient, flexible, and often just as powerful (see the sketch after this list).
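Here is a hedged sketch of the RAG idea from option 3. Everything in it (the documents, the word-overlap scoring, the prompt template) is invented for illustration; the point is that the model itself is never retrained, because relevant text is simply retrieved and placed into the prompt.

```python
import string

# Minimal retrieval-augmented generation (RAG) sketch. The documents,
# scoring, and prompt template are invented for illustration; a real
# system would use embeddings and send the prompt to a hosted LLM.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Premium accounts include priority support and a dedicated manager.",
]

def tokens(text: str) -> set[str]:
    """Lowercase, strip punctuation, split into a set of words."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (a stand-in
    for cosine similarity over embeddings in production systems)."""
    q = tokens(question)
    ranked = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The base model is never retrained: relevant text is fetched and
# placed into the prompt at query time.
print(build_prompt("How many days do I have to get a refund?"))
```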

AI Isn’t Magic, It’s Math

The closing message was simple but important:

“AI isn’t magic. It’s mainly math. You don’t need to train an LLM, and when someone says they trained their own model, they probably didn’t.” — Dr. Nadine Kroher

Passion Labs’ research philosophy is rooted in this truth: great AI is built with heart by humans.
