How Machines Learn (and Why Most Don’t)

Webinar delivered 29/10/2025 by Dr. Nadine Kroher, Chief Scientific Officer at Passion Labs
Artificial Intelligence has been around far longer than the current hype might suggest. The idea of “intelligent machines” dates back to the 1950s, but what we now call AI or machine learning (ML) has evolved into something far more precise — and, at the same time, far more misunderstood.
Early “AI systems” weren’t really intelligent at all. They were rule-based programs: long lists of if–then statements that told computers exactly what to do. True machine learning began when we stopped telling computers what to do and instead taught them to learn from data.
A neural network, the building block of modern AI, is essentially a mathematical structure that adjusts itself until it can tell one thing from another: a cat from a dog, or spam from genuine email. Nadine illustrated that machines learn by drawing invisible “lines” that separate categories, adjusting those lines over and over until they get them right. That process, “analyse the error, adjust, repeat”, is the essence of learning.
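The “analyse the error, adjust, repeat” loop can be sketched with a perceptron, the simplest line-drawing learner. This is an illustrative toy, not Nadine’s actual demo: the data points, labels, and learning rate below are all made-up assumptions.

```python
# A minimal sketch of "analyse the error, adjust, repeat":
# a perceptron learning a line that separates two categories.
# The data, labels, and learning rate are illustrative assumptions.

def train_perceptron(points, labels, epochs=20, lr=0.1):
    """Learn weights (w1, w2) and bias b for the line w1*x + w2*y + b = 0."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), target in zip(points, labels):
            prediction = 1 if w1 * x + w2 * y + b > 0 else 0
            error = target - prediction      # analyse the error ...
            w1 += lr * error * x             # ... adjust the line ...
            w2 += lr * error * y
            b += lr * error                  # ... and repeat.
    return w1, w2, b

# Two toy clusters: "cats" near (0, 0), "dogs" near (3, 3).
points = [(0, 0), (0.5, 1), (1, 0.5), (3, 3), (2.5, 3.5), (3.5, 2.5)]
labels = [0, 0, 0, 1, 1, 1]
w1, w2, b = train_perceptron(points, labels)
```

After a few passes over the data, the learned line separates the two clusters; deep networks do the same thing with millions of such adjustable parameters stacked in layers.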
So how do systems like ChatGPT learn? The answer is deceptively simple: next-token prediction.
Instead of learning to classify images or detect fraud, Large Language Models (LLMs) learn to guess the next word in a sentence. Given “Nadine needs a...”, the model might predict “pizza”. It then compares its guess with the actual next word and adjusts its internal parameters to do better next time.
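At a toy scale, next-token prediction looks like the sketch below. A tiny bigram counter stands in for the billions of learned parameters of a real LLM; the two-sentence “corpus” is an invented assumption.

```python
# A toy illustration of next-token prediction: count which word
# follows which, then predict the most frequent follower.
# (A real LLM learns probabilities over tokens, not raw counts.)
from collections import Counter, defaultdict

corpus = "Nadine needs a pizza . Nadine needs a break .".split()

# "Training": tally each word's observed successors.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Guess the most likely next token seen during training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("needs"))  # most common successor of "needs" is "a"
```

The same idea, scaled up to transformer networks and trillions of tokens, is what lets an LLM continue any prompt plausibly.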
This training happens on massive amounts of data. GPT-3, for instance, was trained on about 570 GB of text from the internet. Through this process, LLMs learn syntax, semantics and even bits of factual knowledge. Fine-tuning and reinforcement learning with human feedback (RLHF) then shape them into tools that can follow instructions, answer questions, or write code.
A key takeaway from the session: most organisations don’t need to train their own models. Training a model like GPT-3 from scratch would take roughly 355 years and cost over $4.6 million on a single GPU, not to mention the vast expertise required.
Instead, Nadine outlined three realistic options for companies.
The closing message was simple but important:
“AI isn’t magic. It’s mainly math. You don’t need to train an LLM, and when someone says they trained their own model, they probably didn’t.” — Dr. Nadine Kroher
Passion Labs’ research philosophy is rooted in this truth: great AI is built with heart by humans.