Large Concept Models: Are They the Beginning of the End for Traditional LLMs?

Dr Nadine Kroher
Chief Scientific Officer

In the world of AI, the way we understand language is rapidly evolving.

Traditional large language models (e.g. ChatGPT or Claude) work by slicing text into tokens. These can be words, subwords or even individual characters. Each token is looked up in a fixed vocabulary and mapped to an ID before being passed through the model.
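
As a rough illustration, here is a deliberately simplified sketch of that lookup step. Real tokenizers (BPE, WordPiece and the like) split words into subword pieces and use vocabularies of tens of thousands of entries; the tiny vocabulary below is invented purely for this example:

```python
# Toy sketch of token-level processing: each piece of text is swapped
# for an integer ID via a vocabulary lookup before the model sees it.
# (Invented toy vocabulary; real ones hold tens of thousands of entries.)
vocab = {"a": 0, "bustling": 1, "cafe": 2, "in": 3, "paris": 4, "[UNK]": 5}

def tokenize(text: str) -> list[int]:
    """Map each lowercased word to its ID, falling back to [UNK]."""
    return [vocab.get(word, vocab["[UNK]"]) for word in text.lower().split()]

print(tokenize("a bustling cafe in Paris"))  # [0, 1, 2, 3, 4]
```

The model never sees "cafe" as a word, only ID 2; any meaning has to be reassembled downstream from these fragments.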

It’s a bit like trying to read a novel by looking up every word in a glossary. It works, but it’s not how you or I experience language. Humans don’t think in tokens. We think in concepts. When you hear “a bustling café in Paris,” your mind doesn’t separate that sentence into chunks. It immediately builds a rich, coherent scene: scooters flying past, espresso cups clinking, a waiter in a hurry. That’s the gap between today’s language models and how we process meaning. But a new approach might help close it.

Enter Large Concept Models (LCMs), a fresh architecture proposed by researchers at Meta [1] that shifts away from token-level processing toward something closer to human cognition: concept-level reasoning.

Instead of translating text into tokens, LCMs break it down into semantic concepts; in Meta’s paper, each concept is a whole sentence, encoded as a dense vector (what we call an embedding). These vectors capture meaning in high-dimensional space.
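
To make that concrete, here is a minimal sketch using the open-source sentence-transformers library as a stand-in encoder (the Meta paper itself uses SONAR sentence embeddings; the model name and example sentences below are just for illustration):

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence encoder illustrates the idea; the LCM paper uses SONAR.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A bustling cafe in Paris.",
    "A busy coffee shop in the French capital.",
    "Quarterly revenue fell by 12 percent.",
]

# Each sentence (one "concept") becomes a single dense vector.
embeddings = encoder.encode(sentences)
print(embeddings.shape)  # (3, 384): three concepts, 384 dimensions each

# Nearby vectors mean similar meaning, regardless of the exact wording.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same concept
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: unrelated concept
```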

Once you’ve got concepts, you can reason over them using the same transformer architecture we know from LLMs, but now the model operates at a higher level of abstraction.

It’s not just reading anymore. It’s thinking.
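
Schematically, an LCM is a transformer whose sequence elements are these concept vectors rather than token IDs, trained to predict the embedding of the next sentence instead of the next word. Below is a minimal PyTorch sketch of that idea; the layer sizes are arbitrary, and this is a sketch of the concept, not Meta’s implementation:

```python
import torch
import torch.nn as nn

EMB_DIM = 384  # dimensionality of the sentence/concept embeddings

# A standard transformer stack, but its inputs are concept vectors:
# no token vocabulary, no embedding lookup table.
layer = nn.TransformerEncoderLayer(d_model=EMB_DIM, nhead=8, batch_first=True)
concept_model = nn.TransformerEncoder(layer, num_layers=4)

# A head that regresses the embedding of the next concept.
# (Causal masking and the training loop are omitted for brevity.)
next_concept_head = nn.Linear(EMB_DIM, EMB_DIM)

# One document with three concepts (e.g. the sentence embeddings above).
concepts = torch.randn(1, 3, EMB_DIM)

hidden = concept_model(concepts)                   # (1, 3, EMB_DIM)
predicted_next = next_concept_head(hidden[:, -1])  # (1, EMB_DIM)
```

At generation time, the predicted vector is decoded back into an actual sentence by a separate decoder (SONAR includes one); that step is omitted here.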

This shift opens the door to models that might reason more coherently, generalize better across languages, and feel more intuitive when applied in real-world contexts.

“If you’re following how AI is changing the way we think, learn, and build, this is one to keep an eye on.”
- Tom, CEO of Passion Labs

We’re still early, but the direction is clear: the future of AI will be built not on stringing words together, but on understanding what they mean.

Reference

[1] Barrault, Loïc, et al. "Large Concept Models: Language Modeling in a Sentence Representation Space." arXiv preprint arXiv:2412.08821 (2024).
