
AlphaFold doesn’t predict words, it predicts shapes

AlphaFold is not an LLM

At the intersection of biology and artificial intelligence, it is common to confuse AlphaFold with an LLM.
But it isn’t.


Why the confusion?

It is true that AlphaFold uses transformers, the same architecture behind LLMs. But sharing components does not mean sharing purpose.
Saying AlphaFold is an LLM is like saying a satellite and a microwave are the same because both use electromagnetic waves.


What does AlphaFold actually do?

Imagine you have a long necklace of beads. At first glance, you only see the stretched chain; you don't know how it will fold or what shape it will take when, say, you drop it on a table.

In biology, the same thing happens: a protein starts as a linear sequence of amino acids, but its true function depends on the three-dimensional shape it folds into. That shape can turn it into an enzyme that digests food, hemoglobin that carries oxygen, or an antibody that protects you from viruses.

What AlphaFold does is predict how that necklace will fold.

In short: AlphaFold does not write sentences, it solves the three-dimensional puzzle of biology.
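The shape of the problem can be sketched in a few lines of code. This is purely illustrative: the sequence is a real amino-acid alphabet, but the coordinates here are random placeholders standing in for what a model like AlphaFold would actually predict, and nothing below reflects AlphaFold's real pipeline (MSAs, Evoformer, structure module).

```python
import random

# Input: a protein as a linear sequence of amino acids (one letter per
# residue) -- the stretched necklace of beads.
sequence = "MKTAYIAKQR"

# Output: one predicted (x, y, z) position per residue -- the folded shape.
# Here the coordinates are faked with random numbers; a real structure
# predictor would compute them from the sequence.
random.seed(0)
predicted_structure = [
    (residue, (random.uniform(-10, 10),
               random.uniform(-10, 10),
               random.uniform(-10, 10)))
    for residue in sequence
]

# One 3-D point per bead in the chain: sequence in, geometry out.
assert len(predicted_structure) == len(sequence)
```

The essential point is the type signature: a 1-D string goes in, a list of 3-D coordinates comes out. No sentences anywhere.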


What AlphaFold really does

Everything AlphaFold outputs — atomic coordinates, inter-residue distances, confidence scores — is molecular geometry, not language.


How it differs from an LLM

Even so-called protein language models (like ESMFold, ProGen, or Evo2) are not “linguistic” in the human sense. As Offert, Kim & Cai (2024) note, transformers operate on artificially tokenized sequences (amino acids, subwords, pixels…), not on natural human language.
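The tokenization point above can be made concrete. The sketch below uses toy vocabularies (they do not belong to any real model): a protein "language model" tokenizes one amino acid per token, while an LLM tokenizes text into subword pieces. The transformer downstream only ever sees integer IDs.

```python
# A protein model's vocabulary: the 20 standard amino acids, one token each.
amino_acids = "ACDEFGHIKLMNPQRSTVWY"
protein_vocab = {aa: i for i, aa in enumerate(amino_acids)}

def tokenize_protein(seq):
    """One token per residue -- no words, no grammar."""
    return [protein_vocab[aa] for aa in seq]

# An LLM's vocabulary: subword pieces (a tiny hypothetical vocabulary).
subword_vocab = {"Alpha": 0, "Fold": 1, " predicts": 2, " shapes": 3}

def tokenize_text(text, vocab):
    """Greedy longest-match tokenization, purely for illustration."""
    tokens, i = [], 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(tokenize_protein("MKTA"))   # [10, 8, 16, 0]
print(tokenize_text("AlphaFold predicts shapes", subword_vocab))  # [0, 1, 2, 3]
```

Both pipelines end at the same place, a list of integers, which is exactly why sharing the transformer architecture says nothing about sharing purpose.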


References

