A research project
Today's biggest AI models live on enormous computers. We're studying their structure to find which parts can move to ordinary hardware — without changing what the model can do.
A modern AI model is one tightly connected program: to use it, you load every part onto the same machine. That's why the most capable models live in data centers. We've been asking an obvious question: does it have to be this way?
What we've found is that the answer is "not entirely". A trained model has natural seams where its work changes character, and the pieces on either side of a seam can live on different computers and still produce the same answers as the original. Inside each piece, some parts of the calculation turn out to have a much simpler equivalent — a quick lookup that does the same job — while other parts genuinely need their full complexity. Mapping which is which, layer by layer, is the central thing we do here.
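Both ideas can be sketched in a few lines of Python. This is a toy illustration under our own assumptions, not the project's actual models or methods: a tiny two-stage pipeline with a "seam" between the stages (so each stage could run on a different machine), where the first stage is then replaced by a precomputed lookup table that returns exactly the same answers.

```python
import numpy as np

# Toy two-stage "model". The seam sits between stage_a and stage_b:
# stage_a could run on one machine, stage_b on another, with only the
# intermediate vector h crossing the seam. The split changes nothing
# about the answers.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))  # hypothetical weights for stage A
W2 = rng.standard_normal((8, 3))  # hypothetical weights for stage B

def stage_a(x):
    # First piece: a linear map followed by a simple nonlinearity.
    return np.maximum(x @ W1, 0.0)

def stage_b(h):
    # Second piece: consumes stage A's output, produces the final answer.
    return h @ W2

def full_model(x):
    # The original, unsplit pipeline.
    return stage_b(stage_a(x))

# Now the "simpler equivalent" idea: if stage A only ever sees inputs
# drawn from a small fixed set (here, 5 possible vectors), its whole
# computation can be replaced by a precomputed lookup table.
vocab = rng.standard_normal((5, 4))
table = {i: stage_a(vocab[i]) for i in range(len(vocab))}

# The lookup does the same job: answers match the original exactly.
for i in range(len(vocab)):
    assert np.allclose(stage_b(table[i]), full_model(vocab[i]))
```

The table trick only works because stage A's inputs come from a small, known set; a stage whose inputs vary freely has no such shortcut and genuinely needs its full computation, which is the distinction the project maps layer by layer.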