SharedLLM research
A public research repository for SharedLLM — community-owned distributed LLM inference and a federation of centrally-trained specialists. Each entry is a falsifiable bet with explicit strict / lenient / catastrophic criteria, written before the experiment runs.
We publish negative controls and honest retractions alongside wins. The point is not to look good; it is to know what survives scrutiny.
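To make the bet structure concrete, here is a minimal sketch of what one entry looks like under these rules. The field names and the example values are illustrative assumptions, not the repository's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Bet:
    """One falsifiable bet, committed before the experiment runs.

    Field names here are illustrative, not the repository's schema.
    """
    claim: str          # the hypothesis, stated so it can fail
    strict: str         # outcome that counts as a clean win
    lenient: str        # weaker outcome that keeps the bet alive
    catastrophic: str   # outcome that retires the bet outright

# hypothetical example, loosely based on the adapter result below
example = Bet(
    claim="A 9 KB norm-only adapter matches LoRA-r4 on per-user eval loss",
    strict="wins on every seed x eval-text pair",
    lenient="wins on a majority of pairs",
    catastrophic="loses to the unadapted base model",
)
```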
Start Here
What SharedLLM is, why federation, and how to read the bets harness as research methodology rather than as marketing.
Foundations
Eight bets that wired the federation primitives end-to-end and validated the basic protocol surface before any speculative work.
Contrarian Bets
Ten bets that ran against received wisdom: per-person specialists, pay-with-bandwidth, gossip directories, glass-box logging.
Per-User Adapters
The federation primitive that emerged as load-bearing. A norm-only fine-tune (9 KB) beats LoRA-r4 (96 KB) and a full fine-tune (155 MB). Replicated 15/15 across seed × eval-text pairs.
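A minimal sketch of the norm-only recipe, assuming a PyTorch base model: freeze everything except normalization scales and biases, so the returned tensors are the entire per-user adapter. At fp16 that is 2 bytes per parameter, so 9 KB is roughly 4,600 norm parameters.

```python
import torch.nn as nn

def norm_only_parameters(model: nn.Module) -> list:
    """Freeze all weights except normalization scales and biases.

    The returned parameters are the whole per-user adapter. Which norm
    types the base model uses is an assumption; extend the isinstance
    check (e.g. RMSNorm) to match the actual architecture.
    """
    trainable = []
    for module in model.modules():
        is_norm = isinstance(module, nn.LayerNorm)
        for p in module.parameters(recurse=False):
            p.requires_grad = is_norm
            if is_norm:
                trainable.append(p)
    return trainable

# adapter size at fp16: 2 bytes per parameter
# adapter_bytes = 2 * sum(p.numel() for p in norm_only_parameters(model))
```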
Federated Training
DiLoCo K-step async training, Byzantine-robust aggregation, throttle-invariance: the primitives that let consumer hardware contribute.
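For readers new to DiLoCo, a single-worker sketch of the K-step inner/outer structure, with hypothetical names. The real protocol averages pseudo-gradients from many asynchronous workers and aggregates them robustly; this shows only the core loop:

```python
import torch

def diloco_round(model, inner_opt, outer_opt, data_iter, K):
    """One DiLoCo round for a single worker (illustrative names).

    Run K local steps, then treat the total parameter movement as a
    pseudo-gradient for the outer optimizer.
    """
    snapshot = [p.detach().clone() for p in model.parameters()]
    for _ in range(K):                       # K inner steps, e.g. AdamW
        x, y = next(data_iter)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()
    for p, p0 in zip(model.parameters(), snapshot):
        p.grad = p0 - p.detach()             # pseudo-gradient for this round
        p.data.copy_(p0)                     # rewind to the round's start
    outer_opt.step()                         # e.g. SGD with Nesterov momentum
```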
Production Wire Format
Composing the validated primitives: a 7 MB ternary base plus a 9 KB norm-only adapter per user. What composes, what doesn't.
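A load-time sketch of that composition, with illustrative names and packing (not the actual wire format): dequantize the shared ternary base, then let the user's norm tensors override the base norms.

```python
import numpy as np

def compose_weights(ternary_codes, scales, norm_adapter):
    """Compose a shared ternary base with one user's norm-only adapter.

    ternary_codes: name -> int8 array with values in {-1, 0, +1}
    scales:        name -> per-tensor fp16 scale
    norm_adapter:  name -> fp16 norm scales/biases (~9 KB per user)

    Names and packing are illustrative, not the actual wire format.
    """
    weights = {
        name: codes.astype(np.float16) * scales[name]
        for name, codes in ternary_codes.items()
    }
    weights.update(norm_adapter)  # per-user tensors override base norms
    return weights
```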
Validity Controls
The negative controls and disambiguators that distinguish real signal from regularization noise. The most important section in this whole document.
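A toy version of one such disambiguator, with made-up thresholds: train a control adapter on content-free text (e.g. shuffled tokens) and see how much of the real adapter's gain it captures. The actual controls and thresholds live in the bets themselves.

```python
def classify_gain(base_loss, adapted_loss, control_loss, ratio=0.5):
    """Toy decision rule for one negative control (illustrative numbers).

    If an adapter trained on content-free text captures most of the real
    adapter's gain, the "win" is generic regularization, not the user's
    actual data.
    """
    gain = base_loss - adapted_loss
    control_gain = base_loss - control_loss
    if gain <= 0:
        return "no effect"
    if control_gain >= ratio * gain:
        return "regularization noise"
    return "real signal"
```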
Honest Falsifications
Bets that didn't survive. We retire them publicly because retraction is what makes the wins credible.
Big Bets — Operating Layer
Five bets that target the niche the open-weights ecosystem has not filled: federation, attribution, royalties, sovereignty. Open weights are commoditised; the operating layer above weights is where the genuinely uncovered work lives.
Open Questions
What we haven't answered yet. The bets harness can't reach these on its own; they need real-WAN deployment, larger model scales, or institutional partnerships.