Get involved

SharedLLM is open research and open code. The bets harness, the coordinator, the node daemon, and the wire-format work all live in one repository. You can run a node, train a specialist, propose a new bet, or submit a hardened replacement for an existing one. Below are the four most useful ways to contribute.

Run a node

A node registers with a coordinator and either serves inference (primary) or accepts offloaded layers (worker). The minimum setup on a laptop:

# install
pip install -e .

# coordinator (one machine on the LAN)
sharedllm coordinator --host 0.0.0.0 --port 8420

# primary on the machine with the model
sharedllm node --role primary \
  --model <path-to-gguf> \
  --coordinator-url http://<coord-ip>:8420

# worker on any other machine
sharedllm node --role worker \
  --coordinator-url http://<coord-ip>:8420 \
  --rpc-port 50052 --lan-addr <your-ip>:50052

Models need hidden dimensions that are a multiple of 512 for RPC tensor offload — TinyLlama, Llama 3, and Phi-3 work; SmolLM2-360M does not. The multi-endpoint transport (RFC-0001) lets nodes advertise LAN, WAN, and relay candidates separately so primaries pick the best path.
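As a quick pre-flight check, the alignment rule above can be expressed in a few lines of Python. This is a sketch: `offload_compatible` is a hypothetical helper, not part of the sharedllm CLI, and the example hidden dimensions are taken from the public model configs of TinyLlama-1.1B (2048) and SmolLM2-360M (960).

```python
def offload_compatible(hidden_dim: int, alignment: int = 512) -> bool:
    """Return True if a model's hidden dimension meets the RPC
    tensor-offload alignment requirement described above."""
    return hidden_dim % alignment == 0

# Hidden dims from public model configs (assumptions, verify for your model):
print(offload_compatible(2048))  # TinyLlama-1.1B -> True
print(offload_compatible(960))   # SmolLM2-360M   -> False
```

Checking this before registering a worker saves a failed offload round-trip later.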

Train a specialist

A specialist is a centrally-trained model that joins the federation by registering an RFC-0006 manifest. The minimum loop:

  1. Train on your domain corpus with whatever framework you use.
  2. Export the weights as GGUF (or as the FractalMoE-MoE format used by the bets harness).
  3. Write a manifest with model id, vocab size, hidden dim, layer count, and quantisation tag.
  4. Submit the manifest to the directory: a coordinator entry plus a gossip announcement.
  5. Add a per-user adapter slot. The federation default per-user adapter is 9 KB norm-only — see Bet 49.
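The manifest from the steps above can be sketched as a small JSON document. All field names and values here are illustrative assumptions; RFC-0006 defines the authoritative schema, and the model id is a made-up example.

```python
import json

# Illustrative RFC-0006-style manifest -- field names are assumptions.
manifest = {
    "model_id": "example-org/legal-specialist-1b",   # hypothetical id
    "vocab_size": 32000,
    "hidden_dim": 2048,
    "layer_count": 22,
    "quantization": "Q4_K_M",
    # Federation default per-user adapter slot (9 KB norm-only, see Bet 49):
    "adapter_slot": {"type": "norm-only", "size_bytes": 9 * 1024},
}

print(json.dumps(manifest, indent=2))
```

The same document would back both the coordinator directory entry and the gossip announcement in step 4.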

Propose a bet

New research questions are welcome. The format is:

  • One file in experiments/bets/NN_short_name.py where NN is the next free number.
  • Module docstring with the hypothesis, strict / lenient / catastrophic criteria, and the run command. Pre-register the criteria — moving the goalposts after seeing the result is visible in git history.
  • Reuse experiments/bets/_common.py for registry setup, specialist loading, and result writing.
  • Write the result file to experiments/bets/results/NN_*.json, then run 00_rollup.py to regenerate SUMMARY.md.
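A minimal bet-file skeleton, following the format above. The thresholds and the `verdict` helper are hypothetical placeholders; the real criteria are pre-registered per bet, and the shared registry/loading/result helpers live in `experiments/bets/_common.py`.

```python
"""Hypothesis: <one falsifiable sentence>.

Criteria (pre-registered before running):
  strict:       metric > 2x baseline on every seed
  lenient:      metric > baseline on the median seed
  catastrophic: metric at or below baseline

Run: python experiments/bets/NN_short_name.py
"""

def verdict(metric: float, baseline: float) -> str:
    """Map a measured metric to the pre-registered outcome tiers.
    Thresholds here are placeholders, not the harness's real logic."""
    if metric > 2 * baseline:
        return "strict"
    if metric > baseline:
        return "lenient"
    return "catastrophic"

print(verdict(0.9, 0.4))  # clears 2x baseline -> strict
```

The point of writing the criteria into the docstring first is exactly the git-visibility argument above: the thresholds are committed before the result exists.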

Submit a hardened replacement

The most valuable contributions are the unglamorous ones — replacing a flimsy bet with a stricter version. If a bet relies on a single seed, runs without a negative control, or reads an overfit final-step number as a victory, write the disambiguating follow-up. Recent examples:

  • Bet 60 ran the negative control (random tokens) that should have run alongside Bet 37 from the start.
  • Bet 61 built the personalization-vs-regularization confusion matrix: each user's own adapter wins by a 5–29% margin.
  • Bet 62 retracted Bet 50's headline: K=100 didn't outperform K=1; the K=1 run was overfitting, and the original result read the overfit final-step number.

Submitting a falsification of an existing claim is treated as a success in this harness, not a defeat. The retraction itself is evidence the methodology is real.

Where to find us

  • Code: github.com/anthropics (repository link will be updated when the public repo is ready)
  • Bets index: experiments/bets/SUMMARY.md
  • Inter-machine messaging: agent-relay (Rust CLI) for contributor coordination across machines.
  • Open questions: real-WAN federation throughput, 1B+ scale personalization, on-device phone validation, Kerala IT@School pilot deployment.