An open specification for structured multi-model AI deliberation. Byzantine Fault Tolerance meets epistemic architecture — measuring what happens when AI models disagree under human moderation.
Emergent Collaborative Deliberation in Multi-Model AI Systems: A BFT-Derived Protocol for Epistemic Synthesis
Large language models trained on substantially overlapping corpora produce convergent epistemic outputs — similar answers to the same questions, reinforced through shared training data and alignment procedures. This paper introduces the Consilium Protocol, a Byzantine Fault Tolerance (BFT)-derived architecture for structured multi-model deliberation under human moderation. Rather than treating model agreement as validation, the protocol treats disagreement as signal and measures convergence dynamics across 1,478 deliberation sessions spanning 32 topics and 10 epistemic categories. We find that alignment procedures (RLHF) create measurable domain-specific suppression effects — a 12.3 percentage point gap between historically grounded and normatively charged claims — and that composable cognitive personas predict deliberation outcomes more reliably than model selection alone. The system achieves reproducible results (±2.2% variance) at commodity cost, with 82% of analytical sessions running on free-tier models.
RLHF creates epistemic blind spots. A 12.3 percentage point suppression gap between historical and normative claims — alignment procedures systematically reduce adversarial engagement on culturally sensitive topics.
The persona matters more than the model. Composable cognitive personas — not model selection — predict deliberation quality. 82% of sessions ran on free-tier models with no degradation in epistemic rigour.
Convergence is a three-variable function. Deliberation outcomes are predicted by evidence availability, cultural charge, and RLHF pressure — not by model capability or panel size.
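A minimal sketch of what a three-variable convergence predictor could look like. The logistic form, the weights, and the normalization are illustrative assumptions for this README, not the study's fitted model; only the three input variables come from the finding above.

```python
from math import exp

def predicted_convergence(evidence: float, cultural_charge: float,
                          rlhf_pressure: float) -> float:
    """Toy logistic predictor of convergence probability.

    All three inputs are assumed normalized to [0, 1]. The weights are
    illustrative, not fitted coefficients: evidence availability pushes
    toward convergence, while cultural charge and RLHF pressure push
    against genuine convergence.
    """
    z = 3.0 * evidence - 2.0 * cultural_charge - 1.5 * rlhf_pressure
    return 1.0 / (1.0 + exp(-z))
```

Under these assumed weights, a well-evidenced, low-charge claim scores far higher than an evidence-poor, culturally charged one, which is the qualitative shape the finding describes. Note that model capability and panel size do not appear as inputs at all.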
Truth arbitration is structurally impossible. Data is live. Training is static. Any system claiming to determine truth operates on a decaying snapshot. The honest product is a tested claim chain with explicit justification status.
If models converge, you lose the signal. The Consilium Protocol is designed so that the most valuable output is not agreement — it is the structured map of where models disagree, and why.
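One way to read "disagreement as signal" concretely: instead of collapsing a panel to its majority answer, keep the full position map per claim. A hypothetical sketch, with the function name, stance labels, and the simple 1-minus-majority-share score all chosen here for illustration:

```python
from collections import Counter

def disagreement_map(positions: dict[str, str]) -> dict:
    """Summarize a panel's stances on one claim without discarding dissent.

    `positions` maps model name -> stance (e.g. "support", "oppose",
    "abstain"). Returns the full stance distribution, the dissenting
    models, and a disagreement score: 0.0 for a unanimous panel,
    approaching the maximum for an even split.
    """
    counts = Counter(positions.values())
    majority_stance, majority_count = counts.most_common(1)[0]
    return {
        "distribution": dict(counts),
        "disagreement": round(1.0 - majority_count / len(positions), 3),
        "dissenters": sorted(m for m, s in positions.items()
                             if s != majority_stance),
    }
```

The valuable output is the `dissenters` list and the distribution, not the majority stance: a unanimous panel scores 0.0 and, per the claim above, carries the least signal.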