Secure aggregation, per-participant differential privacy, Byzantine tolerance, and async round management, all delivered as a trainer that your ML team doesn't have to babysit.
Secure aggregation: the server sees only the sum of client updates. Each individual contribution is masked so that, on its own, it is cryptographically indistinguishable from random noise; the masks cancel only in the aggregate.
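A minimal sketch of how pairwise additive masking (in the style of Bonawitz et al.'s secure aggregation protocol) achieves this; every name below is illustrative, not this library's API:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 4, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Each client pair (i, j), i < j, agrees on a shared random mask m_ij
# (derived from a key exchange in the real protocol; sampled directly here).
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    # Client i adds m_ij for every j > i and subtracts m_ji for every j < i,
    # so each mask appears exactly once with each sign across all clients.
    out = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out  # on its own, this reveals nothing about updates[i]

total = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(total, sum(updates))  # masks cancel; only the sum survives
```

Handling dropouts (secret-sharing the masks so the sum still recovers when clients disappear mid-round) is the hard part of the real protocol and is omitted from this sketch.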
Per-participant differential privacy: track privacy budget spend across rounds and halt training when the budget is exhausted. Per-update clipping norms and noise multipliers are configurable.
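A hedged sketch of the clip-then-noise step and the budget check. The linear epsilon tally is only a stand-in for a real moments/RDP accountant, and every name and default here is hypothetical:

```python
import numpy as np

class DPConfig:
    clip_norm = 1.0          # max L2 norm of any single participant's update
    noise_multiplier = 1.1   # noise stddev = noise_multiplier * clip_norm
    epsilon_budget = 8.0     # total privacy budget; halt when exhausted
    epsilon_per_round = 0.5  # placeholder per-round cost (real accounting is tighter)

def privatize(update, cfg, rng):
    # Clip to bound each participant's influence, then add Gaussian noise.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, cfg.clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=cfg.noise_multiplier * cfg.clip_norm,
                       size=update.shape)
    return clipped + noise

cfg, rng, spent = DPConfig(), np.random.default_rng(0), 0.0
while spent + cfg.epsilon_per_round <= cfg.epsilon_budget:
    # ... collect updates, privatize() each, aggregate, take a model step ...
    spent += cfg.epsilon_per_round  # charge the round; loop exits once spent
```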
Byzantine tolerance: coordinate-wise trimmed mean, Krum, and RFA aggregators are available. Convergence is preserved while the fraction of adversarial participants stays below the chosen aggregator's breakdown point.
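For concreteness, a minimal coordinate-wise trimmed mean, the simplest of the three; the function name and default trim fraction are illustrative:

```python
import numpy as np

def trimmed_mean(updates, trim_frac=0.1):
    """Average each coordinate after dropping its k smallest and k largest values."""
    stacked = np.stack(updates)             # shape: (n_clients, dim)
    n = stacked.shape[0]
    k = int(n * trim_frac)                  # clients trimmed from each tail
    if 2 * k >= n:
        raise ValueError("trim_frac too large for the number of clients")
    sorted_vals = np.sort(stacked, axis=0)  # sort every coordinate independently
    return sorted_vals[k:n - k].mean(axis=0)
```

Trimming k values per tail tolerates up to k poisoned updates per coordinate; Krum and RFA trade more computation for distance-based guarantees over whole update vectors.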
Async round management: our staleness-aware aggregator down-weights late updates and converges under arbitrary site availability. Reference: Chen et al., ICML 2025.
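A sketch of one common staleness-aware rule, polynomial down-weighting of late updates (FedAsync-style); the rule actually used by the trainer, per Chen et al., may differ:

```python
import numpy as np

def apply_async_update(global_model, client_delta, client_round, server_round,
                       base_lr=1.0, alpha=0.5):
    # Staleness = rounds elapsed since the client pulled the model it trained on.
    staleness = server_round - client_round
    # Polynomial decay: fresh updates get full weight, stale ones are damped.
    weight = base_lr * (1.0 + staleness) ** -alpha
    return global_model + weight * client_delta

# e.g. a 4-round-stale update is applied at (1 + 4) ** -0.5, about 0.45x weight
```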