Merkavian builds cryptographically verified AI infrastructure. We make neural network training trustless through zero-knowledge proofs and decentralized federated learning.
Two systems at the frontier of AI and decentralized infrastructure.
A proof-of-learning blockchain protocol that uses zero-knowledge proofs to verify gradient updates from distributed miners — making AI model training cryptographically trustless.
Explore protocol →

A self-evolving algorithmic trading system with multi-timeframe signal analysis, reinforcement learning, and an evolutionary engine that continuously generates and validates new strategies.
View system →

Cryptographic verification of neural network computations without revealing training data or model weights. Built on Halo2 circuits via EZKL.
Distributed model training across independent miners using Federated Averaging. No raw data sharing required — privacy preserved by design.
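Federated Averaging, the aggregation rule named above, can be sketched in a few lines: each miner trains locally and only its updated weights leave the machine, combined in proportion to how much data each miner trained on. The names below (`fed_avg`, the miner tuples) are illustrative, not the protocol's actual API.

```python
# Minimal Federated Averaging sketch: combine per-miner weight vectors,
# weighted by each miner's local sample count. Raw data never moves.

def fed_avg(updates):
    """updates: list of (weights, n_samples) from independent miners."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [
        sum(w[i] * n for w, n in updates) / total
        for i in range(dim)
    ]

# Three miners with different data volumes contribute updates.
miners = [
    ([1.0, 2.0], 100),
    ([3.0, 4.0], 100),
    ([5.0, 6.0], 200),
]
averaged = fed_avg(miners)  # sample-weighted average of local weights
```

The miner with 200 samples pulls the average twice as hard as the others, which is what keeps the global model faithful to the underlying data distribution without ever pooling that data.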
Smart contracts deployed on Base Sepolia handle task posting, gradient submission, ZK proof verification, and token reward distribution.
50+ parallel strategy experiments with automated evaluation, promotion, and mutation — continuously searching for better trading approaches.
MTF trend confirmation, VWAP bands, funding rate analysis, support/resistance detection, and pattern recognition — all validated against 5.4M candles.
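Of the signals listed, VWAP bands are the most self-contained to illustrate: a volume-weighted average price with bands a multiple of the volume-weighted deviation on either side. The band width `k=2.0` and the exact construction here are assumptions for illustration; the live system's parameters are not public.

```python
# Illustrative VWAP-band computation over a window of candles.
import math

def vwap_bands(candles, k=2.0):
    """candles: list of (typical_price, volume). Returns (vwap, lower, upper)."""
    pv = sum(p * v for p, v in candles)
    vol = sum(v for _, v in candles)
    vwap = pv / vol
    # volume-weighted variance of price around the VWAP
    var = sum(v * (p - vwap) ** 2 for p, v in candles) / vol
    sd = math.sqrt(var)
    return vwap, vwap - k * sd, vwap + k * sd

candles = [(100.0, 10), (102.0, 20), (101.0, 10)]
vwap, lower, upper = vwap_bands(candles)
```

Price trading outside the bands is a common mean-reversion or breakout cue, which is the kind of condition a signal layer like this would gate on.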
Kelly Criterion position sizing with six dynamic multipliers including confidence, volatility, pair quality, and consecutive performance tracking.
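The sizing rule above can be sketched as the classic Kelly fraction scaled down by a product of dynamic multipliers. The specific multiplier values, the hard cap, and the function names here are assumptions for illustration, not the system's actual factors.

```python
# Kelly-based sizing scaled by dynamic multipliers, with a hard cap.

def kelly_fraction(win_rate, win_loss_ratio):
    """Classic Kelly: f* = W - (1 - W) / R."""
    return win_rate - (1 - win_rate) / win_loss_ratio

def position_size(equity, win_rate, win_loss_ratio, multipliers, cap=0.10):
    f = max(kelly_fraction(win_rate, win_loss_ratio), 0.0)
    for m in multipliers:  # e.g. confidence, volatility, pair quality...
        f *= m
    return equity * min(f, cap)  # cap keeps sizing conservative

size = position_size(
    equity=10_000,
    win_rate=0.55,
    win_loss_ratio=1.5,
    multipliers=[0.9, 0.8, 1.1, 1.0],  # illustrative dynamic factors
)
```

With a 55% win rate and 1.5 win/loss ratio, raw Kelly suggests risking 25% of equity; the multipliers and cap pull that down to a far more survivable fraction, which is the point of layering them on.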
Interested in our research or technology? We'd like to hear from you.
jackson@merkavian.com

A proof-of-learning protocol demonstrating that AI model training can be made cryptographically verifiable on a live blockchain — without a trusted aggregator.
Leading AI companies face a fundamental economic crisis. In 2024, OpenAI spent $3B on model training against $3.7B in revenue. Anthropic spent $1.5B against $2.55B. Profitability under centralized training economics is nearly impossible.
Blockchain technology offers a different model — AI developers could issue tokens tied to model development, allowing funding to come directly from the market rather than burning capital. Miners earn tokens by contributing verified gradient updates. Token holders benefit as the model improves.
The critical unsolved problem: without a trusted aggregator, how do you verify that miners actually trained on real data and achieved the score they claimed?
Zero-knowledge proofs make fraud cryptographically impossible. Each gradient submission is paired with a ZK proof that confirms training occurred correctly — without revealing any training data or model weights.
PoLChain demonstrates this system end-to-end on a live blockchain. Four automated miners compete each block, submitting gradients alongside Halo2 ZK proofs generated via EZKL. The best verified gradient wins $POL tokens.
Five components work together in each block cycle:
The central contribution is applying ZK proofs to neural network gradient verification. This requires converting floating-point ML operations into integer arithmetic over a finite field; the quantization error this introduces is what we call the gradient-gap problem. For simpler models like MNIST, the error is small enough to be inconsequential. Closing the gradient gap at scale is the prerequisite for applying this to production models.
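A toy version of the quantization step makes the gradient gap concrete: float gradients are mapped to fixed-point integers modulo a prime before entering the circuit, and the round trip loses a bounded amount of precision. The modulus and scale below are stand-ins, not EZKL's actual parameters.

```python
# Toy fixed-point quantization over a finite field, illustrating the
# "gradient gap" between float gradients and their in-circuit integers.

FIELD = 2**61 - 1   # stand-in prime modulus (not EZKL's field)
SCALE = 2**16       # fixed-point scaling factor (illustrative)

def quantize(g):
    return round(g * SCALE) % FIELD

def dequantize(q):
    # values above FIELD // 2 represent negatives
    v = q if q <= FIELD // 2 else q - FIELD
    return v / SCALE

grads = [0.0314159, -0.0027182, 0.5]
roundtrip = [dequantize(quantize(g)) for g in grads]
errors = [abs(a - b) for a, b in zip(grads, roundtrip)]
# each per-value error is bounded by 1 / (2 * SCALE), about 7.6e-6 here
```

At MNIST scale this per-value error is negligible, but across the billions of parameters and training steps of a production model the accumulated gap is exactly what remains to be solved.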
A self-evolving algorithmic trading system that uses multi-signal analysis, reinforcement learning, and an evolutionary engine to continuously improve its own strategies.
The system runs continuously on a dedicated server, monitoring 15 cryptocurrency pairs on Kraken. It evaluates every potential trade through a seven-layer signal stack before execution and runs 50 parallel experimental strategies at once, killing underperformers and promoting winners.
The architecture is designed around one principle: no human intervention required. The bot learns from every trade, adapts its parameters automatically, and improves its own signal weights based on historical performance.
Every trade consideration passes through seven independent signal layers before execution:
50 parallel sandbox experiments run at all times, each testing a mutation of the champion strategy. Every six hours the evaluator scores all experiments and kills underperformers. The experimenter then spawns new mutations weighted by lessons learned from historical training.
Historical training runs against 5.4M candles across 7 market regimes — bull, bear, sideways, high volatility, deep bear, crypto winter, and the last 30 days — to validate strategy robustness before promoting to champion.
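The evaluate/kill/mutate cycle described above can be sketched as a simple evolutionary loop. The scoring function, survivor count, and mutation rule here are placeholders; the real evaluator scores against the 5.4M-candle history across seven regimes, and mutations are weighted by learned lessons rather than uniform jitter.

```python
# One evaluation cycle: score experiments, cull losers, respawn mutants,
# and promote the best experiment to champion if it outperforms.
import random

def mutate(params):
    # jitter one parameter: a stand-in for lesson-weighted mutation
    key = random.choice(list(params))
    return {**params, key: params[key] * random.uniform(0.9, 1.1)}

def evolve(champion, population, score, n_keep=10, n_pop=50):
    ranked = sorted(population, key=score, reverse=True)
    survivors = ranked[:n_keep]                    # promote winners
    champion = max([champion] + survivors, key=score)
    while len(survivors) < n_pop:                  # refill to full size
        parent = random.choice(survivors)
        survivors.append(mutate(parent))
    return champion, survivors

champ = {"stop_mult": 1.0}
pop = [{"stop_mult": i / 10} for i in range(20)]
champ, pop = evolve(champ, pop, score=lambda p: p["stop_mult"])
```

The key property is that the champion can only be replaced by an experiment that has already survived scoring, so live capital never trades an unvalidated mutation.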
The system learns from every trade through a closed feedback loop. Trade forensics record 15+ signal fields per closed trade. The self-tuner analyzes effectiveness after 20 trades and adjusts signal weights. The experimenter reads signal performance and boosts mutations that match what's working in live conditions.
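The self-tuning step can be sketched as nudging each signal's weight toward its measured hit rate over a batch of closed trades. The update rule and learning rate below are assumptions for illustration; only the batch-of-trades trigger comes from the description above.

```python
# After a batch of closed trades, raise weights of signals that fired on
# winners and lower weights of signals that fired on losers.

def retune(weights, trade_log, lr=0.1):
    """trade_log: list of dicts {signal_name: fired, ..., 'pnl': float}."""
    new = dict(weights)
    for sig in weights:
        fired = [t for t in trade_log if t.get(sig)]
        if not fired:
            continue  # no evidence this batch, leave weight alone
        hit_rate = sum(1 for t in fired if t["pnl"] > 0) / len(fired)
        # >50% winners nudges the weight up, <50% nudges it down
        new[sig] += lr * (2 * hit_rate - 1)
    return new

weights = {"mtf_trend": 1.0, "vwap": 1.0}
log = [
    {"mtf_trend": True,  "vwap": False, "pnl": +40},
    {"mtf_trend": True,  "vwap": True,  "pnl": -15},
    {"mtf_trend": False, "vwap": True,  "pnl": -10},
    {"mtf_trend": True,  "vwap": False, "pnl": +25},
]
tuned = retune(weights, log)
```

Feeding these tuned weights back into both live scoring and the mutation sampler is what closes the loop: the experimenter preferentially explores around whatever the live market is currently rewarding.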
Merkavian was founded on the belief that the next breakthrough in AI economics is decentralization. The current model — where a handful of companies bear the entire cost of model training — is fundamentally unsustainable.
We're building the infrastructure layer that makes distributed AI training cryptographically verifiable. Our research prototype, PoLChain, demonstrates that zero-knowledge proofs can solve the honesty problem in federated learning — making it possible for anyone to contribute compute and earn tokens without requiring a trusted central aggregator.
Alongside our research, we develop production systems at the frontier of applied AI — including an autonomous trading system with a self-evolving signal stack validated against 5.4 million historical candles across four years of market data.
Merkavian is headquartered in Berkeley, California, and founded by a student researcher at UC Berkeley working at the intersection of cryptography, machine learning, and decentralized systems.