train.
submit one stochastic gradient step to the model. the contract checks that your update actually reduces validation loss before committing the new weights: a successful call earns you ETH from the trainer pool, scaled by how much the loss dropped, while an update that fails to improve simply reverts and costs you only gas. at most one training step lands per block, and the first valid call wins.
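a minimal sketch of that settlement rule in python, simulating the contract's behavior off chain. the names (`REWARD_PER_LOSS_UNIT`, `TrainingRejected`, `settle_training_step`) and the reward rate are illustrative, not the contract's actual identifiers.

```python
REWARD_PER_LOSS_UNIT = 0.01  # hypothetical ETH paid per unit of loss reduction

class TrainingRejected(Exception):
    """stand-in for the contract reverting the call."""

def settle_training_step(loss_before: float, loss_after: float) -> float:
    # the update must strictly reduce validation loss to be committed
    if loss_after >= loss_before:
        raise TrainingRejected("no improvement: weights roll back, caller pays gas")
    # payout scales linearly with how much the loss dropped
    return (loss_before - loss_after) * REWARD_PER_LOSS_UNIT
```

the strict inequality matters: an update that leaves the loss exactly unchanged still reverts, so there is no way to farm rewards with no-op steps.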
submit a training step
epoch size is how many recent observations get sampled for the gradient. larger values produce a smoother update at higher gas, smaller values give a noisier step at lower gas. minimum is 8.
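the sampling rule described above, as a short python sketch. the function name and the shape of the feed are assumptions; only the "most recent observations" behavior and the minimum of 8 come from the docs.

```python
MIN_EPOCH_SIZE = 8  # per the docs: smallest allowed epoch size

def sample_batch(feed: list, epoch_size: int) -> list:
    # epoch size = how many of the most recent observations feed the gradient;
    # a larger slice averages out noise (smoother step) but costs more gas
    if epoch_size < MIN_EPOCH_SIZE:
        raise ValueError(f"epoch size must be at least {MIN_EPOCH_SIZE}")
    return feed[-epoch_size:]
```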
recent training feed
why train()?
a static LP fee tier can never tell the difference between an informed swap and a casual one, so it charges the same rate for both, and the informed swaps extract more value than the fee compensates for. a model that learns to score the incoming flow can charge informed swaps a higher fee and casual swaps a lower one. the brain's four weights are only a coarse first guess at that mapping, and a real trader who has been watching the market for a while tends to know things the contract has not yet learned.
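to make the four-weight mapping concrete, here is one plausible sketch: a linear score over four swap features squashed to a probability, then mapped onto a fee range. the feature choice, the sigmoid, and the fee bounds are all assumptions for illustration; the docs only say the model has four weights and scores incoming flow.

```python
import math

def informed_score(weights: list, features: list) -> float:
    # linear score over four swap features, squashed to (0, 1);
    # higher = more likely to be informed flow
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def dynamic_fee(score: float, fee_min_bps: float = 5, fee_max_bps: float = 100) -> float:
    # casual flow (score near 0) pays near fee_min_bps,
    # informed flow (score near 1) pays near fee_max_bps
    return fee_min_bps + score * (fee_max_bps - fee_min_bps)
```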
train() lets you push that knowledge into the model directly. the contract takes the most recent observations, computes a gradient, moves the weights by one step, and then re-scores the held-out validation slice. the update is only committed when the loss strictly drops, otherwise the weights snap back to their previous values and your call reverts.
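the whole loop above can be sketched in a few lines of python. the function signature is hypothetical; the mechanics (gradient on recent observations, one step, held-out re-score, commit only on a strict drop, otherwise snap back) are the ones the docs describe.

```python
def train_step(weights, recent, holdout, lr, loss_fn, grad_fn):
    # score the held-out validation slice with the current weights
    loss_before = loss_fn(weights, holdout)
    # one gradient step computed from the recent observations
    grad = grad_fn(weights, recent)
    candidate = [w - lr * g for w, g in zip(weights, grad)]
    # re-score the same held-out slice with the candidate weights
    loss_after = loss_fn(candidate, holdout)
    if loss_after < loss_before:
        return candidate, loss_before - loss_after  # committed, plus the improvement
    return weights, 0.0  # weights snap back; on chain this call reverts
```

worked on a toy quadratic loss: starting from weights of all zeros with a minimum at all ones, one step at `lr = 0.1` moves each weight to 0.2 and the held-out loss drops, so the candidate is committed.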
this is the entire on-chain learning loop: real data goes in, the gradient runs on chain, the improvement check decides what sticks, and the reward gets paid in WETH. nothing happens off-chain and nothing is optimistic. the model that runs against the next swap is the same model the chain has been training all along.