
Execution research

15-Minute Trading on Polymarket

Phase 2: FADE and momentum modes, vectorized sweeps, live state machines, and the data-driven decision to stop when EV stayed near zero.

Phase 02 · 8 min · 7 sections
Python
WebSockets
Vectorized Backtesting
State Machines
Walk-forward
Volatility Regimes
CLOB
Flip Analysis
Execution Logic
Calibration


The strongest output of the 15-minute system was the confidence to stop it.

Opening note

A shorter, more readable version of the original archive entry, focused on the parts that remained technically useful.


Phase 2: vectorized research, live execution, server deployment, and the point where the market looked too efficient to keep forcing it.

After pausing long-horizon prediction, I moved to 15-minute Up/Down markets. The attraction was obvious: faster iteration, cleaner labels, and a much more direct path from research to execution.

This phase became much larger than a strategy notebook. It ended up including a simulator, a live state machine, a web dashboard, optimizer tooling, volatility regimes, and a server-ready package.

Execution map

The 15-minute project was a loop from research modes to realistic simulation to live execution and then back into diagnostics.

Vectorized sweeps → volatility-aware configs → live-state execution

The important shape here is cyclical: research informed live trading, and live failures fed directly back into better diagnostics.

Why the 15-minute setup was appealing

The hypothesis was that short expiry might still leave room for timing mistakes or order-book inefficiencies.

So the goal became:

  • record snapshots continuously,
  • enrich each slug with Binance volatility context,
  • sweep large parameter grids,
  • test modes such as FADE, MOMENTUM, REVERSE, and DUAL_ENTRY,
  • run a live system with real orders and real fills.

Compared with the earlier prediction work, this phase had a much better feedback loop.

What I actually built

The trading15m module is not just one script. It is a small execution environment.

Research layer

  • Historical and real-time snapshot generation into sim_snapshots.csv.
  • volatility_by_slug.csv with vol_15m, vol_1h, vol_4h, and low-med-high regimes.
  • Vectorized backtests with block configurations and 3D sweeps.
  • Walk-forward selection plus diversity by volatility regime.
  • A flip analyzer to measure when low-price zones are effectively dead zones.
  • An optimizer that writes best_config.json and an optimized strategy rules file.
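The regime split behind volatility_by_slug.csv can be sketched with pandas terciles. This is a minimal sketch under assumptions: the qcut-into-three-buckets scheme and the sample data are mine, since the original cutoffs are not documented here.

```python
import pandas as pd

def classify_regimes(vol: pd.DataFrame, col: str = "vol_15m") -> pd.DataFrame:
    """Bucket each slug into low/med/high volatility regimes.

    Tercile thresholds via pd.qcut are an assumption; the article does
    not state how the actual low-med-high boundaries were chosen.
    """
    out = vol.copy()
    out["regime"] = pd.qcut(out[col], q=3, labels=["low", "med", "high"])
    return out

# Illustrative rows in the shape of volatility_by_slug.csv
vols = pd.DataFrame({
    "slug": ["btc-up-1", "btc-up-2", "btc-up-3", "eth-up-1", "eth-up-2", "eth-up-3"],
    "vol_15m": [0.004, 0.009, 0.021, 0.006, 0.013, 0.030],
})
print(classify_regimes(vols)[["slug", "regime"]])
```

Tagging every slug with a regime is what later makes "diversity by volatility regime" possible in walk-forward selection: each regime gets its own surviving configs instead of one global winner.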

Execution layer

  • Live state machine: FLAT -> ENTRY_WORKING -> OPEN -> EXITING -> ROLLING_OVER.
  • WebSockets for prices and fills, giving ~50 ms responsiveness instead of ~500 ms REST-style polling.
  • Minimum live size rules such as 5 shares.
  • Proactive stop logic: place the stop order while price is still about 0.10 away from the stop level, not 0.02, so the order has time to rest in the book before it is needed.
  • Lock-pair logic when both sides create a favorable bounded-risk setup.
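The state machine above can be sketched as a transition table. This is an illustrative sketch, not the production implementation: the event names and the `should_place_stop` helper are assumptions, with only the states and the ~0.10 proactive-stop distance taken from the text.

```python
from enum import Enum, auto

class State(Enum):
    FLAT = auto()
    ENTRY_WORKING = auto()
    OPEN = auto()
    EXITING = auto()
    ROLLING_OVER = auto()

# Fill-driven transitions; event names are illustrative assumptions.
TRANSITIONS = {
    (State.FLAT, "entry_placed"): State.ENTRY_WORKING,
    (State.ENTRY_WORKING, "entry_filled"): State.OPEN,
    (State.ENTRY_WORKING, "entry_cancelled"): State.FLAT,
    (State.OPEN, "exit_placed"): State.EXITING,
    (State.EXITING, "exit_filled"): State.FLAT,
    (State.EXITING, "expiry_near"): State.ROLLING_OVER,
    (State.ROLLING_OVER, "rolled"): State.FLAT,
}

def step(state: State, event: str) -> State:
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

def should_place_stop(price: float, stop_level: float, lead: float = 0.10) -> bool:
    """Proactive stop: start working the stop order while price is still
    within ~0.10 of the stop level, so it can rest in the book."""
    return abs(price - stop_level) <= lead
```

The payoff of the table form is that every WebSocket fill event maps to exactly one transition lookup, which keeps order state and inventory state from drifting apart.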

Operational layer

  • A terminal dashboard and a server package with start, stop, status, and log scripts.
  • A web dashboard for monitoring live trading remotely.
  • Explicit dry-run validation before using live credentials.
  • Logging, reconciliation, and cleanup paths for ghost orders or stale local state.

The backtest rules that mattered

The biggest improvement was not a new strategy. It was making the simulator stop flattering me.

Three rules changed everything:

  1. Sequence orders honestly inside a block: stop first, then take-profit, then entry.
  2. Model reaction latency with values such as 300-1000 ms.
  3. Add execution pain with slippage and liquidity constraints such as min_ask_size = 500.

The basic simulator also carried realistic frictions like fee_bps = 30 and slippage_bps = 10, while the stress tests pushed much harder when I wanted to see whether an edge survived pain.
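Those three rules, plus the baseline frictions, can be sketched as follows. `fill_price` and `resolve_block` are hypothetical names, and the snapshot-indexing scheme is an assumption; only the parameter values (300 ms, fee_bps = 30, slippage_bps = 10, min_ask_size = 500) come from the text.

```python
import numpy as np

FEE_BPS = 30        # baseline frictions from the basic simulator
SLIPPAGE_BPS = 10
LATENCY_MS = 300    # reaction latency (swept 300-1000 ms)
MIN_ASK_SIZE = 500  # liquidity filter

def fill_price(ts_ms, prices, signal_ts_ms, side="buy"):
    """Price actually reachable LATENCY_MS after the signal, with slippage.

    ts_ms/prices are parallel snapshot arrays; this indexing scheme is
    an assumption, not the article's exact implementation.
    """
    idx = np.searchsorted(ts_ms, signal_ts_ms + LATENCY_MS)
    idx = min(idx, len(prices) - 1)
    px = float(prices[idx])
    slip = px * SLIPPAGE_BPS / 10_000
    return px + slip if side == "buy" else px - slip

def resolve_block(stop_hit: bool, tp_hit: bool, entry_signal: bool):
    """Honest intra-block ordering: a stop outranks a take-profit,
    which outranks a new entry, when several trigger in one block."""
    if stop_hit:
        return "stop"
    if tp_hit:
        return "take_profit"
    if entry_signal:
        return "entry"
    return None
```

The point of `resolve_block` is that the optimistic ordering (entry first, stop last) is exactly what lets a naive simulator report near-98% hit rates.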

That is the same realism mindset I summarize in How a Real Backtest Works.

Setup | Hit rate shape | Interpretation
No latency | Extremely high, sometimes near 98% | Too optimistic to trust
300 ms reaction | Falls sharply, often near the mid-50s | Much closer to reality
300 ms + 2% slippage + liquidity filter | Often weak or negative | The strategy now has to survive actual market structure

What live trading taught me

The live system made one thing clear very quickly: execution logic is mostly synchronization logic.

  • WebSockets had to be the source of truth for fills.
  • REST was useful for reconciliation, not for pretending to be real time.
  • Open orders and real inventory had to stay separate.
  • Partial fills, ghost orders, and rollover edge cases were normal conditions, not exceptions.
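A reconciliation pass along those lines might look like the sketch below. The dict-of-orders shape and the "ghost"/"stale" labels are assumptions for illustration; the principle (WebSocket fills build local truth, REST snapshots audit it) is from the text.

```python
def reconcile(local_open_orders: dict, rest_open_orders: dict) -> list:
    """Audit local order state (built from WebSocket fill events) against
    a periodic REST snapshot and list discrepancies to clean up.

    Returns (order_id, problem) pairs: "ghost" means the exchange knows
    an order we do not (cancel it there); "stale" means we track an
    order the exchange no longer has (drop it locally).
    """
    problems = []
    for oid in rest_open_orders:
        if oid not in local_open_orders:
            problems.append((oid, "ghost"))
    for oid in local_open_orders:
        if oid not in rest_open_orders:
            problems.append((oid, "stale"))
    return problems
```

Run periodically, a pass like this turns partial fills and ghost orders into routine cleanup items rather than silent state corruption.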

This is why the operational side of bots deserved its own synthesis in The Reality of Bots on Polymarket.

The result that mattered

Once the simulator became strict enough, the market looked much more disciplined than I wanted it to.

  • The 15-minute prices were well calibrated.
  • Expected value stayed around zero after fees.
  • Entry and exit timing did not unlock a durable edge.
  • The flip analyzer also reinforced that the cheapest late-minute zones were often "dead zones," not bargains.
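A calibration check of that kind can be sketched as a price-bucket table; `calibration_table` is a hypothetical helper, not the project's actual diagnostic. In a well-calibrated market each bucket's realized win rate sits close to its average traded price, so the gross edge column hovers near zero and fees push net EV negative.

```python
def calibration_table(prices, outcomes, n_bins=10):
    """Bucket traded prices and compare each bucket's average price to
    the realized win rate.  The final column is the gross edge
    (win_rate - avg_price); calibration means it stays near zero.
    """
    bins = [[] for _ in range(n_bins)]
    for p, won in zip(prices, outcomes):
        i = min(int(p * n_bins), n_bins - 1)
        bins[i].append((p, won))
    rows = []
    for i, bucket in enumerate(bins):
        if not bucket:
            continue
        avg_price = sum(p for p, _ in bucket) / len(bucket)
        win_rate = sum(w for _, w in bucket) / len(bucket)
        rows.append((i, avg_price, win_rate, win_rate - avg_price))
    return rows
```

The flip-analyzer finding fits the same frame: a 0.05 contract that almost never flips is not a bargain, it is a bucket whose win rate matches its price.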

That conclusion was disappointing emotionally, but excellent technically. The system had become honest enough to reject the thesis instead of flattering it.

Why this phase was still worth it

The project left behind real assets:

  • a reusable backtesting and optimization framework,
  • a live execution state machine,
  • server-friendly deployment packaging,
  • much clearer thinking about fill logic, volatility regimes, and synchronized state,
  • a direct bridge into HFT on Polymarket: Model, Rust, and the 98% Lie.

Most importantly, it replaced hope with evidence.

Takeaway

The 15-minute project did exactly what a good research phase should do: it reduced uncertainty with a system strong enough to say no.

It did not produce a strategy I wanted to keep running. It produced something more valuable than that: a repeatable proof that the easy edge was not there.