
Trading systems

The Reality of Bots on Polymarket

A practical note on fills, API workers, inventory reconciliation, and why bots are infrastructure rather than automatic edge.

Ops note · 5 min · 7 sections

Tags: Bots, WebSockets, API Workers, State Machines, Inventory Control, Partial Fills, Circuit Breakers, Execution Risk


Bots are valuable infrastructure, but they are not a shortcut to edge.
See the full Polymarket case study

Opening note

A shorter, more readable version of the original archive entry, focused on the parts that remained technically useful.


The popular version of bot building is "write a signal, call the API, make money." The real version is slower and more technical. You spend most of your time on state reconciliation, execution discipline, failure recovery, and deciding which code is allowed to block.

That was one of the clearest lessons from the Polymarket work. The useful part was not discovering a magic strategy. It was learning what a bot has to become before it deserves to trade real money.

Bot control map

A real bot has to separate fast market awareness, explicit state, slow-path execution, and reconciliation plus safety.


This is intentionally a control map rather than a model diagram: most bot failures happen in coordination, not in the signal line.

A bot is mostly coordination software

The signal is only one component in a live system.

A production-minded bot has to:

  • read market data continuously,
  • maintain explicit states,
  • distinguish open orders from filled inventory,
  • react to fills from WebSockets instead of assuming intent equals execution,
  • recover from partial fills, stale acknowledgements, and local or exchange disagreement.

In the 15-minute system, that meant keeping FLAT, ENTRY_WORKING, OPEN, and EXITING synchronized with actual exchange events, not with whichever branch of code wanted something to happen.
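A minimal sketch of that state discipline, assuming hypothetical class and event names rather than the project's actual code: state only advances on confirmed exchange events, and submitting an order is never enough to count as OPEN.

```python
from enum import Enum, auto

class BotState(Enum):
    FLAT = auto()
    ENTRY_WORKING = auto()
    OPEN = auto()
    EXITING = auto()

class PositionTracker:
    """Advance state only on confirmed exchange events, never on intent."""

    def __init__(self):
        self.state = BotState.FLAT
        self.filled_size = 0.0

    def on_order_submitted(self):
        # Intent is recorded, but we are not OPEN until fills confirm it.
        if self.state == BotState.FLAT:
            self.state = BotState.ENTRY_WORKING

    def on_fill(self, size: float, remaining: float):
        # Fill events come from the exchange stream, not from the REST ack.
        self.filled_size += size
        if self.state == BotState.ENTRY_WORKING and remaining == 0:
            self.state = BotState.OPEN
        elif self.state == BotState.EXITING and remaining == 0:
            self.state = BotState.FLAT
            self.filled_size = 0.0

    def on_exit_submitted(self):
        if self.state == BotState.OPEN:
            self.state = BotState.EXITING
```

Partial fills fall out naturally: a fill with `remaining > 0` updates inventory without changing state, so the bot keeps working the order instead of pretending it is done.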

WebSockets for truth, REST for administration

One practical shift was learning where each channel belongs.

  • WebSockets were the live source for market updates and fill awareness.
  • REST was still necessary for order placement and management, but it was too slow to act like a trading heartbeat.
  • In the research notes, WebSocket updates arrived on roughly a ~50 ms cadence, while REST responses could sit closer to ~500 ms.

That gap changes architecture. If you treat REST like a real-time source of truth, you will misread the market and your own position.

The fast loop and the slow loop should not live together

The HFT branch made this separation more explicit.

The hot loop listened, updated state, built features, and decided. The blocking API work was pushed into a queue-backed worker so order placement could happen outside the critical path. That is a small architectural idea with huge operational consequences:

  • the decision loop stays responsive,
  • API jitter does not freeze the market loop,
  • failures become easier to isolate,
  • latency budgets stop being destroyed by avoidable blocking calls.
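The queue-backed worker pattern can be sketched in a few lines of Python (the real hot path was moving toward Rust, so this is the shape of the idea, not the project's implementation). The hot loop enqueues order requests and returns immediately; the blocking call happens in a separate thread:

```python
import queue
import threading

def api_worker(orders: queue.Queue, place_order):
    """Slow path: drain order requests and make blocking API calls off the hot loop."""
    while True:
        req = orders.get()
        if req is None:          # shutdown sentinel
            break
        try:
            place_order(req)     # the blocking REST call lives here
        finally:
            orders.task_done()

orders: queue.Queue = queue.Queue()
placed = []  # stand-in for a real REST client, for demonstration
worker = threading.Thread(target=api_worker, args=(orders, placed.append), daemon=True)
worker.start()

# Hot loop: decide, enqueue, keep moving. put() returns immediately.
for decision in ({"side": "buy", "price": 0.42}, {"side": "sell", "price": 0.58}):
    orders.put(decision)

orders.join()   # only for the demo; the real hot loop never blocks on the queue
orders.put(None)
worker.join()
```

The queue is the boundary: everything before `put()` has a latency budget, and everything after it is allowed to be slow, retried, or circuit-broken without touching the market loop.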

This is also why the project evolved toward Rust on the hot path and Python for orchestration, ETL, and ML. The split was not aesthetic. It matched the runtime constraints.

Risk rails matter more than people expect

Real bots need guardrails before they need ambition.

Some of the practical rails in the project were simple but important:

  • price safety bands such as 0.10 to 0.90,
  • pre-flight connectivity checks before enabling live actions,
  • circuit-breaker style logic when latency or state quality degraded,
  • minimum size rules in the 15-minute system,
  • explicit cancellation and reconciliation instead of assuming clean exits.
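Those rails compose naturally into a single pre-trade check. In this sketch the band matches the 0.10 to 0.90 figure above, while the minimum size and latency thresholds are made-up placeholders:

```python
def entry_allowed(price: float, size: float, latency_ms: float, *,
                  band=(0.10, 0.90), min_size=5.0, max_latency_ms=250.0):
    """Pre-trade rail: refuse entries outside the safety band, below minimum
    size, or when measured latency suggests degraded state quality.
    Thresholds other than the band are illustrative."""
    lo, hi = band
    if not (lo <= price <= hi):
        return False, "price outside safety band"
    if size < min_size:
        return False, "below minimum size"
    if latency_ms > max_latency_ms:
        return False, "circuit breaker: latency degraded"
    return True, "ok"
```

Returning a reason string alongside the verdict matters operationally: refused entries show up in logs with a cause, instead of as silent inactivity you have to reverse-engineer later.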

None of this looks glamorous in a screenshot. All of it matters more than a clever tweet-sized entry rule.

Good model metrics do not remove execution reality

One of the healthiest resets came from comparing research metrics with executable outcomes.

In the HFT stack, an adverse-selection classifier could look strong on standard ML metrics while still failing to produce attractive maker-style PnL once latency, queue position, and actual fills were considered. That gap is exactly why bot building has to be described as systems engineering, not just model building.

The same pattern appeared in the 15-minute work. Once fees, slippage, and live timing were included honestly, the market looked much closer to efficient than early backtests suggested.
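The arithmetic behind that reset is simple and worth writing down. With illustrative numbers (not the project's actual costs), a gross backtest edge of 1.5% per trade can go negative once execution costs are subtracted:

```python
def net_edge(gross_edge: float, fee_rate: float,
             slippage: float, adverse_fill_penalty: float) -> float:
    """Executable edge per trade after execution costs.
    All inputs are fractions of notional; the figures below are illustrative."""
    return gross_edge - fee_rate - slippage - adverse_fill_penalty

# 1.5% gross edge, 1.0% fees, 0.4% slippage, 0.3% adverse fills -> negative
research_edge = net_edge(0.015, 0.010, 0.004, 0.003)
```

This is the whole "closer to efficient than the backtest suggested" story in one line: the gross number was real, but it was smaller than the sum of the costs the backtest ignored.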

What bots are actually good for

Bots are still extremely valuable when they sit on top of a real edge or a disciplined research process.

  • They enforce repeatability.
  • They capture detailed logs.
  • They expose hidden assumptions quickly.
  • They let you test operational hypotheses that manual trading never tests well.

What they do not do is create edge just because they are automated.

Closing note

The strongest takeaway from Polymarket was not "bots work" or "bots do not work." It was that a serious bot is an execution system with data plumbing, state discipline, and risk controls attached. If the underlying edge is weak, the bot will reveal that faster. If the process is strong, the bot makes it measurable.