TrackMania RL - Documentation
Welcome to the TrackMania RL project documentation!
This project is a fork and extension of the original Linesight project, adapted for reinforcement learning experiments in TrackMania Nations Forever.
The project uses distributional reinforcement learning (IQN - Implicit Quantile Networks) to train an AI agent to drive in TrackMania. The goal is to explore RL algorithms, reward shaping, and training techniques in a complex racing environment.
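The core idea of IQN is to predict a distribution of returns rather than a single Q-value: quantile fractions tau are sampled, embedded with a cosine basis, and multiplied element-wise into the state features before the output head. The sketch below illustrates that forward pass in plain numpy; the layer sizes, weight matrices, and function name are illustrative, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def iqn_quantile_values(features, w_embed, w_out, n_tau=8):
    """Sketch of the IQN forward pass (hypothetical shapes and weights)."""
    # Sample quantile fractions tau ~ U(0, 1), one set per state.
    batch = features.shape[0]
    tau = rng.uniform(size=(batch, n_tau, 1))
    # Cosine embedding: cos(pi * i * tau) for i = 1..n_cos.
    n_cos = w_embed.shape[0]
    i = np.arange(1, n_cos + 1)
    cos = np.cos(np.pi * i * tau)                 # (batch, n_tau, n_cos)
    phi = np.maximum(cos @ w_embed, 0.0)          # ReLU, (batch, n_tau, feat_dim)
    # Hadamard product with the state features, then a linear output head:
    # one Q-value estimate per action and per sampled quantile.
    q = (features[:, None, :] * phi) @ w_out      # (batch, n_tau, n_actions)
    return q, tau

features = rng.standard_normal((2, 16))
w_embed = rng.standard_normal((64, 16)) * 0.1
w_out = rng.standard_normal((16, 3)) * 0.1
q, tau = iqn_quantile_values(features, w_embed, w_out)
print(q.shape, tau.shape)  # (2, 8, 3) (2, 8, 1)
```

Averaging `q` over the tau axis recovers a conventional Q-value per action, while the spread across quantiles captures the return distribution that distributional RL exploits.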
Key Features:
- Distributional RL with IQN (Implicit Quantile Networks)
- Modular configuration system for easy experimentation
- Support for multiple parallel game instances
- Hot-reloadable training parameters
- TensorBoard integration for monitoring
- Virtual checkpoint system for dense progress tracking
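Virtual checkpoints give the agent a dense reward signal: points are placed densely along the track, and progress is measured as how many of them the car has passed since the last frame. The helper below is a minimal sketch of that idea; the function name, checkpoint layout, and nearest-point rule are assumptions, not the project's actual implementation.

```python
import numpy as np

def virtual_checkpoint_progress(pos, checkpoints):
    """Index of the virtual checkpoint nearest to the car position.

    Hypothetical helper: `checkpoints` is an (N, 3) array of points sampled
    densely along the track centerline.
    """
    dists = np.linalg.norm(checkpoints - pos, axis=1)
    return int(np.argmin(dists))

# Toy straight track: one checkpoint per metre along the x axis.
cps = np.stack([np.arange(100.0), np.zeros(100), np.zeros(100)], axis=1)
prev = virtual_checkpoint_progress(np.array([12.2, 0.5, 0.0]), cps)
curr = virtual_checkpoint_progress(np.array([15.8, -0.3, 0.0]), cps)
reward = curr - prev  # dense progress reward: checkpoints passed this step
print(prev, curr, reward)  # 12 16 4
```

Because the difference in checkpoint index is available every frame, the agent receives feedback continuously instead of only at the sparse in-game checkpoints.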
All runs produced by this project are Tool Assisted. They must not be submitted to the Official Leaderboards.
User Documentation:
- Installation
- Getting started
- Custom training
- Configuration Guide
- TMNF replay download and frame capture
- Pipeline: steps to run (in order)
- How it works (details)
- Module layout (replays_tmnf)
- Modes and options (download)
- Pipeline (--track-ids)
- Filter tracks (step 3): filter_track_ids_no_respawn.py
- Filter tracks (step 3a): filter_track_ids_custom_maptype.py
- Main arguments (download)
- Extracting map from replay
- Frame capture (capture_replays_tmnf.py)
- Examples (from project root)
- Level 0 visual pretraining on captured frames
- API (TMNF-X / ManiaExchange)
- TensorBoard Metrics Reference
- User FAQ
- Troubleshooting
Dev Documentation:
Experiments:
Community tips & tricks