@luccabb luccabb commented Jan 20, 2026

Summary

  • Implement iterative deepening (search depth 1, 2, 3, ... N)
  • Add aspiration windows for faster searches
  • Add TT move ordering (best move from previous search tried first)

Details

Iterative Deepening

Instead of searching directly to depth N, we search depth 1, then depth 2, then depth 3, and so on up to N:

for depth in range(1, target_depth + 1):
    score, move = negamax(board, depth, ...)

Benefits:

  • TT entries from depth N-1 improve move ordering at depth N
  • Enables future time management (stop when time runs out)
  • Killer moves accumulate across iterations
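
The loop above, with the shared transposition table made explicit, might look like the sketch below. This is a minimal illustration: the `negamax` signature and the `tt` dict are assumptions for the sketch, not the engine's actual API.

```python
def iterative_deepening(board, target_depth, negamax, tt):
    """Drive negamax from depth 1 up to target_depth, reusing one
    transposition table so shallow results seed deeper searches.
    Sketch only; `negamax` and `tt` shapes are illustrative."""
    best_score, best_move = None, None
    for depth in range(1, target_depth + 1):
        best_score, best_move = negamax(board, depth, tt)
        # `tt` now holds this iteration's entries; the next iteration's
        # move ordering can try the stored best move first
    return best_score, best_move
```

Because the table outlives each iteration, the cost of the shallow searches is largely repaid by the better ordering they buy at the deeper ones.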

Aspiration Windows

After the first iteration, we use a narrow window around the previous score:

  • Initial window: ±50 centipawns
  • If search fails high/low, double the window and re-search
  • Fall back to full window if window exceeds ±500 cp

This reduces the number of nodes searched when the score is stable.
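
A minimal sketch of that widening loop follows. The constants and the `negamax(board, depth, alpha, beta)` signature mirror the summary above, not the engine's exact code:

```python
INITIAL_WINDOW = 50   # centipawns, per the summary
MAX_WINDOW = 500      # beyond this, fall back to a full-width search
INF = 10**9

def search_with_aspiration(board, depth, prev_score, negamax):
    """Search `depth` with a narrow window around `prev_score`,
    doubling the window on fail-high/fail-low (illustrative sketch)."""
    window = INITIAL_WINDOW
    while window <= MAX_WINDOW:
        alpha, beta = prev_score - window, prev_score + window
        score, move = negamax(board, depth, alpha, beta)
        if alpha < score < beta:      # score landed inside the window
            return score, move
        window *= 2                   # fail high or low: widen and re-search
    # window grew past the cap: re-search with a full-width window
    return negamax(board, depth, -INF, INF)
```

When the evaluation is stable between iterations, the first narrow search usually succeeds and the re-search cost is never paid.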

TT Move Ordering

  • Extract the best move from a TT hit even when its stored score can't be used (e.g. it was recorded at insufficient depth)
  • Place the TT move first in the move list, ahead of all other moves
  • The TT move is the best move found at a shallower depth, making it an excellent first guess for ordering
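
A minimal sketch of putting the TT move first (illustrative only; the engine's full ordering also applies MVV-LVA and killer-move heuristics to the remaining moves):

```python
def order_moves(moves, tt_move):
    """Return `moves` with the TT move promoted to the front.
    If there is no TT move, or it isn't legal here, keep the order."""
    if tt_move is not None and tt_move in moves:
        return [tt_move] + [m for m in moves if m != tt_move]
    return list(moves)
```

Searching the likely-best move first maximizes alpha-beta cutoffs, which is why this single promotion pays off at every depth.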

Test plan

  • All 64 unit tests pass
  • Verified iterative deepening visits all depths

🤖 Generated with Claude Code

@luccabb luccabb force-pushed the feature/mvv-lva-killer-moves branch from 0b53062 to 944a0e0 Compare January 21, 2026 06:40
@luccabb luccabb force-pushed the feature/iterative-deepening-aspiration branch from f205ab8 to cda6ab5 Compare January 21, 2026 06:41
@luccabb luccabb force-pushed the feature/mvv-lva-killer-moves branch from 944a0e0 to f1001bf Compare January 21, 2026 06:44
@luccabb luccabb force-pushed the feature/iterative-deepening-aspiration branch from cda6ab5 to 0a7dc0c Compare January 21, 2026 06:44
@luccabb luccabb changed the title [6/9] Add iterative deepening with aspiration windows and TT move ordering [4/7] Add iterative deepening with aspiration windows and TT move ordering Jan 21, 2026
@luccabb luccabb force-pushed the feature/mvv-lva-killer-moves branch from f1001bf to 98ee9e1 Compare January 21, 2026 07:33
…ering

Implements iterative deepening for better move ordering and future time management:

**Iterative Deepening:**
- Search depth 1, then 2, then 3, ... up to target depth
- Cache persists across all iterations (TT entries reused)
- Killer moves persist across iterations
- Best move from depth N-1 is tried first at depth N (via TT)

**Aspiration Windows:**
- After depth 1, use narrow window (±50 centipawns) around previous score
- If search fails outside window, re-search with doubled window
- Falls back to full window after 500cp expansion
- Reduces nodes searched when score is stable

**TT Move Ordering:**
- Save best move from TT lookup even if score can't be used
- Put TT move first in move list before searching
- Significantly improves move ordering at all depths

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@luccabb luccabb force-pushed the feature/iterative-deepening-aspiration branch from 0a7dc0c to 134def0 Compare January 21, 2026 07:33
@luccabb luccabb changed the title [4/7] Add iterative deepening with aspiration windows and TT move ordering [4/6] Add iterative deepening with aspiration windows and TT move ordering Jan 21, 2026
luccabb and others added 5 commits January 22, 2026 12:23
This adds a chess reinforcement learning environment following the
OpenEnv interface pattern, with both local and HTTP client-server modes.

Features:
- ChessEnvironment class with configurable rewards, opponents, and game limits
- FastAPI server with REST endpoints (/reset, /step, /state, /engine-move)
- HTTP client for remote environment access
- Web UI for playing against the engine
- HuggingFace Spaces deployment configuration (Dockerfile, openenv.yaml)
- Example training scripts for local and remote usage

Also includes:
- mypy configuration for optional RL dependencies
- Import formatting fixes for ufmt compliance
* Add OpenEnv-compatible RL environment with HuggingFace Space

* Remove Elo claim and fix GitHub link to open in new tab
Fixes:
- Remove incorrect `bash .env` line (was trying to execute .env as script)
- Add `set -e` to exit on errors
- Check if brew is installed before using it
- Check if git-lfs/envsubst already installed before reinstalling
- Validate build succeeded before continuing
- Verify dist/moonfish exists before copying
- Check if lichess-bot directory exists
- Validate LICHESS_TOKEN is set after sourcing .env
- Validate token is not empty when creating .env
- Use `cp -f` instead of rm + cp

Improvements:
- Make lichess-bot directory configurable via LICHESS_BOT_DIR env var
- Add progress messages for better UX
- Provide helpful error messages with next steps

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* Add Stockfish benchmark CI workflow

- Runs cutechess-cli matches against Stockfish on every PR
- 20 rounds with max concurrency
- Moonfish: 60s per move, Stockfish: Skill Level 5 with 60+5 time control
- Downloads full 170MB opening book from release assets (bypasses LFS)
- Reports win/loss/draw stats in GitHub job summary
- Uploads PGN and logs as artifacts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Parallelize Stockfish benchmark with matrix strategy

- Run 20 parallel jobs (10 chunks × 2 skill levels)
- Test against both Stockfish skill level 4 and 5
- 100 games per skill level = 200 total games for reliable signal
- Add aggregation job to combine results with summary table
- Use different random seeds per chunk for opening variety

* Add PR comment with benchmark results

- Post aggregated results as a comment on the PR
- Makes it easy to see win/loss/draw rates without navigating to CI
- Includes collapsible configuration details

* Add -repeat flag for more consistent benchmark results

- Each opening is played twice with colors reversed
- Eliminates first-move advantage variance
- Doubles games to 400 total (200 per skill level)
- More statistically reliable results between runs

* Add detailed stats to benchmark PR comment

- Show win rates by color (as White / as Black)
- Show loss reasons (timeout, checkmate, adjudication)
- Separate tables per skill level for clarity

* Fix termination parsing and correct game count

- Parse game endings from PGN move text (cutechess format)
- Track: checkmate, timeout, resignation, stalemate, repetition, 50-move
- Fix config: 200 total games (not 400)

* Simplify game endings - parse merged PGN directly

- Remove per-chunk termination tracking
- Parse game endings from merged PGN in aggregate step
- Cleaner and less error-prone

* Extract game endings dynamically from PGN text

* Filter out mates from game endings (redundant with win/loss)

* Rename to 'Non-checkmate endings'

* Add skill level 3 and skip aggregate if all jobs cancelled

- Test against Stockfish skill levels 3, 4, and 5 (300 total games)
- Only run aggregate job if at least one benchmark succeeded

* Hardcode concurrency to 10 for faster benchmarks

* Increase to 20 rounds and 20 concurrency (600 total games)

* Reduce to 5 chunks (15 total jobs, 300 games)

* Add PR reactions: eyes on start, thumbs up on complete

- React with 👀 when benchmark starts
- React with 👍 after results are posted

* Add local benchmark script

* Add skill level 1, increase to 200 games per level (800 total)

* Revert CI changes, update local script: skill level 1, 200 games/level

* Add skill level 2 to local benchmark script

* Update benchmark settings

- Local script: 100 rounds, 15 concurrency
- CI: Remove eyes reaction when adding thumbs up

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
luccabb and others added 2 commits January 27, 2026 11:18
* Only run benchmarks when engine code changes

* Remove lichess from path filter (not engine code)

* Run benchmarks on PRs to any branch, not just master
@github-actions

🔬 Stockfish Benchmark Results

vs Stockfish Skill Level 3

| Metric | Wins | Losses | Draws | Total | Win % |
| --- | --- | --- | --- | --- | --- |
| Overall | 6 | 94 | 0 | 100 | 6.0% |
| As White | 3 | 47 | 0 | 50 | 6.0% |
| As Black | 3 | 47 | 0 | 50 | 6.0% |

vs Stockfish Skill Level 4

| Metric | Wins | Losses | Draws | Total | Win % |
| --- | --- | --- | --- | --- | --- |
| Overall | 2 | 97 | 1 | 100 | 2.0% |
| As White | 2 | 47 | 1 | 50 | 4.0% |
| As Black | 0 | 50 | 0 | 50 | 0% |

Non-checkmate endings:

  • Draw by 3-fold repetition: 1

vs Stockfish Skill Level 5

| Metric | Wins | Losses | Draws | Total | Win % |
| --- | --- | --- | --- | --- | --- |
| Overall | 0 | 98 | 2 | 100 | 0% |
| As White | 0 | 49 | 1 | 50 | 0% |
| As Black | 0 | 49 | 1 | 50 | 0% |

Non-checkmate endings:

  • Draw by 3-fold repetition: 2

Configuration

  • 5 chunks × 20 rounds × 3 skill levels = 300 total games
  • Each opening played with colors reversed (-repeat) for fairness
  • Moonfish: 60s per move
  • Stockfish: 60+5 time control
