Conversation

@luccabb (Owner) commented on Jan 20, 2026

Summary

  • Add Late Move Reductions (LMR) to reduce search depth for late quiet moves
  • Add Principal Variation Search (PVS) for faster searches with good move ordering

Details

Late Move Reductions (LMR)

Reduce search depth for moves that are unlikely to be the best:

Conditions:

  • Depth >= 3 (need sufficient depth to reduce)
  • Move index >= 3 (first few moves get full depth)
  • Not in check (check positions are critical)
  • Quiet move (no capture, no check, no promotion)

Implementation:

reduction = 1  # reduce late quiet moves by one ply
board_score = -negamax(depth - 1 - reduction, ...)  # negamax: negate the child's score
if board_score > alpha:
    board_score = -negamax(depth - 1, ...)  # promising: re-search at full depth
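The gating conditions above can be captured in a small predicate. This is a sketch with illustrative parameter names, not the engine's actual signature:

```python
def should_reduce(depth, move_index, in_check, is_capture, gives_check, is_promotion):
    """True when all LMR conditions hold for this move."""
    if depth < 3:        # need sufficient remaining depth to reduce
        return False
    if move_index < 3:   # first few moves always get full depth
        return False
    if in_check:         # check positions are critical, never reduce
        return False
    # only reduce quiet moves: no capture, no check, no promotion
    return not (is_capture or gives_check or is_promotion)
```

The search loop would call this once per move and subtract the reduction from the child depth only when it returns True.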

Principal Variation Search (PVS)

Optimize search based on the assumption that the first move is best:

Implementation:

  • First move: full alpha-beta window search
  • Later moves: zero window search first (faster)
  • If zero window beats alpha, re-search with full window

if move_index == 0:
    score = -negamax(alpha=-beta, beta=-alpha)  # full window
else:
    score = -negamax(alpha=-alpha - 1, beta=-alpha)  # zero window
    if alpha < score < beta:
        score = -negamax(alpha=-beta, beta=-alpha)  # fail-high inside the window: re-search
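The scheme above can be demonstrated end to end on a toy game tree, independent of the engine's board representation. In this sketch (not the engine's actual code), a tree node is either an int leaf (score from the side to move) or a list of children; PVS must return exactly the same root value as a plain full-window negamax:

```python
import random

INF = 10**9

def plain_negamax(node):
    """Reference negamax without any windowing."""
    if isinstance(node, int):
        return node
    return max(-plain_negamax(child) for child in node)

def pvs(node, alpha=-INF, beta=INF):
    """Principal Variation Search: full window for the first child,
    zero window for the rest, re-search on an in-window fail-high."""
    if isinstance(node, int):
        return node
    best = -INF
    for i, child in enumerate(node):
        if i == 0:
            score = -pvs(child, -beta, -alpha)       # full window
        else:
            score = -pvs(child, -alpha - 1, -alpha)  # zero window
            if alpha < score < beta:                 # might be best: re-search
                score = -pvs(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                                    # beta cutoff
    return best

def random_tree(depth, branching=3, rng=None):
    rng = rng or random.Random(0)
    if depth == 0:
        return rng.randint(-50, 50)
    return [random_tree(depth - 1, branching, rng) for _ in range(branching)]

# PVS is exact at the root: same value as plain negamax on any tree.
for seed in range(5):
    tree = random_tree(3, rng=random.Random(seed))
    assert pvs(tree) == plain_negamax(tree)
```

The zero-window call `(-alpha - 1, -alpha)` can only fail high or low, which prunes more aggressively than a full window; the re-search restores exactness whenever a later move turns out to beat the current best.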

Test plan

  • All 64 unit tests pass
  • Verified LMR only applies to late quiet moves
  • Verified PVS re-searches when needed

🤖 Generated with Claude Code

@luccabb force-pushed the feature/iterative-deepening-aspiration branch from f205ab8 to cda6ab5 on January 21, 2026 at 06:41
@luccabb force-pushed the feature/iterative-deepening-aspiration branch from cda6ab5 to 0a7dc0c on January 21, 2026 at 06:44
@luccabb changed the title from "[7/9] Add Late Move Reductions (LMR) and Principal Variation Search (PVS)" to "[5/7] Add Late Move Reductions (LMR) and Principal Variation Search (PVS)" on Jan 21, 2026
@luccabb force-pushed the feature/iterative-deepening-aspiration branch from 0a7dc0c to 134def0 on January 21, 2026 at 07:33
…PVS)

Implements two key search optimizations:

**Late Move Reductions (LMR):**
- Reduce search depth for late quiet moves (move_index >= 3)
- Only apply when: depth >= 3, not in check, move is quiet
- Quiet moves = no capture, no check, no promotion
- Simple reduction of 1 ply (more aggressive formulas tested but hurt accuracy)
- Re-search at full depth if reduced search finds promising score

**Principal Variation Search (PVS):**
- First move: search with full alpha-beta window
- Later moves: search with zero window (alpha, alpha+1)
- If zero window search beats alpha, re-search with full window
- Saves time when first move is best (which is often true with good ordering)

Both techniques work together:
- PVS assumes first move is best (good with TT/killer/MVV-LVA ordering)
- LMR reduces work on moves unlikely to be best
- Combined, they significantly reduce nodes searched

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@luccabb changed the title from "[5/7] Add Late Move Reductions (LMR) and Principal Variation Search (PVS)" to "[5/6] Add Late Move Reductions (LMR) and Principal Variation Search (PVS)" on Jan 21, 2026
luccabb and others added 5 commits on January 22, 2026 at 12:23
This adds a chess reinforcement learning environment following the
OpenEnv interface pattern, with both local and HTTP client-server modes.

Features:
- ChessEnvironment class with configurable rewards, opponents, and game limits
- FastAPI server with REST endpoints (/reset, /step, /state, /engine-move)
- HTTP client for remote environment access
- Web UI for playing against the engine
- HuggingFace Spaces deployment configuration (Dockerfile, openenv.yaml)
- Example training scripts for local and remote usage

Also includes:
- mypy configuration for optional RL dependencies
- Import formatting fixes for ufmt compliance
* Add OpenEnv-compatible RL environment with HuggingFace Space


* Remove Elo claim and fix GitHub link to open in new tab

Fixes:
- Remove incorrect `bash .env` line (was trying to execute .env as script)
- Add `set -e` to exit on errors
- Check if brew is installed before using it
- Check if git-lfs/envsubst already installed before reinstalling
- Validate build succeeded before continuing
- Verify dist/moonfish exists before copying
- Check if lichess-bot directory exists
- Validate LICHESS_TOKEN is set after sourcing .env
- Validate token is not empty when creating .env
- Use `cp -f` instead of rm + cp

Improvements:
- Make lichess-bot directory configurable via LICHESS_BOT_DIR env var
- Add progress messages for better UX
- Provide helpful error messages with next steps

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
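The hardening patterns in this commit can be sketched as a few reusable shell helpers. Names like `require_cmd` and `require_env` are illustrative, not the actual script's:

```shell
#!/usr/bin/env bash
set -e  # exit on the first error instead of continuing in a broken state

require_cmd() {
  # Check a tool exists before using it (e.g. brew, git-lfs, envsubst)
  command -v "$1" >/dev/null 2>&1 || {
    echo "Error: '$1' is not installed. Install it and re-run." >&2
    return 1
  }
}

require_env() {
  # Validate a required variable (e.g. LICHESS_TOKEN) is set and non-empty
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "Error: $1 is not set. Add it to your .env file." >&2
    return 1
  fi
}

# Directory is configurable via env var, with a sensible default
LICHESS_BOT_DIR="${LICHESS_BOT_DIR:-$HOME/lichess-bot}"
```

With `set -e` plus explicit checks, the script stops with a clear message at the first failed precondition instead of, say, executing `.env` as a script.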
* Add Stockfish benchmark CI workflow

- Runs cutechess-cli matches against Stockfish on every PR
- 20 rounds with max concurrency
- Moonfish: 60s per move, Stockfish: Skill Level 5 with 60+5 time control
- Downloads full 170MB opening book from release assets (bypasses LFS)
- Reports win/loss/draw stats in GitHub job summary
- Uploads PGN and logs as artifacts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Parallelize Stockfish benchmark with matrix strategy

- Run 20 parallel jobs (10 chunks × 2 skill levels)
- Test against both Stockfish skill level 4 and 5
- 100 games per skill level = 200 total games for reliable signal
- Add aggregation job to combine results with summary table
- Use different random seeds per chunk for opening variety

* Add PR comment with benchmark results

- Post aggregated results as a comment on the PR
- Makes it easy to see win/loss/draw rates without navigating to CI
- Includes collapsible configuration details

* Add -repeat flag for more consistent benchmark results

- Each opening is played twice with colors reversed
- Eliminates first-move advantage variance
- Doubles games to 400 total (200 per skill level)
- More statistically reliable results between runs

* Add detailed stats to benchmark PR comment

- Show win rates by color (as White / as Black)
- Show loss reasons (timeout, checkmate, adjudication)
- Separate tables per skill level for clarity

* Fix termination parsing and correct game count

- Parse game endings from PGN move text (cutechess format)
- Track: checkmate, timeout, resignation, stalemate, repetition, 50-move
- Fix config: 200 total games (not 400)

* Simplify game endings - parse merged PGN directly

- Remove per-chunk termination tracking
- Parse game endings from merged PGN in aggregate step
- Cleaner and less error-prone
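Parsing endings from the merged PGN can be sketched roughly as below. The `{...}` termination comments and their exact wording depend on the cutechess-cli version, so the strings here are illustrative:

```python
import re
from collections import Counter

def count_endings(pgn_text):
    """Tally the brace comment cutechess writes before each game's result token,
    e.g. '... {Black loses on time} 0-1'."""
    pattern = re.compile(r"\{([^}]*)\}\s*(?:1-0|0-1|1/2-1/2)")
    return Counter(m.group(1).strip() for m in pattern.finditer(pgn_text))
```

Running this once over the merged PGN in the aggregate step avoids carrying per-chunk counters through the matrix jobs.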

* Extract game endings dynamically from PGN text

* Filter out mates from game endings (redundant with win/loss)

* Rename to 'Non-checkmate endings'

* Add skill level 3 and skip aggregate if all jobs cancelled

- Test against Stockfish skill levels 3, 4, and 5 (300 total games)
- Only run aggregate job if at least one benchmark succeeded

* Hardcode concurrency to 10 for faster benchmarks

* Increase to 20 rounds and 20 concurrency (600 total games)

* Reduce to 5 chunks (15 total jobs, 300 games)

* Add PR reactions: eyes on start, thumbs up on complete

- React with 👀 when benchmark starts
- React with 👍 after results are posted

* Add local benchmark script

* Add skill level 1, increase to 200 games per level (800 total)

* Revert CI changes, update local script: skill level 1, 200 games/level

* Add skill level 2 to local benchmark script

* Update benchmark settings

- Local script: 100 rounds, 15 concurrency
- CI: Remove eyes reaction when adding thumbs up

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
luccabb and others added 2 commits on January 27, 2026 at 11:18
* Only run benchmarks when engine code changes

* Remove lichess from path filter (not engine code)

* Run benchmarks on PRs to any branch, not just master
@github-actions commented:

🔬 Stockfish Benchmark Results

vs Stockfish Skill Level 3

| Metric   | Wins | Losses | Draws | Total | Win % |
|----------|------|--------|-------|-------|-------|
| Overall  | 0    | 100    | 0     | 100   | 0%    |
| As White | 0    | 50     | 0     | 50    | 0%    |
| As Black | 0    | 50     | 0     | 50    | 0%    |

vs Stockfish Skill Level 4

| Metric   | Wins | Losses | Draws | Total | Win % |
|----------|------|--------|-------|-------|-------|
| Overall  | 0    | 100    | 0     | 100   | 0%    |
| As White | 0    | 50     | 0     | 50    | 0%    |
| As Black | 0    | 50     | 0     | 50    | 0%    |

vs Stockfish Skill Level 5

| Metric   | Wins | Losses | Draws | Total | Win % |
|----------|------|--------|-------|-------|-------|
| Overall  | 0    | 100    | 0     | 100   | 0%    |
| As White | 0    | 50     | 0     | 50    | 0%    |
| As Black | 0    | 50     | 0     | 50    | 0%    |

Configuration

  • 5 chunks × 20 rounds × 3 skill levels = 300 total games
  • Each opening played with colors reversed (-repeat) for fairness
  • Moonfish: 60s per move
  • Stockfish: 60+5 time control
