feat(grpo_trainer.py): Variational Sequence-Level Soft Policy Optimization (VESPO)#5199

Open

casinca wants to merge 7 commits into huggingface:main from casinca:VESPO

Conversation


@casinca casinca commented Feb 27, 2026

What does this PR do?

This PR implements the VESPO loss and resolves #5196

Official implementation: https://github.com/FloyedShen/VESPO/blob/main/recipe/vespo/code/core_algos.py
Paper: https://huggingface.co/papers/2602.10693

Note:

  • The paper and the official implementation use different variable names; for clarity, the mapping is:

    • c1 = k = α
    • c2 = lambda
  • Docstrings/comments are a mix of the official impl's and my own writing.

 

Alternative options:

  • Currently VESPO has 4 hyperparameters: k_pos, lambda_pos, k_neg, lambda_neg. I could reduce these to 2 tuples of 2 floats, e.g. lambdas = (pos, neg), if that's preferable.
  • The original impl also returns w_seq for metrics. I can include it in metrics, but that would force me to either return a tuple from get_gamma_weights or remove @staticmethod. Not sure what the preference is here.
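To make the tuple alternative concrete, here is a hypothetical sketch of the two argument layouts. The class and field names, and the default values, are illustrative only, not the actual GRPOConfig fields:

```python
from dataclasses import dataclass

@dataclass
class VespoArgsFlat:
    # current layout: four scalars (defaults illustrative)
    vespo_k_pos: float = 1.0
    vespo_lambda_pos: float = 1.0
    vespo_k_neg: float = 1.0
    vespo_lambda_neg: float = 1.0

@dataclass
class VespoArgsPaired:
    # alternative layout: two (pos, neg) tuples
    vespo_ks: tuple = (1.0, 1.0)        # (k_pos, k_neg)
    vespo_lambdas: tuple = (1.0, 1.0)   # (lambda_pos, lambda_neg)
```

The paired layout halves the number of config fields at the cost of slightly less discoverable names.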

 

For efficiency, the TRL VESPO implementation differs slightly from the official one. It's ~25% faster per call on GPU, and tested for equivalence.

With importance_sampling_ratio:
-----------------------------------------------------------------
B x T         TRL_VESPO (ms)    OG_VESPO (ms)     Faster
-----------------------------------------------------------------
8 x 128         0.4290          0.5301          TRL_VESPO (1.24x)
16 x 256        0.4281          0.5302          TRL_VESPO (1.24x)
32 x 512        0.4283          0.5299          TRL_VESPO (1.24x)
64 x 512        0.4284          0.5294          TRL_VESPO (1.24x)
128 x 512       0.4286          0.5322          TRL_VESPO (1.24x)
32 x 1024       0.4473          0.5313          TRL_VESPO (1.19x)
64 x 1024       0.4285          0.5360          TRL_VESPO (1.25x)
128 x 1024      0.4240          0.5203          TRL_VESPO (1.23x)
-----------------------------------------------------------------
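For reference, a minimal sketch of a per-call timing harness (CPU fallback shown; the numbers above were measured on GPU, where you'd record torch.cuda.Event pairs and synchronize instead — the helper name and loop counts here are assumptions, not the PR's actual benchmark script):

```python
import time

def bench_ms(fn, iters=100, warmup=10):
    """Average wall-clock milliseconds per call of `fn`.

    CPU-only sketch; for CUDA kernels, record torch.cuda.Event pairs
    and call torch.cuda.synchronize() before reading elapsed time.
    """
    for _ in range(warmup):
        fn()  # warm caches before timing
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3

# example: time a trivial workload
ms = bench_ms(lambda: sum(range(1000)))
```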

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@casinca casinca changed the title init feat(grpo_trainer.py): Variational Sequence-Level Soft Policy Optimization (VESPO) Feb 27, 2026
@casinca casinca marked this pull request as ready for review March 1, 2026 19:22
@casinca
Copy link
Contributor Author

casinca commented Mar 2, 2026

I owe a better explanation, to facilitate the review, of why math is imported for lower_clamp = math.log(1e-8) in get_gamma_weights.

In the original implementation below, the author recomputes log_w_seq from w_seq, but we already have the log from seq_log_ratio_clamped. The only effect of the recomputation is that the lower clamp is tightened to min=1e-8.

(screenshot: the original implementation recomputing log_w_seq from w_seq)

 

To avoid a second log op in TRL, I clamp directly in log space once: log_w_seq = torch.clamp(seq_log_ratio, lower_clamp, 20.0). This ends up being the same.

This is solely to follow the original implementation; otherwise I'm not really sure whether tightening the lower bound from $e^{-20}$ to $e^{-18.42}$ (i.e. $e^{\log(10^{-8})}$) matters. I opened an issue upstream about this: FloyedShen/VESPO#6

If keeping the original logic and importing math is problematic, an alternative would be to hardcode log(1e-8), or to obtain it from a tensor: torch.log(torch.tensor(1e-8)).
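A pure-Python sanity check of the "ends up being the same" claim, using scalar math as a stand-in for the tensor ops (the -20/20 bounds and min=1e-8 follow the discussion above; everything else is illustrative):

```python
import math

LOWER_CLAMP = math.log(1e-8)  # ~ -18.42

def log_w_two_ops(seq_log_ratio):
    # original-style: clamp in log space, exponentiate, clamp again, re-log
    clamped = min(max(seq_log_ratio, -20.0), 20.0)
    w_seq = max(math.exp(clamped), 1e-8)  # second clamp, min=1e-8
    return math.log(w_seq)

def log_w_one_op(seq_log_ratio):
    # TRL-style: a single clamp directly in log space
    return min(max(seq_log_ratio, LOWER_CLAMP), 20.0)

# the two paths agree across the interesting regions
for x in (-25.0, -19.0, -5.0, 0.0, 5.0, 19.0, 25.0):
    assert math.isclose(log_w_two_ops(x), log_w_one_op(x), rel_tol=1e-12)
```

The single clamp saves one exp and one log per call, which is where part of the speedup in the table above comes from.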


