MUCH-segmenter: A fast claim segmentation algorithm

This package implements much_segmenter, a fast, deterministic, and compute-efficient claim segmentation algorithm designed for English, French, Spanish, and German. This algorithm was introduced in our paper:

MUCH: A Multilingual Claim Hallucination Benchmark

Jérémie Dentan¹, Alexi Canesse¹, Davide Buscaldi¹,², Aymen Shabou³, Sonia Vanier¹

¹LIX (École Polytechnique, IP Paris, CNRS), ²LIPN (Université Sorbonne Paris Nord), ³Crédit Agricole SA

https://arxiv.org/abs/2511.17081

Usage and example

The main function of this package is much_segmentation, which segments an LLM generation into token chunks.

Example

In this example, the LLM generation contains 12 tokens. Our claim segmentation algorithm splits this generation into 3 claims: the first contains tokens 0-3 ("No, Xining"), the second tokens 4-7 (" is the largest city"), and the last claim contains tokens 8-11 (" in Qinghai.").

# Imports
from much_segmenter import much_segmentation, get_repr_string
from transformers import AutoTokenizer

# Defining the generation and the tokenizer
generation = "No, Xining is the largest city in Qinghai."
llm_tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

# Segmentation
token_chunks = much_segmentation(generation, llm_tokenizer)
print(token_chunks) # Should be [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]

# Display of the result
print(get_repr_string(generation, token_chunks, tokenizer=llm_tokenizer))

# Output should be:
"""
<Segmentation>
# 0 : No, Xining
# 1 :  is the largest city
# 2 :  in Qinghai.
"""

Pre-computed tokens

Modern tokenizers do not guarantee an exact round trip. An LLM generates a sequence of output_tokens that is decoded into generation = tokenizer.decode(output_tokens); however, it may happen that tokenizer.encode(generation) != output_tokens. This is because the same text can be encoded in several ways, and the encoding chosen by the tokenizer may differ from the path taken during LLM generation.

This behavior is problematic because much_segmenter returns token indices: any mismatch between the tokens used during segmentation and the tokens actually generated by the LLM leads to computation errors downstream. Consequently, much_segmenter includes an optional precomputed_tokens parameter, which should contain the output tokens exactly as generated by the LLM.

⚠️ This optional parameter should ALWAYS be used when the output tokens are known, to avoid any token mismatch during segmentation. ⚠️
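
As an illustration, here is a minimal sketch of this usage. We assume precomputed_tokens is passed as a keyword argument of much_segmentation; in real use, output_tokens would come from the LLM's generation loop (e.g. model.generate(...)), whereas here we stand them in by encoding the text ourselves.

from much_segmenter import much_segmentation
from transformers import AutoTokenizer

llm_tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

# In real use these token IDs come from the LLM's generation loop; here we
# fabricate them by encoding the text, so the round trip trivially matches.
output_tokens = llm_tokenizer.encode("No, Xining is the largest city in Qinghai.")
generation = llm_tokenizer.decode(output_tokens)

# Passing the original tokens guarantees that the returned indices refer to
# them, even when llm_tokenizer.encode(generation) != output_tokens.
# (precomputed_tokens as a keyword argument is an assumption on our part.)
token_chunks = much_segmentation(
    generation, llm_tokenizer, precomputed_tokens=output_tokens
)
print(token_chunks)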

Pseudo-code and algorithmic details

Our segmentation algorithm is fully rule-based and does not require external models or internet access, making it suitable for offline or computation-limited use cases. It is designed for English, French, Spanish, and German. We retain only these four European languages because their stopword and punctuation systems are similar. We expect our segmenter to be easily adaptable to languages with similar punctuation and stopwords, although we have not tested it beyond the four languages mentioned.

Our algorithm proceeds in two main steps. First, we split the LLM generation into words using an external word tokenizer and use these words to identify the character indices at which claims start. Second, we map these character indices to the tokens of the LLM generation. For a detailed presentation of the algorithm and a discussion of its pseudo-code, please refer to our research paper on arXiv: https://arxiv.org/abs/2511.17081. A rough illustration of this two-step structure follows.
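
The sketch below is NOT the MUCH algorithm: step 1 uses a toy rule (split after sentence-final punctuation) as a stand-in for the paper's word-based rules. It only illustrates the two-step structure, with step 2 mapping character indices to token indices through the tokenizer's offset mapping.

import re
from transformers import AutoTokenizer

def sketch_segmentation(generation: str, tokenizer) -> list[list[int]]:
    # Step 1: character indices where claims start. The rule here is a toy
    # stand-in for MUCH's word-based rules described in the paper.
    starts = [0] + [m.end() for m in re.finditer(r"[.!?]\s", generation)]

    # Step 2: map character indices to token indices via the tokenizer's
    # offset mapping (requires a fast tokenizer).
    encoding = tokenizer(generation, return_offsets_mapping=True,
                         add_special_tokens=False)
    chunks: list[list[int]] = [[] for _ in starts]
    for tok_idx, (char_start, _) in enumerate(encoding["offset_mapping"]):
        # Assign each token to the latest claim starting at or before it.
        claim_idx = max(i for i, s in enumerate(starts) if s <= char_start)
        chunks[claim_idx].append(tok_idx)
    return [chunk for chunk in chunks if chunk]

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
print(sketch_segmentation("No, Xining is big. It is in Qinghai.", tokenizer))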

Runtime

This claim segmentation algorithm was designed to be extremely fast. Segmenting the entire MUCH dataset took 6 seconds. This dataset includes 4,873 samples containing a total of 392,022 characters and 101,917 output tokens (roughly 17,000 tokens per second), which were segmented into 25,624 claims (20,751 claims after removing the final claims containing only the EOS token). For reference, the LLM generation runtime for these samples was 2,758 seconds, meaning that segmentation represents only a 0.2% overhead.

These runtimes are single-process and single-thread measurements; segmentation can be further accelerated with parallel computing.
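
For instance, here is a minimal sketch of sample-level parallelism, assuming samples are independent. The sample texts are placeholders, and each worker loads its own tokenizer to avoid pickling issues.

from concurrent.futures import ProcessPoolExecutor

from much_segmenter import much_segmentation
from transformers import AutoTokenizer

def segment_one(text: str) -> list[list[int]]:
    # Each worker loads its own tokenizer; cache it at module level if the
    # per-call loading cost matters for your workload.
    tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
    return much_segmentation(text, tokenizer)

if __name__ == "__main__":
    # Placeholder samples; in practice, iterate over your dataset.
    samples = ["No, Xining is the largest city in Qinghai.", "Paris is in France."]
    with ProcessPoolExecutor() as pool:
        chunks_per_sample = list(pool.map(segment_one, samples))
    print(chunks_per_sample)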

Related artifacts

This package is released alongside the MUCH benchmark, whose resources you might explore to see applications of our claim segmentation algorithm.

Acknowledgement

This work received financial support from the research chair Trustworthy and Responsible AI at École Polytechnique.

This work was granted access to the HPC resources of IDRIS under the allocation AD011014843R1, made by GENCI.

Copyright and License

Copyright 2025–present Laboratoire d’Informatique de l’École Polytechnique.

This repository is released under the Apache-2.0 license.

Please cite this work as follows:

@misc{dentan_much_2025,
  title = {MUCH: A Multilingual Claim Hallucination Benchmark},
  author = {Dentan, Jérémie and Canesse, Alexi and Buscaldi, Davide and Shabou, Aymen and Vanier, Sonia},
  year = {2025},
  url = {https://arxiv.org/abs/2511.17081},
}
