
The demo does not work; see the following exception #10

@ZHAMoonlight

Description

(.venv) (base) xupeng@localhost ~/workspace/git/GPA/scripts/inference (main)
$ /Users/xupeng/workspace/git/GPA/.venv/bin/python gpa_inference.py --task stt \
    --src_audio_path "../audios/test_app_asr.wav" \
    --gpa_model_path "${GPA_MODEL_DIR}" \
    --tokenizer_path "${GPA_MODEL_DIR}/glm-4-voice-tokenizer" \
    --bicodec_tokenizer_path "${GPA_MODEL_DIR}/BiCodec" \
    --text_tokenizer_path "${GPA_MODEL_DIR}"
Using device: cpu
Loading tokenizers...
Loading weights: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 249/249 [00:00<00:00, 3543.02it/s, Materializing param=layers.15.self_attn_layer_norm.weight]
/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/modeling_rope_utils.py:927: FutureWarning: rope_config_validation is deprecated and has been removed. Its functionality has been moved to RotaryEmbeddingConfigMixin.validate_rope method. PreTrainedConfig inherits this class, so please call self.validate_rope() instead. Also, make sure to use the new rope_parameters syntax. You can call self.standardize_rope_params() in the meantime.
warnings.warn(
Traceback (most recent call last):
  File "/Users/xupeng/workspace/git/GPA/scripts/inference/gpa_inference.py", line 297, in <module>
    main()
  File "/Users/xupeng/workspace/git/GPA/scripts/inference/gpa_inference.py", line 265, in main
    inference = GPAInference(
                ^^^^^^^^^^^^^
  File "/Users/xupeng/workspace/git/GPA/scripts/inference/gpa_inference.py", line 31, in __init__
    self._load_models()
  File "/Users/xupeng/workspace/git/GPA/scripts/inference/gpa_inference.py", line 38, in _load_models
    self.text_tokenizer = AutoTokenizer.from_pretrained(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1751, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2003, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/models/qwen2/tokenization_qwen2.py", line 89, in __init__
    super().__init__(
  File "/Users/xupeng/workspace/git/GPA/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_tokenizers.py", line 376, in __init__
    self._tokenizer = self._patch_mistral_regex(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: transformers.tokenization_utils_tokenizers.TokenizersBackend._patch_mistral_regex() got multiple values for keyword argument 'fix_mistral_regex'
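The final TypeError is a generic Python failure mode rather than anything specific to GPA: the same parameter receives a value both positionally and by keyword. A minimal, self-contained sketch of that failure mode (the method name mirrors the traceback, but the signature and body here are illustrative assumptions, not the real transformers code):

```python
# Illustrative sketch of the error class in the traceback: a keyword
# argument collides with a value that was already bound positionally.
# The method name echoes the traceback; its signature is hypothetical.
class TokenizersBackend:
    def _patch_mistral_regex(self, tokenizer, fix_mistral_regex=False):
        return tokenizer, fix_mistral_regex

backend = TokenizersBackend()

try:
    # The second positional argument already fills the fix_mistral_regex
    # slot, so the explicit keyword is a duplicate binding.
    backend._patch_mistral_regex("tok", True, fix_mistral_regex=True)
except TypeError as exc:
    print(exc)  # "got multiple values for argument 'fix_mistral_regex'"
```

Because the duplicate binding here happens entirely inside transformers internals, this usually indicates a version mismatch between the installed transformers release and the one the project was developed against, rather than a bug in the caller's arguments.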
