Environment: Python 3.12.0, vllm==0.11.0, transformers==5.2.0. When deploying with vLLM across multiple nodes and GPUs, running the command vllm serve model/Thinking-with-Map-30B-A3B/ --distributed-executor-backend ray --tensor-parallel-size 2 --port 8002 fails with: AttributeError: Qwen2Tokenizer has no attribute all_special_tokens_extended. Did you mean: 'num_special_tokens_to_add'?
How should I adjust the setup to fix this?