[Large Models] Qwen safetensors: fixing rust.SafetensorError: Error while deserializing header: HeaderTooLarge

2024-02-26



  • Qwen Introduction
  • Requirements
  • Model Download
  • Model Inference
  • Solution 1
  • Solution 2

Qwen Introduction

Qwen (通义千问) is the open-source large language model series from Alibaba Cloud.

GitHub: https://github.com/QwenLM/Qwen

    Requirements

• Python 3.8 or later
• PyTorch 1.12 or later; 2.0 or later recommended
• CUDA 11.4 or later recommended (relevant for GPU and flash-attention users; see the environment check below)
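A quick way to confirm the local environment meets these requirements (a minimal sketch using only the standard library and PyTorch):

    import sys
    import torch

    # Python must be 3.8+ and PyTorch 1.12+ (2.0+ recommended) per the list above.
    print("python :", sys.version.split()[0])
    print("pytorch:", torch.__version__)
    print("cuda   :", torch.version.cuda)          # None on a CPU-only PyTorch build
    print("gpu ok :", torch.cuda.is_available())   # False means CPU-only inference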

Model Download

      git clone https://www.modelscope.cn/qwen/Qwen-7B-Chat.git
      
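Note: ModelScope git repositories store the large *.safetensors weight shards with Git LFS. If git-lfs is missing when you clone, Git checks out tiny pointer stubs instead of the real files, which is a common cause of the HeaderTooLarge error analyzed below (see the solution sections). A quick size check, assuming the clone landed in ./Qwen-7B-Chat:

    # LFS pointer stubs are ~130 bytes; real weight shards are gigabytes.
    ls -lh Qwen-7B-Chat/*.safetensors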

Model Inference

      infer_qwen.py:

    from modelscope import AutoModelForCausalLM, AutoTokenizer
    from modelscope import GenerationConfig

    # Note: the default behavior now has injection attack prevention off.
    tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", trust_remote_code=True)

    # use bf16
    # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
    # use fp16
    # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
    # use cpu only
    # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True).eval()
    # auto mode: automatically selects precision based on the device
    model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()

    # Specify hyperparameters for generation; not needed with transformers>=4.32.0.
    # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)  # e.g. max generation length, top_p, etc.

    # 1st dialogue turn ("Hello")
    response, history = model.chat(tokenizer, "你好", history=None)
    print(response)
    # 2nd dialogue turn ("Tell me a story about a young person who works hard at a startup and finally succeeds.")
    response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history)
    print(response)
    # 3rd dialogue turn ("Give this story a title.")
    response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history)
    print(response)
      

Running the inference script fails with the following error:

      root:/workspace/tmp/LLM# python infer_qwen.py
      2023-12-16 01:35:43,760 - modelscope - INFO - PyTorch version 2.0.1 Found.
      2023-12-16 01:35:43,762 - modelscope - INFO - TensorFlow version 2.10.0 Found.
      2023-12-16 01:35:43,762 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
      2023-12-16 01:35:43,883 - modelscope - INFO - Loading done! Current index file version is 1.9.1, with md5 5f21e812815a5fbb6ced75f40587fe94 and a total number of 924 components indexed
      The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to "AutoModelForCausalLM.from_pretrained".
      Try importing flash-attention for faster inference...
      Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
      Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
      Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
Loading checkpoint shards:   0%|          | 0/8 [00:00<?, ?it/s]
...
rust.SafetensorError: Error while deserializing header: HeaderTooLarge
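
Solution 1

The HeaderTooLarge error usually means the *.safetensors files on disk are not real weight shards. When the repository is cloned without git-lfs, each shard is checked out as a small Git LFS pointer stub, and safetensors then fails to parse the stub as a weight-file header. The sketch below assumes the model was cloned into ./Qwen-7B-Chat as in the download step; install git-lfs and pull the actual files:

    # Install Git LFS (Debian/Ubuntu shown; use your platform's package manager).
    apt-get install git-lfs

    cd Qwen-7B-Chat
    git lfs install      # enable the LFS hooks for this user
    git lfs pull         # replace pointer stubs with the real *.safetensors shards

    # Verify: each shard should now be gigabytes, not bytes.
    ls -lh *.safetensors

Re-run infer_qwen.py once the shards have their full size.

Solution 2

Alternatively, skip git entirely and let ModelScope download and cache the model. The snippet below is a minimal sketch using modelscope's snapshot_download, which fetches complete files (no LFS pointers involved):

    from modelscope import snapshot_download

    # Downloads to the ModelScope cache (~/.cache/modelscope by default)
    # and returns the local directory containing the full weight shards.
    model_dir = snapshot_download("qwen/Qwen-7B-Chat")
    print(model_dir)

Pass model_dir (or keep the model id "qwen/Qwen-7B-Chat", which triggers the same download) to AutoTokenizer.from_pretrained and AutoModelForCausalLM.from_pretrained in infer_qwen.py.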