---
pipeline_tag: any-to-any
library_name: transformers
---
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone

# MiniCPM-o 2.6 int4
This is the int4 quantized version of [MiniCPM-o 2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6). Running the int4 version requires less GPU memory (about 9 GB).
## Prepare code and install AutoGPTQ
We are submitting a PR to officially support MiniCPM-o 2.6 inference. For now, install AutoGPTQ from the `minicpmo` branch of our fork:
```bash
git clone https://github.com/OpenBMB/AutoGPTQ.git && cd AutoGPTQ
git checkout minicpmo

# install AutoGPTQ from source
pip install -vvv --no-build-isolation -e .
```
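If the editable install succeeded, the fork should be importable from Python. A minimal sanity check (the exact version string depends on what the checked-out branch reports):

```python
# Verify that the locally built AutoGPTQ fork can be imported.
import auto_gptq
from auto_gptq import AutoGPTQForCausalLM

print(auto_gptq.__version__)   # version reported by the checked-out branch
print(AutoGPTQForCausalLM)     # the loader class used in the usage example below
```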
## Usage of MiniCPM-o-2_6-int4
Change the model initialization in the original usage code to `AutoGPTQForCausalLM.from_quantized`:
```python
import torch
from transformers import AutoModel, AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the int4-quantized checkpoint (exllama kernels disabled).
model = AutoGPTQForCausalLM.from_quantized(
    'openbmb/MiniCPM-o-2_6-int4',
    torch_dtype=torch.bfloat16,
    device="cuda:0",
    trust_remote_code=True,
    disable_exllama=True,
    disable_exllamav2=True
)

tokenizer = AutoTokenizer.from_pretrained(
    'openbmb/MiniCPM-o-2_6-int4',
    trust_remote_code=True
)

# Initialize the TTS module used for speech output.
model.init_tts()
```
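To sanity-check the roughly 9 GB figure above, you can inspect CUDA memory after loading; actual numbers vary with the GPU, driver, and which modules are initialized:

```python
# Rough check of GPU memory use after loading the int4 model.
import torch

gib = 1024 ** 3
print(f"allocated: {torch.cuda.memory_allocated('cuda:0') / gib:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved('cuda:0') / gib:.2f} GiB")
```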
For detailed usage, refer to the Usage section of the [MiniCPM-o-2_6](https://huggingface.co/openbmb/MiniCPM-o-2_6#usage) model card.
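As a rough illustration, a single-image chat call might look like the sketch below. It assumes the `chat` interface documented on the MiniCPM-o-2_6 card (the `msgs` format and argument names come from that card, and the image path is a placeholder):

```python
from PIL import Image

# Placeholder image path; replace with a real file.
image = Image.open('example.jpg').convert('RGB')
question = 'What is shown in this image?'

# msgs format follows the MiniCPM-o-2_6 usage example.
msgs = [{'role': 'user', 'content': [image, question]}]

res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)
```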