Mirror of https://www.modelscope.cn/OpenBMB/MiniCPM-o-2_6-int4.git (synced 2025-08-15 04:35:53 +08:00)

Update README.md
parent 1522c8f912, commit e5fe3e1dc5

@@ -27,7 +27,7 @@ base_model:

<h1>A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone</h1>

## MiniCPM-o 2.6 int4

-This is the int4 quantized version of [**MiniCPM-o 2.6**](https://huggingface.co/openbmb/MiniCPM-o-2_6).
+This is the int4 quantized version of [**MiniCPM-o 2.6**](https://github.com/RanchiZhao/AutoGPTQ).

Running the int4 version uses lower GPU memory (about 9 GB).

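As a rough illustration of the memory claim, a minimal loading sketch follows. The Hugging Face repo id `openbmb/MiniCPM-o-2_6-int4`, the keyword arguments, and the memory check are assumptions based on standard Transformers usage rather than something shown in this diff, and the AutoGPTQ setup from the next section must already be installed.

```python
# Hedged sketch: model id, dtype, and device placement are assumptions.
# Requires the AutoGPTQ branch from the "Prepare code" section below.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-o-2_6-int4"  # assumed repo id for this checkpoint

model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,      # MiniCPM-o ships custom modeling code
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",         # int4 GPTQ weights are meant to run on GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Rough check of the "about 9 GB" figure after the weights are loaded.
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GiB")
```
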
### Prepare code and install AutoGPTQ

@@ -35,7 +35,7 @@ Running the int4 version uses lower GPU memory (about 9 GB).

We are submitting a PR to officially support MiniCPM-o 2.6 inference.

```bash
-git clone https://github.com/OpenBMB/AutoGPTQ.git && cd AutoGPTQ
+git clone https://github.com/RanchiZhao/AutoGPTQ.git && cd AutoGPTQ
git checkout minicpmo

# install AutoGPTQ
```
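
The diff is cut off right after the `# install AutoGPTQ` comment. The remaining step is presumably AutoGPTQ's usual from-source editable install, run inside the checkout on the `minicpmo` branch; this is an assumption, since the command itself is not shown here, but it would typically look like `pip install -vvv --no-build-isolation -e .`.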

File diff suppressed because it is too large.