mirror of
https://git.mirrors.martin98.com/https://github.com/infiniflow/ragflow.git
synced 2025-08-11 03:29:00 +08:00
Update document (#3746)
### What problem does this PR solve?

Fix description on local LLM deployment case

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
This commit is contained in:
parent
06a21d2031
commit
0a62dd7a7e
@@ -74,9 +74,9 @@ In the popup window, complete basic settings for Ollama:
4. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model is an image-to-text (vision) model.
:::caution NOTE
- If RAGFlow is in Docker and Ollama runs on the same host machine, use `http://host.docker.internal:11434` as base URL.
- If your Ollama and RAGFlow run on the same machine, use `http://localhost:11434` as base URL.
- If your Ollama and RAGFlow run on the same machine and Ollama is in Docker, use `http://host.docker.internal:11434` as base URL.
- If your Ollama runs on a different machine from RAGFlow, use `http://<IP_OF_OLLAMA_MACHINE>:11434` as base URL.
:::
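A quick way to confirm which base URL applies to your setup (a sketch, not part of the original doc; `ragflow-server` is an assumed container name — substitute your own) is to query Ollama's `/api/version` endpoint from inside the RAGFlow container:

```shell
# Assumed container name; replace with your actual RAGFlow container.
# If this returns a JSON version string, the base URL is reachable.
docker exec -it ragflow-server \
  curl -s http://host.docker.internal:11434/api/version

# Note: on Linux, host.docker.internal only resolves if the container
# was started with: --add-host=host.docker.internal:host-gateway
```

If the call fails, retry with `http://localhost:11434` or `http://<IP_OF_OLLAMA_MACHINE>:11434` per the cases listed above.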
:::danger WARNING