From 0a62dd7a7ee9d2a0e4d311c13faec688c26c654b Mon Sep 17 00:00:00 2001
From: Jin Hai
Date: Fri, 29 Nov 2024 14:50:45 +0800
Subject: [PATCH] Update document (#3746)

### What problem does this PR solve?

Fix description on local LLM deployment case

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
---
 docs/guides/deploy_local_llm.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/guides/deploy_local_llm.mdx b/docs/guides/deploy_local_llm.mdx
index 76c8543c6..1c7b856d4 100644
--- a/docs/guides/deploy_local_llm.mdx
+++ b/docs/guides/deploy_local_llm.mdx
@@ -74,9 +74,9 @@ In the popup window, complete basic settings for Ollama:
 4. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.

 :::caution NOTE
+- If RAGFlow is in Docker and Ollama runs on the same host machine, use `http://host.docker.internal:11434` as base URL.
 - If your Ollama and RAGFlow run on the same machine, use `http://localhost:11434` as base URL.
-- If your Ollama and RAGFlow run on the same machine and Ollama is in Docker, use `http://host.docker.internal:11434` as base URL.
-- If your Ollama runs on a different machine from RAGFlow, use `http://:11434` as base URL.
+- If your Ollama runs on a different machine from RAGFlow, use `http://:11434` as base URL.
 :::

 :::danger WARNING
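The base-URL choice the updated note describes can be sanity-checked before saving the Ollama settings in RAGFlow. Below is a minimal, illustrative Python sketch (not part of this patch): it assumes the `requests` package and Ollama's standard `GET /api/tags` endpoint, and it should be run from wherever RAGFlow itself runs (for the `host.docker.internal` case, from inside the RAGFlow container).

```python
# check_ollama_url.py -- hypothetical helper, not part of this patch.
# Verifies that a candidate Ollama base URL is reachable from where RAGFlow runs.
# Assumes the `requests` package and Ollama's standard GET /api/tags endpoint.
import sys

import requests


def check_base_url(base_url: str) -> bool:
    """Return True if Ollama answers at base_url, printing its local models."""
    try:
        resp = requests.get(f"{base_url.rstrip('/')}/api/tags", timeout=5)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"Cannot reach Ollama at {base_url}: {exc}")
        return False
    models = [m["name"] for m in resp.json().get("models", [])]
    print(f"Ollama is reachable at {base_url}; local models: {models}")
    return True


if __name__ == "__main__":
    # Pass the base URL that matches your topology, e.g.:
    #   http://localhost:11434            (RAGFlow and Ollama both on the host, no Docker)
    #   http://host.docker.internal:11434 (RAGFlow in Docker, Ollama on the Docker host)
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:11434"
    sys.exit(0 if check_base_url(url) else 1)
```

If the check fails for the `host.docker.internal` case, the usual culprit is that the Docker host-gateway mapping is not available to the RAGFlow container; for a remote Ollama machine, confirm that port 11434 is exposed and reachable over the network.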