From a0c1d83ddc631790623c2206408cf2a46dc31b28 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E9=BB=84=E8=85=BE?= <101850389+hangters@users.noreply.github.com>
Date: Fri, 19 Jul 2024 18:37:28 +0800
Subject: [PATCH] update quickstart and llm_api_key_setup document (#1615)

### What problem does this PR solve?

update quickstart and llm_api_key_setup document

### Type of change

- [x] Documentation Update

---------

Co-authored-by: Zhedong Cen
---
 docs/guides/llm_api_key_setup.md | 31 +++++++++++++++++--------------
 docs/quickstart.mdx              | 31 +++++++++++++++++--------------
 2 files changed, 34 insertions(+), 28 deletions(-)

diff --git a/docs/guides/llm_api_key_setup.md b/docs/guides/llm_api_key_setup.md
index 58a7f80a7..46dfcc868 100644
--- a/docs/guides/llm_api_key_setup.md
+++ b/docs/guides/llm_api_key_setup.md
@@ -11,22 +11,25 @@ An API key is required for RAGFlow to interact with an online AI model. This gui
 
 For now, RAGFlow supports the following online LLMs. Click the corresponding link to apply for your API key. Most LLM providers grant newly-created accounts trial credit, which will expire in a couple of months, or a promotional amount of free quota.
 
-- [OpenAI](https://platform.openai.com/login?launch),
-- Azure-OpenAI,
-- Gemini,
-- Groq,
-- Mistral,
-- Bedrock,
-- [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
-- [ZHIPU-AI](https://open.bigmodel.cn/),
-- MiniMax
-- [Moonshot](https://platform.moonshot.cn/docs),
-- [DeepSeek](https://platform.deepseek.com/api-docs/),
-- [Baichuan](https://www.baichuan-ai.com/home),
-- [VolcEngine](https://www.volcengine.com/docs/82379).
+- [OpenAI](https://platform.openai.com/login?launch)
+- [Azure-OpenAI](https://ai.azure.com/)
+- [Gemini](https://aistudio.google.com/)
+- [Groq](https://console.groq.com/)
+- [Mistral](https://mistral.ai/)
+- [Bedrock](https://aws.amazon.com/cn/bedrock/)
+- [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model)
+- [ZHIPU-AI](https://open.bigmodel.cn/)
+- [MiniMax](https://platform.minimaxi.com/)
+- [Moonshot](https://platform.moonshot.cn/docs)
+- [DeepSeek](https://platform.deepseek.com/api-docs/)
+- [Baichuan](https://www.baichuan-ai.com/home)
+- [VolcEngine](https://www.volcengine.com/docs/82379)
+- [Jina](https://jina.ai/reader/)
+- [OpenRouter](https://openrouter.ai/)
+- [StepFun](https://platform.stepfun.com/)
 
 :::note
-If you find your online LLM is not on the list, don't feel disheartened. The list is expanding, and you can [file a feature request](https://github.com/infiniflow/ragflow/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) with us! Alternatively, if you have customized or locally-deployed models, you can [bind them to RAGFlow using Ollama or Xinference](./deploy_local_llm.md).
+If you find your online LLM is not on the list, don't feel disheartened. The list is expanding, and you can [file a feature request](https://github.com/infiniflow/ragflow/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) with us! Alternatively, if you have customized or locally-deployed models, you can [bind them to RAGFlow using Ollama, Xinference, or LocalAI](./deploy_local_llm.md).
 :::
 
 ## Configure your API key

diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index c1d3d897d..9a440601e 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -176,22 +176,25 @@ With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (*
 
 RAGFlow is a RAG engine, and it needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. For now, RAGFlow supports the following LLMs, and the list is expanding:
 
-- OpenAI
-- Azure-OpenAI
-- Gemini
-- Groq
-- Mistral
-- Bedrock
-- Tongyi-Qianwen
-- ZHIPU-AI
-- MiniMax
-- Moonshot
-- DeepSeek-V2
-- Baichuan
-- VolcEngine
+- [OpenAI](https://platform.openai.com/login?launch)
+- [Azure-OpenAI](https://ai.azure.com/)
+- [Gemini](https://aistudio.google.com/)
+- [Groq](https://console.groq.com/)
+- [Mistral](https://mistral.ai/)
+- [Bedrock](https://aws.amazon.com/cn/bedrock/)
+- [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model)
+- [ZHIPU-AI](https://open.bigmodel.cn/)
+- [MiniMax](https://platform.minimaxi.com/)
+- [Moonshot](https://platform.moonshot.cn/docs)
+- [DeepSeek](https://platform.deepseek.com/api-docs/)
+- [Baichuan](https://www.baichuan-ai.com/home)
+- [VolcEngine](https://www.volcengine.com/docs/82379)
+- [Jina](https://jina.ai/reader/)
+- [OpenRouter](https://openrouter.ai/)
+- [StepFun](https://platform.stepfun.com/)
 
 :::note
-RAGFlow also supports deploying LLMs locally using Ollama or Xinference, but this part is not covered in this quick start guide.
+RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalAI, but this part is not covered in this quick start guide.
 :::
 
 To add and configure an LLM: