diff --git a/docs/guides/chat/set_chat_variables.md b/docs/guides/chat/set_chat_variables.md
index d850748ef..a4364b204 100644
--- a/docs/guides/chat/set_chat_variables.md
+++ b/docs/guides/chat/set_chat_variables.md
@@ -30,7 +30,7 @@ In the **Variable** section, you add, remove, or update variables.
 `{knowledge}` is the system's reserved variable, representing the chunks retrieved from the knowledge base(s) specified by **Knowledge bases** under the **Assistant settings** tab. If your chat assistant is associated with certain knowledge bases, you can keep it as is.
 
 :::info NOTE
-It does not currently make a difference whether you set `{knowledge}` to optional or mandatory, but note that this design will be updated at a later point.
+It currently makes no difference whether `{knowledge}` is set as optional or mandatory, but note that this design will be updated in due course.
 :::
 
 From v0.17.0 onward, you can start an AI chat without specifying knowledge bases. In this case, we recommend removing the `{knowledge}` variable to prevent unnecessary reference and keeping the **Empty response** field empty to avoid errors.
diff --git a/docs/guides/dataset/best_practices/accelerate_doc_indexing.mdx b/docs/guides/dataset/best_practices/accelerate_doc_indexing.mdx
index 7765d8b4a..bc0dde11b 100644
--- a/docs/guides/dataset/best_practices/accelerate_doc_indexing.mdx
+++ b/docs/guides/dataset/best_practices/accelerate_doc_indexing.mdx
@@ -16,4 +16,4 @@ Please note that some of your settings may consume a significant amount of time.
 - On the configuration page of your knowledge base, switch off **Use RAPTOR to enhance retrieval**.
 - Extracting knowledge graph (GraphRAG) is time-consuming.
 - Disable **Auto-keyword** and **Auto-question** on the configuration page of your knowledge base, as both depend on the LLM.
-- **v0.17.0+:** If your document is plain text PDF and does not require GPU-intensive processes like OCR (Optical Character Recognition), TSR (Table Structure Recognition), or DLA (Document Layout Analysis), you can choose **Naive** over **DeepDoc** or other time-consuming large model options in the **Document parser** dropdown. This will substantially reduce document parsing time.
+- **v0.17.0+:** If all PDFs in your knowledge base are plain text and do not require GPU-intensive processes like OCR (Optical Character Recognition), TSR (Table Structure Recognition), or DLA (Document Layout Analysis), you can choose **Naive** over **DeepDoc** or other time-consuming large model options in the **Document parser** dropdown. This will substantially reduce document parsing time.
diff --git a/docs/guides/dataset/enable_raptor.md b/docs/guides/dataset/enable_raptor.md
index 660233a38..701b60113 100644
--- a/docs/guides/dataset/enable_raptor.md
+++ b/docs/guides/dataset/enable_raptor.md
@@ -47,7 +47,7 @@ The RAPTOR feature is disabled by default. To enable it, manually switch on the
 
 ### Prompt
 
-The following prompt will be applied recursively for cluster summarization, with `{cluster_content}` serving as an internal parameter. We recommend that you keep it as-is for now. The design will be updated at a later point.
+The following prompt will be applied recursively for cluster summarization, with `{cluster_content}` serving as an internal parameter. We recommend that you keep it as-is for now. The design will be updated in due course.
 
 ```
 Please summarize the following paragraphs... Paragraphs as following:
diff --git a/docs/guides/dataset/select_pdf_parser.md b/docs/guides/dataset/select_pdf_parser.md
index c3b632e12..60625a62c 100644
--- a/docs/guides/dataset/select_pdf_parser.md
+++ b/docs/guides/dataset/select_pdf_parser.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 0
+sidebar_position: 2
 slug: /select_pdf_parser
 ---
 
@@ -23,7 +23,7 @@ RAGFlow isn't one-size-fits-all. It is built for flexibility and supports deeper
 - **Laws**
 - **Presentation**
 - **One**
-- To use a third-party visual model for parsing PDFs, ensure you have set a default image2txt model under **Set default models** on the **Model providers** page.
+- To use a third-party visual model for parsing PDFs, ensure you have set a default img2txt model under **Set default models** on the **Model providers** page.
 
 ## Procedure
 
@@ -33,9 +33,9 @@ RAGFlow isn't one-size-fits-all. It is built for flexibility and supports deeper
 
 2. Select the option that works best with your scenario:
 
-- DeepDoc: (Default) The default visual model for OCR, TSR, and DLR tasks.
-- Naive: Skip OCR, TSR, and DLR tasks if *all* your PDFs are plain text.
-- A third-party visual model provided by a specific model provider.
+   - DeepDoc: (Default) The visual model for OCR, TSR, and DLA tasks, which is time-consuming.
+   - Naive: Skips OCR, TSR, and DLA tasks; choose this if *all* your PDFs are plain text.
+   - A third-party visual model provided by a specific model provider.
 
 :::caution WARNING
 Third-party visual models are marked **Experimental**, because we have not fully tested these models for the aforementioned data extraction tasks.
diff --git a/docs/guides/dataset/set_metadata.md b/docs/guides/dataset/set_metadata.md
index d012a2738..77be1e4c5 100644
--- a/docs/guides/dataset/set_metadata.md
+++ b/docs/guides/dataset/set_metadata.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 2
+sidebar_position: 0
 slug: /set_metada
 ---
 
@@ -19,4 +19,10 @@ For example, if you have a dataset of HTML files and want the LLM to cite the so
 Ensure that your metadata is in JSON format; otherwise, your updates will not be applied.
 :::
 
-![Image](https://github.com/user-attachments/assets/379cf2c5-4e37-4b79-8aeb-53bf8e01d326)
\ No newline at end of file
+![Image](https://github.com/user-attachments/assets/379cf2c5-4e37-4b79-8aeb-53bf8e01d326)
+
+## Frequently asked questions
+
+### Can I set metadata for multiple documents at once?
+
+No, RAGFlow does not support setting metadata in batches. If you consider this feature essential, please [raise an issue](https://github.com/infiniflow/ragflow/issues) explaining your use case and its importance.
\ No newline at end of file
diff --git a/docs/guides/models/llm_api_key_setup.md b/docs/guides/models/llm_api_key_setup.md
index d42d4de35..46fd6c8a0 100644
--- a/docs/guides/models/llm_api_key_setup.md
+++ b/docs/guides/models/llm_api_key_setup.md
@@ -49,6 +49,6 @@ After logging into RAGFlow, you can *only* configure API Key on the **Model prov
 5. Click **OK** to confirm your changes.
 
 :::note
-To update an existing model API key at a later point:
+To update an existing model API key:
 ![update api key](https://github.com/infiniflow/ragflow/assets/93570324/0bfba679-33f7-4f6b-9ed6-f0e6e4b228ad)
 :::
\ No newline at end of file
diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index 360761316..5e760904f 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -258,8 +258,6 @@ To add and configure an LLM:
 
 ![add llm](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)
 
-   > Each RAGFlow account is able to use **text-embedding-v2** for free, an embedding model of Tongyi-Qianwen. This is why you can see Tongyi-Qianwen in the **Added models** list. And you may need to update your Tongyi-Qianwen API key at a later point.
-
 2. Click on the desired LLM and update the API key accordingly (DeepSeek-V2 in this case):
 
 ![update api key](https://github.com/infiniflow/ragflow/assets/93570324/4e5e13ef-a98d-42e6-bcb1-0c6045fc1666)
diff --git a/docs/release_notes.md b/docs/release_notes.md
index 0d0c71cce..e3c8908c8 100644
--- a/docs/release_notes.md
+++ b/docs/release_notes.md
@@ -117,7 +117,7 @@ Released on March 3, 2025.
 - AI chat: Leverages Tavily-based web search to enhance contexts in agentic reasoning. To activate this, enter the correct Tavily API key under the **Assistant settings** tab of your chat assistant dialogue.
 - AI chat: Supports starting a chat without specifying knowledge bases.
 - AI chat: HTML files can also be previewed and referenced, in addition to PDF files.
-- Dataset: Adds a **PDF parser**, aka **Document parser**, dropdown menu to dataset configurations. This includes a DeepDoc model option, which is time-consuming, a much faster **naive** option (plain text), which skips DLA (Document Layout Analysis), OCR (Optical Character Recognition), and TSR (Table Structure Recognition) tasks, and several currently *experimental* large model options.
+- Dataset: Adds a **PDF parser**, aka **Document parser**, dropdown menu to dataset configurations. This includes a DeepDoc model option, which is time-consuming, a much faster **naive** option (plain text), which skips DLA (Document Layout Analysis), OCR (Optical Character Recognition), and TSR (Table Structure Recognition) tasks, and several currently *experimental* large model options. See [Select PDF parser](./guides/dataset/select_pdf_parser.md).
 - Agent component: **(x)** or a forward slash `/` can be used to insert available keys (variables) in the system prompt field of the **Generate** or **Template** component.
 - Object storage: Supports using Aliyun OSS (Object Storage Service) as a file storage option.
 - Models: Updates the supported model list for Tongyi-Qianwen (Qwen), adding DeepSeek-specific models; adds ModelScope as a model provider.