Miscellaneous editorial updates (#6805)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update

Parent c6b26a3159, commit 5a8c479ff3
@ -74,6 +74,8 @@ The [.env](https://github.com/infiniflow/ragflow/blob/main/docker/.env) file con
### MinIO

RAGFlow utilizes MinIO as its object storage solution, leveraging its scalability to store and manage all uploaded files.

- `MINIO_CONSOLE_PORT`
  The port used to expose the MinIO console interface to the host machine, allowing **external** access to the web-based console running inside the Docker container. Defaults to `9001`.
- `MINIO_PORT`
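Taken together, the two MinIO port settings map to entries in docker/.env. A minimal sketch (the default values shown are assumptions; confirm against your own .env):

```shell
# Hypothetical MinIO port section of docker/.env:
MINIO_CONSOLE_PORT=9001   # host port for the web-based MinIO console
MINIO_PORT=9000           # host port for the MinIO S3 API
```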
@ -108,7 +108,7 @@ Yes, we do.
---

### Do you support sharing dialogue through URL?

No, this feature is not supported.

@ -449,3 +449,7 @@ To switch your document engine from Elasticsearch to [Infinity](https://github.c
```bash
$ docker compose -f docker-compose.yml up -d
```

### Where are my uploaded files stored in RAGFlow's image?

All uploaded files are stored in MinIO, RAGFlow's object storage solution. For instance, if you upload your file directly to a knowledge base, it is located at `<knowledgebase_id>/filename`.

@ -31,10 +31,6 @@ An opening greeting is the agent's first message to the user. It can be a welcom
You can set global variables within the **Begin** component, which can be either required or optional. Once established, users will need to provide values for these variables when interacting or chatting with the agent. Click **+ Add variable** to add a global variable, each with the following attributes:

- **Key**: *Required*
  The unique variable name.
- **Name**: *Required*
@ -50,8 +46,15 @@ If your agent's **Begin** component takes a variable, you *cannot* embed it into
- **boolean**: Requires the user to toggle between on and off.
- **Optional**: A toggle indicating whether the variable is optional.

:::tip NOTE
To pass in parameters from a client, call:

- HTTP method [Converse with agent](../../references/http_api_reference.md#converse-with-agent), or
- Python method [Converse with agent](../../references/python_api_reference.md#converse-with-agent).
:::

:::danger IMPORTANT
- If you set the key type as **file**, ensure the token count of the uploaded file does not exceed your model provider's maximum token limit; otherwise, the plain text in your file will be truncated and incomplete.
- If your agent's **Begin** component takes a variable, you *cannot* embed it into a webpage.
:::

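For illustration, an HTTP call that supplies a value for a Begin variable can be sketched as follows. The host, agent ID, API key, endpoint path, and the variable name `target_lang` are all placeholders, not values from this document; check the HTTP API reference for the exact request shape:

```shell
# Sketch: pass a value for a hypothetical Begin variable "target_lang"
# in the body of an HTTP "Converse with agent" request.
RAGFLOW_HOST="http://127.0.0.1:9380"   # adjust to your deployment
AGENT_ID="<your_agent_id>"             # placeholder
API_KEY="<your_api_key>"               # placeholder

BODY='{"question": "Summarize this document.", "stream": false, "target_lang": "English"}'

# Uncomment to send against a live RAGFlow server (endpoint path assumed;
# verify it in the HTTP API reference):
# curl --request POST \
#      --url "$RAGFLOW_HOST/api/v1/agents/$AGENT_ID/completions" \
#      --header "Authorization: Bearer $API_KEY" \
#      --header 'Content-Type: application/json' \
#      --data "$BODY"
echo "$BODY"
```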
## Examples

@ -44,6 +44,9 @@ You start an AI conversation by creating an assistant.
- If **Rerank model** is selected, the hybrid score system uses keyword similarity and reranker score, and the default weight assigned to the reranker score is 1-0.7=0.3.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
- If you are uncertain about the logic behind **Variable**, leave it *as-is*.
- As of v0.17.2, if you add custom variables here, the only way you can pass in their values is to call:
  - HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
  - Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).

4. Update **Model Setting**:

@ -28,7 +28,7 @@ This user guide does not intend to cover much of the installation or configurati
- For a complete list of supported models and variants, see the [Ollama model library](https://ollama.com/library).
:::

### 1. Deploy Ollama using Docker

```bash
sudo docker run --name ollama -p 11434:11434 ollama/ollama
time=2024-12-02T02:20:21.360Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
```

Ensure Ollama is listening on all IP addresses:

```bash
sudo ss -tunlp | grep 11434
tcp LISTEN 0 4096 0.0.0.0:11434 0.0.0.0:* users:(("docker-proxy",pid=794507,fd=4))
tcp LISTEN 0 4096 [::]:11434 [::]:* users:(("docker-proxy",pid=794513,fd=4))
```

Pull models as you need. We recommend that you start with `llama3.2` (a 3B chat model) and `bge-m3` (a 567M embedding model):

```bash
sudo docker exec ollama ollama pull llama3.2
pulling dde5aa3fc5ff... 100% ▕████████████████▏ 2.0 GB
success
```

### 2. Ensure Ollama is accessible

- If RAGFlow runs in Docker and Ollama runs on the same host machine, check if Ollama is accessible from inside the RAGFlow container:

```bash
sudo docker exec -it ragflow-server bash
curl http://host.docker.internal:11434/
Ollama is running
```

- If RAGFlow is launched from source code and Ollama runs on the same host machine as RAGFlow, check if Ollama is accessible from RAGFlow's host machine:

```bash
curl http://localhost:11434/
Ollama is running
```

- If RAGFlow and Ollama run on different machines, check if Ollama is accessible from RAGFlow's host machine:

```bash
curl http://${IP_OF_OLLAMA_MACHINE}:11434/
Ollama is running
```
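The three checks above can be combined into one small probe script. This is a sketch assuming `curl` is available; the candidate URLs are the ones used in this section and should be adjusted to your setup:

```shell
# Try each candidate base URL and report the first one where Ollama responds.
probe_ollama() {
  for url in "$@"; do
    # Ollama answers a plain GET / with "Ollama is running".
    if curl -s --max-time 2 "$url" | grep -q "Ollama is running"; then
      echo "reachable: $url"
      return 0
    fi
  done
  echo "no candidate URL reached Ollama"
  return 1
}

probe_ollama \
  http://localhost:11434/ \
  http://host.docker.internal:11434/ || true
```

Use the URL it reports as the base URL when adding Ollama in RAGFlow.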
@ -88,8 +88,8 @@ In RAGFlow, click on your logo on the top right of the page **>** **Model provid
In the popup window, complete basic settings for Ollama:

1. Ensure that your model name and type match those pulled in step 1 (Deploy Ollama using Docker), for example (`llama3.2` and `chat`) or (`bge-m3` and `embedding`).
2. Ensure that the base URL matches the URL determined in step 2 (Ensure Ollama is accessible).
3. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.

@ -104,15 +104,12 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:

- *You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
- _If your local model is an embedding model, you should find it under **Embedding model**._

### 7. Update Chat Configuration

Update your model(s) accordingly in **Chat Configuration**.

## Deploy a local model using Xinference

@ -23,7 +23,7 @@ You cannot invite users to a team unless you are its owner.
## Prerequisites

1. Ensure that the email address that received the team invitation is associated with a RAGFlow user account.
2. To view and update the team owner's shared knowledge base, the team owner must set a knowledge base's **Permissions** to **Team**.

## Accept or decline team invite

@ -39,4 +39,4 @@ _After accepting the team invite, you should be able to view and update the team
## Leave a joined team

![leave team](https://github.com/user-attachments/assets/a87b29dd-b867-4004-a38d-5d07559db1a6)
@ -3,7 +3,7 @@ sidebar_position: 1
slug: /manage_team_members
---

# Manage team members

Invite or remove team members.

@ -2679,9 +2679,13 @@ Asks a specified agent a question to start an AI-powered conversation.
- `"sync_dsl"`: `boolean` (optional)
- other parameters: `string`

:::info IMPORTANT
You can include custom parameters in the request body, but first ensure they are defined in the [Begin](../guides/agent/agent_component_reference/begin.mdx) agent component.
:::

##### Request example

- If the **Begin** component does not take parameters, the following code will create a session.

```bash
curl --request POST \
@ -2693,7 +2697,7 @@ curl --request POST \
}'
```

- If the **Begin** component takes parameters, the following code will create a session.

```bash
curl --request POST \
@ -380,7 +380,7 @@ export default {
addTag: 'Tag hinzufügen',
useGraphRag: 'Wissensgraph extrahieren',
useGraphRagTip:
'Erstellen Sie einen Wissensgraph über Dateiabschnitte der aktuellen Wissensbasis, um die Beantwortung von Fragen mit mehreren Schritten und verschachtelter Logik zu verbessern. Weitere Informationen finden Sie unter https://ragflow.io/docs/dev/construct_knowledge_graph.',
graphRagMethod: 'Methode',
graphRagMethodTip: `Light: (Standard) Verwendet von github.com/HKUDS/LightRAG bereitgestellte Prompts, um Entitäten und Beziehungen zu extrahieren. Diese Option verbraucht weniger Tokens, weniger Speicher und weniger Rechenressourcen.</br>
General: Verwendet von github.com/microsoft/graphrag bereitgestellte Prompts, um Entitäten und Beziehungen zu extrahieren`,
@ -619,7 +619,7 @@ export default {
baseUrlTip:
'Wenn Ihr API-Schlüssel von OpenAI stammt, ignorieren Sie dies. Andere Zwischenanbieter geben diese Basis-URL mit dem API-Schlüssel an.',
modify: 'Ändern',
systemModelSettings: 'Standardmodelle festlegen',
chatModel: 'Chat-Modell',
chatModelTip:
'Das Standard-Chat-LLM, das alle neu erstellten Wissensdatenbanken verwenden werden.',
@ -370,7 +370,7 @@ This auto-tagging feature enhances retrieval by adding another layer of domain-s
addTag: 'Add tag',
useGraphRag: 'Extract knowledge graph',
useGraphRagTip:
'Construct a knowledge graph over file chunks of the current knowledge base to enhance multi-hop question-answering involving nested logic. See https://ragflow.io/docs/dev/construct_knowledge_graph for details.',
graphRagMethod: 'Method',
graphRagMethodTip: `Light: (Default) Use prompts provided by github.com/HKUDS/LightRAG to extract entities and relationships. This option consumes fewer tokens, less memory, and fewer computational resources.</br>
General: Use prompts provided by github.com/microsoft/graphrag to extract entities and relationships`,
@ -594,7 +594,7 @@ This auto-tagging feature enhances retrieval by adding another layer of domain-s
baseUrlTip:
'If your API key is from OpenAI, just ignore it. Any other intermediate providers will give this base url with the API key.',
modify: 'Modify',
systemModelSettings: 'Set default models',
chatModel: 'Chat model',
chatModelTip:
'The default chat model for each newly created knowledge base.',
@ -339,7 +339,7 @@ export default {
baseUrlTip:
'Si tu clave API es de OpenAI, ignora esto. Cualquier otro proveedor intermedio proporcionará esta URL base junto con la clave API.',
modify: 'Modificar',
systemModelSettings: 'Establecer modelos predeterminados',
chatModel: 'Modelo de chat',
chatModelTip:
'El modelo LLM de chat predeterminado que todas las nuevas bases de conocimiento utilizarán.',
@ -510,7 +510,7 @@ export default {
baseUrlTip:
'Jika kunci API Anda berasal dari OpenAI, abaikan saja. Penyedia perantara lainnya akan memberikan base url ini dengan kunci API.',
modify: 'Ubah',
systemModelSettings: 'Tetapkan model default',
chatModel: 'Model Obrolan',
chatModelTip:
'Model LLM obrolan default yang akan digunakan semua basis pengetahuan baru yang dibuat.',
@ -505,7 +505,7 @@ export default {
baseUrlTip:
'APIキーがOpenAIからのものであれば無視してください。他の中間プロバイダーはAPIキーと共にこのベースURLを提供します。',
modify: '変更',
systemModelSettings: 'デフォルトモデルを設定',
chatModel: 'チャットモデル',
chatModelTip:
'新しく作成されたナレッジベースが使用するデフォルトのチャットLLM。',
@ -499,7 +499,7 @@ export default {
baseUrlTip:
'Se sua chave da API for do OpenAI, ignore isso. Outros provedores intermediários fornecerão essa URL base com a chave da API.',
modify: 'Modificar',
systemModelSettings: 'Definir modelos padrão',
chatModel: 'Modelo de chat',
chatModelTip:
'O modelo LLM padrão que todos os novos bancos de conhecimento usarão.',
@ -341,7 +341,7 @@ export default {
graphRagMethodTip: `Light: Câu lệnh trích xuất thực thể và quan hệ này được lấy từ GitHub - HKUDS/LightRAG: "LightRAG: Tạo sinh tăng cường truy xuất đơn giản và nhanh chóng".
General: Câu lệnh trích xuất thực thể và quan hệ này được lấy từ GitHub - microsoft/graphrag: Một hệ thống Tạo sinh tăng cường truy xuất (RAG) dựa trên đồ thị theo mô-đun.`,
useGraphRagTip:
'Xây dựng một biểu đồ tri thức trên các đoạn tệp của cơ sở tri thức hiện tại để tăng cường khả năng trả lời câu hỏi đa bước liên quan đến logic lồng nhau. Xem https://ragflow.io/docs/dev/construct_knowledge_graph để biết thêm chi tiết.',
resolution: 'Hợp nhất thực thể',
resolutionTip:
'Quy trình phân giải sẽ hợp nhất các thực thể có cùng ý nghĩa lại với nhau, giúp đồ thị trở nên cô đọng và chính xác hơn. Các thực thể sau đây nên được hợp nhất: President Trump, Donald Trump, Donald J. Trump, Donald John Trump.',
@ -556,7 +556,7 @@ export default {
baseUrlTip:
'Nếu khóa API của bạn từ OpenAI, chỉ cần bỏ qua nó. Bất kỳ nhà cung cấp trung gian nào khác sẽ cung cấp URL cơ sở này với khóa API.',
modify: 'Sửa đổi',
systemModelSettings: 'Thiết lập mô hình mặc định',
chatModel: 'Mô hình trò chuyện',
chatModelTip:
'LLM trò chuyện mặc định mà tất cả các cơ sở kiến thức mới tạo sẽ sử dụng.',
@ -359,7 +359,7 @@ export default {
addTag: '增加標籤',
useGraphRag: '提取知識圖譜',
useGraphRagTip:
'基於知識庫內所有切好的文本塊構建知識圖譜,用以提升多跳和複雜問題回答的正確率。請注意:構建知識圖譜將消耗大量 token 和時間。詳見 https://ragflow.io/docs/dev/construct_knowledge_graph。',
graphRagMethod: '方法',
graphRagMethodTip: `Light:實體和關係提取提示來自 GitHub - HKUDS/LightRAG:“LightRAG:簡單快速的檢索增強生成”<br>
一般:實體和關係擷取提示來自 GitHub - microsoft/graphrag:基於模組化圖形的檢索增強生成 (RAG) 系統,`,
@ -574,7 +574,7 @@ export default {
baseUrlTip:
'如果您的 API 密鑰來自 OpenAI,請忽略它。任何其他中間提供商都會提供帶有 API 密鑰的基本 URL。',
modify: '修改',
systemModelSettings: '設定預設模型',
chatModel: '聊天模型',
chatModelTip: '所有新創建的知識庫都會使用默認的聊天模型。',
ttsModel: '語音合成模型',
@ -376,7 +376,7 @@ export default {
addTag: '增加标签',
useGraphRag: '提取知识图谱',
useGraphRagTip:
'基于知识库内所有切好的文本块构建知识图谱,用以提升多跳和复杂问题回答的正确率。请注意:构建知识图谱将消耗大量 token 和时间。详见 https://ragflow.io/docs/dev/construct_knowledge_graph。',
graphRagMethod: '方法',
graphRagMethodTip: `Light:实体和关系提取提示来自 GitHub - HKUDS/LightRAG:“LightRAG:简单快速的检索增强生成”<br>
General:实体和关系提取提示来自 GitHub - microsoft/graphrag:基于图的模块化检索增强生成 (RAG) 系统`,
@ -591,7 +591,7 @@ General:实体和关系提取提示来自 GitHub - microsoft/graphrag:基于
baseUrlTip:
'如果您的 API 密钥来自 OpenAI,请忽略它。 任何其他中间提供商都会提供带有 API 密钥的基本 URL。',
modify: '修改',
systemModelSettings: '设置默认模型',
chatModel: '聊天模型',
chatModelTip: '所有新创建的知识库都会使用默认的聊天模型。',
ttsModel: 'TTS模型',