Refactor UI text (#3911)

### What problem does this PR solve?

Refactor UI text

### Type of change

- [x] Documentation Update
- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
Jin Hai 2024-12-07 11:04:36 +08:00 committed by GitHub
parent f284578cea
commit c817ff184b
GPG Key ID: B5690EEEBB952194
12 changed files with 38 additions and 38 deletions


@@ -96,7 +96,7 @@ class Generate(ComponentBase):
}
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
res = {"content": answer, "reference": reference}
return res
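The invalid-key check above (repeated in several files of this PR) can be sketched as a standalone helper. This is a minimal illustration, not RAGFlow's actual code; `append_key_hint` is a hypothetical name, and the substring match mirrors the condition in the diff.

```python
def append_key_hint(answer: str) -> str:
    """Append a settings hint when the LLM reply signals an API-key problem.

    Mirrors the check in the diff above: a case-insensitive substring match
    for 'invalid key' or 'invalid api' in the model's answer.
    """
    lowered = answer.lower()
    if "invalid key" in lowered or "invalid api" in lowered:
        answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
    return answer
```

Centralizing the message in one helper would also have made this rename a one-line change instead of three identical edits.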


@@ -32,7 +32,7 @@ def set_dialog():
req = request.json
dialog_id = req.get("dialog_id")
name = req.get("name", "New Dialog")
description = req.get("description", "A helpful Dialog")
description = req.get("description", "A helpful dialog")
icon = req.get("icon", "")
top_n = req.get("top_n", 6)
top_k = req.get("top_k", 1024)
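The `set_dialog()` hunk above reads optional fields from the request JSON with fallback defaults. A minimal sketch of that pattern, using the field names and defaults visible in the diff (`parse_dialog_request` is a hypothetical helper name):

```python
def parse_dialog_request(req: dict) -> dict:
    # Defaults mirror those shown in set_dialog() above; dict.get(key, default)
    # returns the default whenever the client omits the field.
    return {
        "dialog_id": req.get("dialog_id"),
        "name": req.get("name", "New Dialog"),
        "description": req.get("description", "A helpful dialog"),
        "icon": req.get("icon", ""),
        "top_n": req.get("top_n", 6),
        "top_k": req.get("top_k", 1024),
    }
```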


@@ -266,7 +266,7 @@ def chat(dialog, messages, stream=True, **kwargs):
del c["vector"]
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
done_tm = timer()
prompt += "\n\n### Elapsed\n - Refine Question: %.1f ms\n - Keywords: %.1f ms\n - Retrieval: %.1f ms\n - LLM: %.1f ms" % (
(refineQ_tm - st) * 1000, (keyword_tm - refineQ_tm) * 1000, (retrieval_tm - keyword_tm) * 1000,
@@ -649,7 +649,7 @@ def ask(question, kb_ids, tenant_id):
del c["vector"]
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
return {"answer": answer, "reference": refs}
answer = ""
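The `chat()` hunk above appends an `### Elapsed` timing breakdown to the prompt from consecutive `timer()` checkpoints. A sketch of that formatting, with one assumption: the hunk is truncated mid-expression, so the final LLM term `(done_tm - retrieval_tm)` is inferred, not quoted from the source.

```python
from time import perf_counter as timer  # the checkpoints come from calls like st = timer()

def elapsed_report(st, refineQ_tm, keyword_tm, retrieval_tm, done_tm):
    # Converts consecutive timer() checkpoints (seconds) into the
    # millisecond breakdown appended to the prompt in chat() above.
    return (
        "\n\n### Elapsed"
        "\n - Refine Question: %.1f ms"
        "\n - Keywords: %.1f ms"
        "\n - Retrieval: %.1f ms"
        "\n - LLM: %.1f ms"
        % (
            (refineQ_tm - st) * 1000,
            (keyword_tm - refineQ_tm) * 1000,
            (retrieval_tm - keyword_tm) * 1000,
            (done_tm - retrieval_tm) * 1000,  # inferred final term
        )
    )
```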


@@ -78,7 +78,7 @@ Ollama is running
### 4. Add Ollama
In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Ollama to RAGFlow:
In RAGFlow, click on your logo on the top right of the page **>** **Model providers** and add Ollama to RAGFlow:
![add ollama](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)
@@ -101,7 +101,7 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3
### 6. Update System Model Settings
Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model:
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:
*You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
@@ -143,7 +143,7 @@ $ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --
```
### 4. Add Xinference
In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Xinference to RAGFlow:
In RAGFlow, click on your logo on the top right of the page **>** **Model providers** and add Xinference to RAGFlow:
![add xinference](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)
@@ -154,7 +154,7 @@ Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:
### 6. Update System Model Settings
Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model.
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model.
*You should now be able to find **mistral** from the dropdown list under **Chat model**.*


@@ -20,7 +20,7 @@ If you find your online LLM is not on the list, don't feel disheartened. The lis
You have two options for configuring your model API key:
- Configure it in **service_conf.yaml.template** before starting RAGFlow.
- Configure it on the **Model Providers** page after logging into RAGFlow.
- Configure it on the **Model providers** page after logging into RAGFlow.
### Configure model API key before starting up RAGFlow
@@ -32,7 +32,7 @@ You have two options for configuring your model API key:
3. Reboot your system for your changes to take effect.
4. Log into RAGFlow.
*After logging into RAGFlow, you will find your chosen model appears under **Added models** on the **Model Providers** page.*
*After logging into RAGFlow, you will find your chosen model appears under **Added models** on the **Model providers** page.*
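For the pre-startup path, the key goes into **service_conf.yaml.template**. A hedged sketch of what the relevant stanza might look like — the field names below are assumptions based on RAGFlow's default configuration, so check your own template before editing:

```yaml
# Hypothetical excerpt of service_conf.yaml.template (verify against your copy)
user_default_llm:
  factory: 'OpenAI'      # the model provider to use by default
  api_key: 'sk-...'      # paste your provider's API key here
  base_url: ''           # optional custom endpoint
```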
### Configure model API key after logging into RAGFlow
@@ -40,9 +40,9 @@ You have two options for configuring your model API key:
After logging into RAGFlow, configuring your model API key through the **service_conf.yaml.template** file will no longer take effect.
:::
After logging into RAGFlow, you can *only* configure API Key on the **Model Providers** page:
After logging into RAGFlow, you can *only* configure API Key on the **Model providers** page:
1. Click on your logo on the top right of the page **>** **Model Providers**.
1. Click on your logo on the top right of the page **>** **Model providers**.
2. Find your model card under **Models to be added** and click **Add the model**:
![add model](https://github.com/infiniflow/ragflow/assets/93570324/07e43f63-367c-4c9c-8ed3-8a3a24703f4e)
3. Paste your model API key.


@@ -21,7 +21,7 @@ You start an AI conversation by creating an assistant.
- **Empty response**:
- If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then, when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
- If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
- **Show Quote**: This is a key feature of RAGFlow and enabled by default. RAGFlow does not work like a black box. instead, it clearly shows the sources of information that its responses are based on.
- **Show quote**: This is a key feature of RAGFlow and is enabled by default. RAGFlow does not work like a black box. Instead, it clearly shows the sources of information that its responses are based on.
- Select the corresponding knowledge bases. You can select one or multiple knowledge bases, but ensure that they use the same embedding model, otherwise an error would occur.
3. Update **Prompt Engine**:
@@ -35,7 +35,7 @@ You start an AI conversation by creating an assistant.
4. Update **Model Setting**:
- In **Model**: you select the chat model. Though you have selected the default chat model in **System Model Settings**, RAGFlow allows you to choose an alternative chat model for your dialogue.
- **Freedom** refers to the level that the LLM improvises. From **Improvise**, **Precise**, to **Balance**, each freedom level corresponds to a unique combination of **Temperature**, **Top P**, **Presence Penalty**, and **Frequency Penalty**.
- **Freedom** refers to the level that the LLM improvises. From **Improvise**, **Precise**, to **Balance**, each freedom level corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
- **Temperature**: Level of the prediction randomness of the LLM. The higher the value, the more creative the LLM is.
- **Top P** is also known as "nucleus sampling". See [here](https://en.wikipedia.org/wiki/Top-p_sampling) for more information.
- **Max Tokens**: The maximum length of the LLM's responses. Note that the responses may be curtailed if this value is set too low.
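The **Freedom** presets described above each bundle the four sampling parameters into one choice. A hypothetical illustration of that mapping — the preset names come from the docs above, but the numeric values are invented for the sketch and will differ from RAGFlow's actual presets:

```python
# Hypothetical preset table: each "Freedom" level fixes all four sampling
# parameters at once. Values are illustrative, not RAGFlow's real defaults.
FREEDOM_PRESETS = {
    "Improvise": {"temperature": 0.9, "top_p": 0.9, "presence_penalty": 0.2, "frequency_penalty": 0.2},
    "Balance":   {"temperature": 0.5, "top_p": 0.5, "presence_penalty": 0.5, "frequency_penalty": 0.5},
    "Precise":   {"temperature": 0.1, "top_p": 0.3, "presence_penalty": 0.7, "frequency_penalty": 0.7},
}

def model_setting(freedom: str, max_tokens: int = 512) -> dict:
    # Look up the preset and attach the independently configured token cap.
    cfg = dict(FREEDOM_PRESETS[freedom])
    cfg["max_tokens"] = max_tokens
    return cfg
```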


@@ -235,7 +235,7 @@ RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalA
To add and configure an LLM:
1. Click on your logo on the top right of the page **>** **Model Providers**:
1. Click on your logo on the top right of the page **>** **Model providers**:
![add llm](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)


@@ -99,9 +99,9 @@ export default {
disabled: 'Disable',
action: 'Action',
parsingStatus: 'Parsing Status',
processBeginAt: 'Process Begin At',
processDuration: 'Process Duration',
progressMsg: 'Progress Msg',
processBeginAt: 'Begin at',
processDuration: 'Duration',
progressMsg: 'Progress',
testingDescription:
'Conduct a retrieval test to check if RAGFlow can recover the intended content for the LLM.',
similarityThreshold: 'Similarity threshold',
@@ -151,7 +151,7 @@ export default {
chunk: 'Chunk',
bulk: 'Bulk',
cancel: 'Cancel',
rerankModel: 'Rerank Model',
rerankModel: 'Rerank model',
rerankPlaceholder: 'Please select',
rerankTip: `If left empty, RAGFlow will use a combination of weighted keyword similarity and weighted vector cosine similarity; if a rerank model is selected, a weighted reranking score will replace the weighted vector cosine similarity.`,
topK: 'Top-K',
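The `rerankTip` and `similarityThreshold` strings above describe the retrieval scoring: a weighted blend of keyword similarity and vector cosine similarity, with chunks below the threshold filtered out. A minimal sketch of that behaviour — the function names and the 0.7 weight are assumptions for illustration, not RAGFlow's actual values:

```python
def hybrid_similarity(keyword_sim: float, vector_sim: float, vector_weight: float = 0.7) -> float:
    # Weighted blend described in rerankTip above: keyword similarity plus
    # vector cosine similarity. The 0.7 default weight is an assumption.
    return (1 - vector_weight) * keyword_sim + vector_weight * vector_sim

def filter_chunks(chunks: list[dict], threshold: float = 0.2) -> list[dict]:
    # similarityThreshold behaviour: chunks scoring below the threshold
    # are dropped before retrieval results reach the LLM.
    return [c for c in chunks if c["sim"] >= threshold]
```

When a rerank model is selected, the tip says its score replaces the vector-cosine term in this blend.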
@@ -337,7 +337,7 @@ When you want to search the given knowledge base at first place, set a higher pa
chat: 'Chat',
newChat: 'New chat',
send: 'Send',
sendPlaceholder: 'Message the Assistant...',
sendPlaceholder: 'Message the assistant...',
chatConfiguration: 'Chat Configuration',
chatConfigurationDescription:
' Here, dress up a dedicated assistant for your special knowledge bases! 💕',
@@ -351,7 +351,7 @@ When you want to search the given knowledge base at first place, set a higher pa
setAnOpener: 'Set an opener',
setAnOpenerInitial: `Hi! I'm your assistant, what can I do for you?`,
setAnOpenerTip: 'How do you want to welcome your clients?',
knowledgeBases: 'Knowledgebases',
knowledgeBases: 'Knowledge bases',
knowledgeBasesMessage: 'Please select',
knowledgeBasesTip: 'Select knowledgebases associated.',
system: 'System',
@@ -389,21 +389,21 @@ When you want to search the given knowledge base at first place, set a higher pa
topPMessage: 'Top P is required',
topPTip:
'Also known as “nucleus sampling,” this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones.',
presencePenalty: 'Presence Penalty',
presencePenaltyMessage: 'Presence Penalty is required',
presencePenalty: 'Presence penalty',
presencePenaltyMessage: 'Presence penalty is required',
presencePenaltyTip:
'This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation.',
frequencyPenalty: 'Frequency Penalty',
frequencyPenaltyMessage: 'Frequency Penalty is required',
frequencyPenalty: 'Frequency penalty',
frequencyPenaltyMessage: 'Frequency penalty is required',
frequencyPenaltyTip:
'Similar to the presence penalty, this reduces the model's tendency to repeat the same words frequently.',
maxTokens: 'Max Tokens',
maxTokensMessage: 'Max Tokens is required',
maxTokens: 'Max tokens',
maxTokensMessage: 'Max tokens is required',
maxTokensTip:
'This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words).',
maxTokensInvalidMessage: 'Please enter a valid number for Max Tokens.',
maxTokensMinMessage: 'Max Tokens cannot be less than 0.',
quote: 'Show Quote',
quote: 'Show quote',
quoteTip: 'Should the source of the original text be displayed?',
selfRag: 'Self-RAG',
selfRagTip: 'Please refer to: https://huggingface.co/papers/2310.11511',
@@ -461,7 +461,7 @@ When you want to search the given knowledge base at first place, set a higher pa
password: 'Password',
passwordDescription:
'Please enter your current password to change your password.',
model: 'Model Providers',
model: 'Model providers',
modelDescription: 'Set the model parameter and API KEY here.',
team: 'Team',
system: 'System',
@@ -476,7 +476,7 @@ When you want to search the given knowledge base at first place, set a higher pa
colorSchemaPlaceholder: 'select your color schema',
bright: 'Bright',
dark: 'Dark',
timezone: 'Timezone',
timezone: 'Time zone',
timezoneMessage: 'Please input your timezone!',
timezonePlaceholder: 'select your timezone',
email: 'Email address',
@@ -518,7 +518,7 @@ When you want to search the given knowledge base at first place, set a higher pa
sequence2txtModel: 'Sequence2txt model',
sequence2txtModelTip:
'The default ASR model all the newly created knowledgebase will use. Use this model to translate voices to corresponding text.',
rerankModel: 'Rerank Model',
rerankModel: 'Rerank model',
rerankModelTip: `The default rerank model is used to rerank chunks retrieved by users' questions.`,
ttsModel: 'TTS Model',
ttsModelTip:


@@ -98,7 +98,7 @@ export default {
processDuration: 'Duración del proceso',
progressMsg: 'Mensaje de progreso',
testingDescription:
'¡Último paso! Después del éxito, deja el resto al AI de Infiniflow.',
'¡Último paso! Después del éxito, deja el resto al AI de RAGFlow.',
similarityThreshold: 'Umbral de similitud',
similarityThresholdTip:
'Usamos una puntuación de similitud híbrida para evaluar la distancia entre dos líneas de texto. Se pondera la similitud de palabras clave y la similitud coseno de vectores. Si la similitud entre la consulta y el fragmento es menor que este umbral, el fragmento será filtrado.',


@@ -101,7 +101,7 @@ export default {
processBeginAt: '流程開始於',
processDuration: '過程持續時間',
progressMsg: '進度消息',
testingDescription: '最後一步!成功後,剩下的就交給Infiniflow AI吧。',
testingDescription: '最後一步!成功後,剩下的就交給 RAGFlow 吧。',
similarityThreshold: '相似度閾值',
similarityThresholdTip:
'我們使用混合相似度得分來評估兩行文本之間的距離。它是加權關鍵詞相似度和向量餘弦相似度。如果查詢和塊之間的相似度小於此閾值,則該塊將被過濾掉。',


@@ -98,10 +98,10 @@ export default {
disabled: '禁用',
action: '动作',
parsingStatus: '解析状态',
processBeginAt: '流程开始于',
processDuration: '过程持续时间',
progressMsg: '进度消息',
testingDescription: '最后一步! 成功后,剩下的就交给Infiniflow AI吧。',
processBeginAt: '开始于',
processDuration: '持续时间',
progressMsg: '进度',
testingDescription: '最后一步! 成功后,剩下的就交给 RAGFlow 吧。',
similarityThreshold: '相似度阈值',
similarityThresholdTip:
'我们使用混合相似度得分来评估两行文本之间的距离。 它是加权关键词相似度和向量余弦相似度。 如果查询和块之间的相似度小于此阈值,则该块将被过滤掉。',


@@ -24,7 +24,7 @@ const AssistantSetting = ({ show, form }: ISegmentedContentProps) => {
(checked: boolean) => {
if (checked && !data.tts_id) {
message.error(`Please set TTS model firstly.
Setting >> Model Providers >> System model settings`);
Setting >> Model providers >> System model settings`);
form.setFieldValue(['prompt_config', 'tts'], false);
}
},