Mirror of https://git.mirrors.martin98.com/https://github.com/infiniflow/ragflow.git (synced 2025-04-22 06:00:00 +08:00)
Docs(api): align default values in create chat assistant HTTP API docs with implementation (#6764)
### What problem does this PR solve?

Align the default values in the create chat assistant HTTP API docs with the implementation:

- `llm.presence_penalty`: `0.2` -> `0.4`
- `prompt.top_n`: `8` -> `6`

### Type of change

- [x] Documentation Update
This commit is contained in:
parent e7a2a4b7ff
commit 6c77ef5a5e
@@ -1533,14 +1533,14 @@ curl --request POST \
 - `"top_p"`: `float`

   Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`

 - `"presence_penalty"`: `float`

-  This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
+  This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.4`.

 - `"frequency penalty"`: `float`

   Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.

 - `"prompt"`: (*Body parameter*), `object`

   Instructions for the LLM to follow. If it is not explicitly set, a JSON object with the following values will be generated as the default. A `prompt` JSON object contains the following attributes:

   - `"similarity_threshold"`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted reranking score during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.

   - `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.

-  - `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `8`.
+  - `"top_n"`: `int` This argument specifies the number of top chunks with similarity scores above the `similarity_threshold` that are fed to the LLM. The LLM will *only* access these 'top N' chunks. The default value is `6`.

   - `"variables"`: `object[]` This argument lists the variables to use in the 'System' field of **Chat Configurations**. Note that:

     - `"knowledge"` is a reserved variable, which represents the retrieved chunks.

     - All the variables in 'System' should be curly bracketed.
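For reference, the corrected defaults can be sketched as a request payload for the create chat assistant endpoint. This is a minimal illustration, not the authoritative defaults object: the `name` value and the shape of the `variables` entries are assumptions, and only the fields discussed in this diff are shown.

```python
import json

# Sketch of a create-chat-assistant request body using the defaults
# documented above, including the two values this commit corrects:
#   llm.presence_penalty  0.2 -> 0.4
#   prompt.top_n          8   -> 6
payload = {
    "name": "my_assistant",  # hypothetical assistant name
    "llm": {
        "top_p": 0.3,
        "presence_penalty": 0.4,   # previously documented as 0.2
        "frequency_penalty": 0.7,
    },
    "prompt": {
        "similarity_threshold": 0.2,
        "keywords_similarity_weight": 0.7,
        "top_n": 6,                # previously documented as 8
        # "knowledge" is the reserved variable for retrieved chunks;
        # the per-entry shape here is an assumption for illustration.
        "variables": [{"key": "knowledge", "optional": True}],
    },
}

print(json.dumps(payload, indent=2))
```

A client would send this as the JSON body of the `curl --request POST` call shown in the hunk header, with the usual `Authorization` and `Content-Type: application/json` headers.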