feat: Updata tongyi models (#9552)

commit 9b32bfb3db (parent 37fea072bc)
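In summary, the commit touches several Tongyi (Qwen) model configuration YAMLs: it marks one model's config as deprecated, raises context_size in three files (8000 to 32000, 131072 to 128000, and 8000 to 128000), and raises the max_tokens upper bound from 2000 to 8192 in two files. The diff excerpts below do not show the affected file names.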
@@ -76,3 +76,4 @@ pricing:
   output: '0.12'
   unit: '0.001'
   currency: RMB
+deprecated: true
@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 8000
+  context_size: 32000
 parameter_rules:
   - name: temperature
     use_template: temperature
@@ -26,7 +26,7 @@ parameter_rules:
     type: int
     default: 2000
     min: 1
-    max: 2000
+    max: 8192
     help:
       zh_Hans: 用于指定模型在生成内容时token的最大数量,它定义了生成的上限,但不保证每次都会生成到这个数量。
       en_US: It is used to specify the maximum number of tokens when the model generates content. It defines the upper limit of generation, but does not guarantee that this number will be generated every time.
@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 131072
+  context_size: 128000
 parameter_rules:
   - name: temperature
     use_template: temperature
@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 8000
+  context_size: 128000
 parameter_rules:
   - name: temperature
     use_template: temperature
@@ -26,7 +26,7 @@ parameter_rules:
     type: int
     default: 2000
     min: 1
-    max: 2000
+    max: 8192
     help:
       zh_Hans: 用于指定模型在生成内容时token的最大数量,它定义了生成的上限,但不保证每次都会生成到这个数量。
       en_US: It is used to specify the maximum number of tokens when the model generates content. It defines the upper limit of generation, but does not guarantee that this number will be generated every time.
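For reference, below is a minimal sketch of how the updated model_properties block and the max_tokens rule would read once these hunks are applied, assuming the standard Dify model YAML layout. The rule name max_tokens and its use_template line are not visible in the diff excerpts and are assumptions; unrelated keys are omitted.

    # Sketch only: layout assumed from the diff context, not a complete model file.
    model_properties:
      mode: chat
      context_size: 32000          # raised from 8000 in one of the updated files
    parameter_rules:
      - name: temperature
        use_template: temperature
      - name: max_tokens           # rule name assumed; the diff shows only its fields
        use_template: max_tokens   # assumed; not visible in the excerpt
        type: int
        default: 2000
        min: 1
        max: 8192                  # raised from 2000
        help:
          zh_Hans: 用于指定模型在生成内容时token的最大数量,它定义了生成的上限,但不保证每次都会生成到这个数量。
          en_US: It is used to specify the maximum number of tokens when the model generates content. It defines the upper limit of generation, but does not guarantee that this number will be generated every time.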