From 5e2fa131263b8fb1ae519f4f8d680b2458ba6007 Mon Sep 17 00:00:00 2001
From: Waffle <52460705+ox01024@users.noreply.github.com>
Date: Thu, 8 Aug 2024 10:54:39 +0800
Subject: [PATCH] feat: support glm-4-long (#7070)

Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
---
 .../zhipuai/llm/glm_4_long.yaml               | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml

diff --git a/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml b/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml
new file mode 100644
index 0000000000..9d92e58f6c
--- /dev/null
+++ b/api/core/model_runtime/model_providers/zhipuai/llm/glm_4_long.yaml
@@ -0,0 +1,33 @@
+model: glm-4-long
+label:
+  en_US: glm-4-long
+model_type: llm
+features:
+  - multi-tool-call
+  - agent-thought
+  - stream-tool-call
+model_properties:
+  mode: chat
+  context_size: 1024000
+parameter_rules:
+  - name: temperature
+    use_template: temperature
+    default: 0.95
+    min: 0.0
+    max: 1.0
+    help:
+      zh_Hans: 采样温度,控制输出的随机性,必须为正数。取值范围是 (0.0,1.0],不能等于 0,默认值为 0.95。值越大,输出越随机、越具创造性;值越小,输出越稳定、越确定。建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
+      en_US: Sampling temperature controls the randomness of the output. It must be a positive number in the range (0.0, 1.0] and cannot equal 0; the default is 0.95. Larger values make the output more random and creative; smaller values make it more stable and deterministic. Adjust either top_p or temperature for your use case, but not both at the same time.
+  - name: top_p
+    use_template: top_p
+    default: 0.7
+    min: 0.0
+    max: 1.0
+    help:
+      zh_Hans: 用温度取样的另一种方法,称为核取样。取值范围是 (0.0, 1.0) 开区间,不能等于 0 或 1,默认值为 0.7。模型只考虑具有 top_p 概率质量的 tokens 的结果。例如:0.1 意味着模型解码器只从前 10% 概率的候选集中取 tokens。建议您根据应用场景调整 top_p 或 temperature 参数,但不要同时调整两个参数。
+      en_US: An alternative to temperature sampling, called nucleus sampling. The value range is the open interval (0.0, 1.0); it cannot equal 0 or 1, and the default is 0.7. The model considers only the tokens within the top_p probability mass. For example, 0.1 means the decoder samples only from the candidate set comprising the top 10% of probability mass. Adjust either top_p or temperature for your use case, but not both at the same time.
+  - name: max_tokens
+    use_template: max_tokens
+    default: 1024
+    min: 1
+    max: 4096
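
Note: the YAML above only registers glm-4-long with Dify's zhipuai provider and declares its parameter bounds; actual requests go through the provider's client. Below is a minimal sketch (not part of the patch) of calling the model directly, assuming the official `zhipuai` Python SDK and its OpenAI-style chat-completions interface; the parameter values mirror the defaults declared in the YAML, and the API key and prompt are placeholders.

# Minimal sketch, assuming the official `zhipuai` Python SDK (v2):
# call glm-4-long with the defaults the YAML above declares for
# temperature, top_p, and max_tokens.
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="your-api-key")  # placeholder credential

response = client.chat.completions.create(
    model="glm-4-long",
    messages=[{"role": "user", "content": "Summarize this long document..."}],
    temperature=0.95,  # YAML default; valid range (0.0, 1.0]
    top_p=0.7,         # YAML default; open interval (0.0, 1.0)
    max_tokens=1024,   # YAML default; the rule allows 1..4096
)
print(response.choices[0].message.content)

Inside Dify itself, these parameter_rules render the model-settings form and bound user input to the declared min/max, so the sketch above is only illustrative of what the runtime ultimately sends to the provider.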