Enhances `LLMNode` with multimodal capability, introducing support for
image outputs.
This implementation extracts base64-encoded images from LLM responses,
saves them to the storage service, and records the file metadata in the
`ToolFile` table. In conversations, these images are rendered as
markdown-based inline images.
Additionally, the images are included in the LLMNode's output as
file variables, enabling subsequent nodes in the workflow to utilize them.
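The extraction step can be sketched roughly as follows. This is a minimal illustration of decoding a base64 image payload and assembling the metadata that would be recorded for the file; the function and field names are hypothetical, not the actual Dify API.

```python
import base64
import hashlib
import uuid

def extract_image(data_url: str) -> dict:
    """Decode a data URL like "data:image/png;base64,iVBOR..." into
    raw bytes plus the metadata a ToolFile-style record would need.
    (Illustrative sketch; names do not match the real implementation.)"""
    header, _, encoded = data_url.partition(",")
    mime_type = header.removeprefix("data:").removesuffix(";base64")
    raw = base64.b64decode(encoded)
    return {
        "id": str(uuid.uuid4()),
        "mime_type": mime_type,
        "size": len(raw),
        "checksum": hashlib.sha256(raw).hexdigest(),
        "data": raw,  # bytes handed to the storage service
    }

# A 1x1 transparent PNG, base64-encoded, for demonstration.
PNG_B64 = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR4"
    "nGNgYAAAAAMAASsJTYQAAAAASUVORK5CYII="
)
meta = extract_image("data:image/png;base64," + PNG_B64)
```

The decoded bytes would then be written to storage, and the returned metadata persisted so later workflow nodes can reference the image as a file variable.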
Integrating file outputs into workflows requires corresponding frontend
changes, and multimodal output also requires updates to the related model
configurations. Currently, this capability is enabled only for Google's
Gemini models.
Closes #15814.
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Introduces 'remove-first' and 'remove-last' operations for array
variables, allowing removal of the first or last element,
respectively. Ensures these operations are supported only for
array types. Includes unit tests verifying the behavior on arrays,
including the edge case of empty arrays.
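The semantics can be sketched as below. This is an illustrative dispatch function, not the actual variable-assigner code; the operation names mirror the commit, but the function and error messages are hypothetical.

```python
def apply_operation(operation: str, value):
    """Apply a 'remove-first' / 'remove-last' operation to an array
    variable. Non-array values are rejected; empty arrays are a no-op.
    (Sketch only; not the real implementation.)"""
    if not isinstance(value, list):
        raise ValueError(f"{operation} is only supported for array variables")
    if operation == "remove-first":
        return value[1:] if value else value
    if operation == "remove-last":
        return value[:-1] if value else value
    raise ValueError(f"unknown operation: {operation}")
```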
Signed-off-by: -LAN- <laipz8200@outlook.com>
Includes 'removeFirst' and 'removeLast' in the set of operations
that bypass further validation checks, preventing unnecessary
validation for these write operations.
Signed-off-by: -LAN- <laipz8200@outlook.com>
The `validators.url` method from the `validators==0.21.0` library enforces a
URL length limit of less than 90 characters, which led to failures in external
knowledge API requests for long URLs.
This PR addresses the issue by replacing `validators.url` with
`urllib.parse.urlparse`, effectively removing the restrictive URL length check.
Additionally, the unused `validators` dependency has been removed.
Fixes #18981.
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
When generating JSON schema using an LLM in the structured output feature,
models may occasionally return invalid JSON, which prevents clients from correctly
parsing the response and can lead to UI breakage.
This commit addresses the issue by introducing `json_repair` to automatically
fix invalid JSON strings returned by the LLM, ensuring smoother functionality
and better client-side handling of structured outputs.
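The overall pattern looks roughly like the sketch below: attempt strict parsing first, and only fall back to repair when that fails. The fallback here is a hand-rolled stand-in for illustration only; the real `json_repair` library handles many more malformations than this sketch.

```python
import json
import re

def parse_llm_json(text: str):
    """Parse LLM output as JSON, repairing a few common defects if
    strict parsing fails: markdown code fences, trailing commas, and
    unclosed brackets/braces. (Sketch of the idea; the actual change
    delegates repair to the json_repair library.)"""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    cleaned = text.strip()
    # Strip markdown code fences the model may have wrapped around the JSON.
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    # Drop trailing commas before a closing bracket/brace.
    cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
    # Naively close unbalanced brackets, then braces (ignores nesting order).
    cleaned += "]" * (cleaned.count("[") - cleaned.count("]"))
    cleaned += "}" * (cleaned.count("{") - cleaned.count("}"))
    return json.loads(cleaned)
```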
Co-authored-by: lizb <lizb@sugon.com>