Merge branch 'main' into feat/attachments

StyleZhang 2024-10-09 17:48:05 +08:00
commit 820076beec
173 changed files with 5800 additions and 744 deletions

View File

@ -39,7 +39,7 @@ jobs:
api/pyproject.toml
api/poetry.lock
- name: Poetry check
- name: Check Poetry lockfile
run: |
poetry check -C api --lock
poetry show -C api
@ -47,6 +47,9 @@ jobs:
- name: Install dependencies
run: poetry install -C api --with dev
- name: Check dependencies in pyproject.toml
run: poetry run -C api bash dev/pytest/pytest_artifacts.sh
- name: Run Unit tests
run: poetry run -C api bash dev/pytest/pytest_unit_tests.sh

View File

@ -196,10 +196,14 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Using Terraform for Deployment
Deploy Dify to Cloud Platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
Deploy Dify to Azure with a single click using [terraform](https://www.terraform.io/).
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -179,10 +179,13 @@ docker compose up -d
#### استخدام Terraform للتوزيع
انشر Dify إلى منصة السحابة بنقرة واحدة باستخدام [terraform](https://www.terraform.io/)
##### Azure Global
استخدم [terraform](https://www.terraform.io/) لنشر Dify على Azure بنقرة واحدة.
- [Azure Terraform بواسطة @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform بواسطة @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## المساهمة

View File

@ -202,10 +202,14 @@ docker compose up -d
#### 使用 Terraform 部署
使用 [terraform](https://www.terraform.io/) 一键将 Dify 部署到云平台
##### Azure Global
使用 [terraform](https://www.terraform.io/) 一键部署 Dify 到 Azure。
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)

View File

@ -204,10 +204,13 @@ Si desea configurar una configuración de alta disponibilidad, la comunidad prop
#### Uso de Terraform para el despliegue
Despliega Dify en una plataforma en la nube con un solo clic utilizando [terraform](https://www.terraform.io/)
##### Azure Global
Utiliza [terraform](https://www.terraform.io/) para desplegar Dify en Azure con un solo clic.
- [Azure Terraform por @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform por @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuir

View File

@ -202,10 +202,13 @@ Si vous souhaitez configurer une configuration haute disponibilité, la communau
#### Utilisation de Terraform pour le déploiement
Déployez Dify sur une plateforme cloud en un clic en utilisant [terraform](https://www.terraform.io/)
##### Azure Global
Utilisez [terraform](https://www.terraform.io/) pour déployer Dify sur Azure en un clic.
- [Azure Terraform par @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform par @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuer

View File

@ -68,7 +68,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
プロンプトの作成、モデルパフォーマンスの比較が行え、チャットベースのアプリに音声合成などの機能も追加できます。
**4. RAGパイプライン**:
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサポートも提供します。
**5. エージェント機能**:
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DALL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
@ -201,10 +201,13 @@ docker compose up -d
#### Terraformを使用したデプロイ
##### Azure Global
[terraform](https://www.terraform.io/) を使用して、AzureにDifyをワンクリックでデプロイします。
- [nikawangのAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
[terraform](https://www.terraform.io/) を使用して、ワンクリックでDifyをクラウドプラットフォームにデプロイします
##### Azure Global
- [@nikawangによるAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [@sotazumによるGoogle Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 貢献

View File

@ -202,10 +202,13 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Terraform atorlugu pilersitsineq
##### Azure Global
Atoruk [terraform](https://www.terraform.io/) Dify-mik Azure-mut ataatsikkut ikkussuilluarlugu.
- [Azure Terraform atorlugu @nikawang](https://github.com/nikawang/dify-azure-terraform)
wa'logh nIqHom neH ghun deployment toy'wI' [terraform](https://www.terraform.io/) lo'laH.
##### Azure Global
- [Azure Terraform mung @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform qachlot @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing

View File

@ -39,7 +39,6 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
</p>
@ -195,10 +194,14 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
#### Terraform을 사용한 배포
[terraform](https://www.terraform.io/)을 사용하여 단 한 번의 클릭으로 Dify를 클라우드 플랫폼에 배포하십시오
##### Azure Global
[terraform](https://www.terraform.io/)을 사용하여 Azure에 Dify를 원클릭으로 배포하세요.
- [nikawang의 Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [sotazum의 Google Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 기여
코드에 기여하고 싶은 분들은 [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.

View File

@ -200,9 +200,13 @@ Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'
#### Dağıtım için Terraform Kullanımı
Dify'ı bulut platformuna tek tıklamayla dağıtın [terraform](https://www.terraform.io/) kullanarak
##### Azure Global
[Terraform](https://www.terraform.io/) kullanarak Dify'ı Azure'a tek tıklamayla dağıtın.
- [@nikawang tarafından Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
- [Azure Terraform tarafından @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform tarafından @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Katkıda Bulunma

View File

@ -196,10 +196,14 @@ Nếu bạn muốn cấu hình một cài đặt có độ sẵn sàng cao, có
#### Sử dụng Terraform để Triển khai
Triển khai Dify lên nền tảng đám mây với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/)
##### Azure Global
Triển khai Dify lên Azure chỉ với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/).
- [Azure Terraform bởi @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform bởi @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Đóng góp
Đối với những người muốn đóng góp mã, xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.

View File

@ -39,7 +39,7 @@ DB_DATABASE=dify
# Storage configuration
# use for store upload files, private keys...
# storage type: local, s3, azure-blob, google-storage, tencent-cos, huawei-obs, volcengine-tos
# storage type: local, s3, azure-blob, google-storage, tencent-cos, huawei-obs, volcengine-tos, baidu-obs
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage
S3_USE_AWS_MANAGED_IAM=false
@ -79,6 +79,12 @@ HUAWEI_OBS_SECRET_KEY=your-secret-key
HUAWEI_OBS_ACCESS_KEY=your-access-key
HUAWEI_OBS_SERVER=your-server-url
# Baidu OBS Storage Configuration
BAIDU_OBS_BUCKET_NAME=your-bucket-name
BAIDU_OBS_SECRET_KEY=your-secret-key
BAIDU_OBS_ACCESS_KEY=your-access-key
BAIDU_OBS_ENDPOINT=your-server-url
# OCI Storage configuration
OCI_ENDPOINT=your-endpoint
OCI_BUCKET_NAME=your-bucket-name
@ -265,6 +271,9 @@ HTTP_REQUEST_MAX_WRITE_TIMEOUT=600
HTTP_REQUEST_NODE_MAX_BINARY_SIZE=10485760
HTTP_REQUEST_NODE_MAX_TEXT_SIZE=1048576
# Respect X-* headers to redirect clients
RESPECT_XFORWARD_HEADERS_ENABLED=false
# Log file path
LOG_FILE=

View File

@ -26,7 +26,7 @@ from commands import register_commands
from configs import dify_config
# DO NOT REMOVE BELOW
from events import event_handlers
from events import event_handlers # noqa: F401
from extensions import (
ext_celery,
ext_code_based_extension,
@ -36,6 +36,7 @@ from extensions import (
ext_login,
ext_mail,
ext_migrate,
ext_proxy_fix,
ext_redis,
ext_sentry,
ext_storage,
@ -45,7 +46,7 @@ from extensions.ext_login import login_manager
from libs.passport import PassportService
# TODO: Find a way to avoid importing models here
from models import account, dataset, model, source, task, tool, tools, web
from models import account, dataset, model, source, task, tool, tools, web # noqa: F401
from services.account_service import AccountService
# DO NOT REMOVE ABOVE
@ -156,6 +157,7 @@ def initialize_extensions(app):
ext_mail.init_app(app)
ext_hosting_provider.init_app(app)
ext_sentry.init_app(app)
ext_proxy_fix.init_app(app)
# Flask-Login configuration
@ -181,10 +183,10 @@ def load_user_from_request(request_from_flask_login):
decoded = PassportService().verify(auth_token)
user_id = decoded.get("user_id")
account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if account:
contexts.tenant_id.set(account.current_tenant_id)
return account
logged_in_account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if logged_in_account:
contexts.tenant_id.set(logged_in_account.current_tenant_id)
return logged_in_account
@login_manager.unauthorized_handler

View File

@ -247,6 +247,12 @@ class HttpConfig(BaseSettings):
default=None,
)
RESPECT_XFORWARD_HEADERS_ENABLED: bool = Field(
description="Enable or disable the X-Forwarded-For Proxy Fix middleware from Werkzeug"
" to respect X-* headers to redirect clients",
default=False,
)
class InnerAPIConfig(BaseSettings):
"""

View File

@ -8,6 +8,7 @@ from configs.middleware.cache.redis_config import RedisConfig
from configs.middleware.storage.aliyun_oss_storage_config import AliyunOSSStorageConfig
from configs.middleware.storage.amazon_s3_storage_config import S3StorageConfig
from configs.middleware.storage.azure_blob_storage_config import AzureBlobStorageConfig
from configs.middleware.storage.baidu_obs_storage_config import BaiduOBSStorageConfig
from configs.middleware.storage.google_cloud_storage_config import GoogleCloudStorageConfig
from configs.middleware.storage.huawei_obs_storage_config import HuaweiCloudOBSStorageConfig
from configs.middleware.storage.oci_storage_config import OCIStorageConfig
@ -200,12 +201,13 @@ class MiddlewareConfig(
StorageConfig,
AliyunOSSStorageConfig,
AzureBlobStorageConfig,
BaiduOBSStorageConfig,
GoogleCloudStorageConfig,
TencentCloudCOSStorageConfig,
HuaweiCloudOBSStorageConfig,
VolcengineTOSStorageConfig,
S3StorageConfig,
OCIStorageConfig,
S3StorageConfig,
TencentCloudCOSStorageConfig,
VolcengineTOSStorageConfig,
# configs of vdb and vdb providers
VectorStoreConfig,
AnalyticdbConfig,

View File

@ -0,0 +1,29 @@
from typing import Optional
from pydantic import BaseModel, Field
class BaiduOBSStorageConfig(BaseModel):
"""
Configuration settings for Baidu Object Storage Service (OBS)
"""
BAIDU_OBS_BUCKET_NAME: Optional[str] = Field(
description="Name of the Baidu OBS bucket to store and retrieve objects (e.g., 'my-obs-bucket')",
default=None,
)
BAIDU_OBS_ACCESS_KEY: Optional[str] = Field(
description="Access Key ID for authenticating with Baidu OBS",
default=None,
)
BAIDU_OBS_SECRET_KEY: Optional[str] = Field(
description="Secret Access Key for authenticating with Baidu OBS",
default=None,
)
BAIDU_OBS_ENDPOINT: Optional[str] = Field(
description="URL of the Baidu OSS endpoint for your chosen region (e.g., 'https://.bj.bcebos.com')",
default=None,
)
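For reference, every field is optional with a `None` default, so the settings class validates even when nothing is configured; a hedged usage sketch with placeholder values:

```python
# Placeholder credentials for illustration only.
cfg = BaiduOBSStorageConfig(
    BAIDU_OBS_BUCKET_NAME="my-obs-bucket",
    BAIDU_OBS_ACCESS_KEY="your-access-key",
    BAIDU_OBS_SECRET_KEY="your-secret-key",
    BAIDU_OBS_ENDPOINT="https://bj.bcebos.com",
)
assert BaiduOBSStorageConfig().BAIDU_OBS_BUCKET_NAME is None  # all fields default to None
```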

View File

@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="0.8.3",
default="0.9.1",
)
COMMIT_SHA: str = Field(

View File

@ -188,6 +188,7 @@ class ChatConversationApi(Resource):
subquery.c.from_end_user_session_id.ilike(keyword_filter),
),
)
.group_by(Conversation.id)
)
account = current_user

View File

@ -0,0 +1,7 @@
from libs.exception import BaseHTTPException
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415

View File

@ -4,7 +4,7 @@ from werkzeug.exceptions import NotFound
import services
from controllers.files import api
from libs.exception import BaseHTTPException
from controllers.files.error import UnsupportedFileTypeError
from services.account_service import TenantService
from services.file_service import FileService
@ -50,9 +50,3 @@ class WorkspaceWebappLogoApi(Resource):
api.add_resource(ImagePreviewApi, "/files/<uuid:file_id>/image-preview")
api.add_resource(WorkspaceWebappLogoApi, "/files/workspaces/<uuid:workspace_id>/webapp-logo")
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415

View File

@ -3,8 +3,8 @@ from flask_restful import Resource, reqparse
from werkzeug.exceptions import Forbidden, NotFound
from controllers.files import api
from controllers.files.error import UnsupportedFileTypeError
from core.tools.tool_file_manager import ToolFileManager
from libs.exception import BaseHTTPException
class ToolFilePreviewApi(Resource):
@ -43,9 +43,3 @@ class ToolFilePreviewApi(Resource):
api.add_resource(ToolFilePreviewApi, "/files/tools/<uuid:file_id>.<string:extension>")
class UnsupportedFileTypeError(BaseHTTPException):
error_code = "unsupported_file_type"
description = "File type not allowed."
code = 415

View File

@ -4,6 +4,7 @@ from flask_restful import Resource, reqparse
from werkzeug.exceptions import InternalServerError, NotFound
import services
from constants import UUID_NIL
from controllers.service_api import api
from controllers.service_api.app.error import (
AppUnavailableError,
@ -107,6 +108,7 @@ class ChatApi(Resource):
parser.add_argument("conversation_id", type=uuid_value, location="json")
parser.add_argument("retriever_from", type=str, required=False, default="dev", location="json")
parser.add_argument("auto_generate_name", type=bool, required=False, default=True, location="json")
parser.add_argument("parent_message_id", type=uuid_value, required=False, default=UUID_NIL, location="json")
args = parser.parse_args()

View File

@ -369,7 +369,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
return message
def _organize_historic_prompt_messages(
self, current_session_messages: list[PromptMessage] = None
self, current_session_messages: Optional[list[PromptMessage]] = None
) -> list[PromptMessage]:
"""
organize historic prompt messages

View File

@ -27,7 +27,7 @@ class CotChatAgentRunner(CotAgentRunner):
return SystemPromptMessage(content=system_prompt)
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""

View File

@ -1,4 +1,5 @@
import json
from typing import Optional
from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage, UserPromptMessage
@ -21,7 +22,7 @@ class CotCompletionAgentRunner(CotAgentRunner):
return system_prompt
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] = None) -> str:
def _organize_historic_prompt(self, current_session_messages: Optional[list[PromptMessage]] = None) -> str:
"""
Organize historic prompt
"""

View File

@ -2,7 +2,7 @@ import json
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any, Union
from typing import Any, Optional, Union
from core.agent.base_agent_runner import BaseAgentRunner
from core.app.apps.base_app_queue_manager import PublishFrom
@ -370,7 +370,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return tool_calls
def _init_system_message(
self, prompt_template: str, prompt_messages: list[PromptMessage] = None
self, prompt_template: str, prompt_messages: Optional[list[PromptMessage]] = None
) -> list[PromptMessage]:
"""
Initialize system message
@ -385,7 +385,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return prompt_messages
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""

View File

@ -14,7 +14,7 @@ class CotAgentOutputParser:
) -> Generator[Union[str, AgentScratchpadUnit.Action], None, None]:
def parse_action(json_str):
try:
action = json.loads(json_str)
action = json.loads(json_str, strict=False)
action_name = None
action_input = None
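For context, `strict=False` makes `json.loads` tolerate raw control characters (such as newlines) inside string values, which model-emitted action JSON frequently contains. A quick illustration:

```python
import json

# A raw newline inside the JSON string value: the default (strict) parser
# rejects this with "Invalid control character"; strict=False accepts it.
payload = '{"action": "search", "action_input": "line one\nline two"}'

print(json.loads(payload, strict=False))  # {'action': 'search', 'action_input': 'line one\nline two'}
```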

View File

@ -1,9 +1,9 @@
import os
from collections.abc import Mapping, Sequence
from typing import Any, Optional, TextIO, Union
from pydantic import BaseModel
from configs import dify_config
from core.ops.entities.trace_entity import TraceTaskName
from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.tools.entities.tool_entities import ToolInvokeMessage
@ -50,7 +50,8 @@ class DifyAgentCallbackHandler(BaseModel):
tool_inputs: Mapping[str, Any],
) -> None:
"""Do nothing."""
print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
if dify_config.DEBUG:
print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
def on_tool_end(
self,
@ -62,11 +63,12 @@ class DifyAgentCallbackHandler(BaseModel):
trace_manager: Optional[TraceQueueManager] = None,
) -> None:
"""If not the final action, print out observation."""
print_text("\n[on_tool_end]\n", color=self.color)
print_text("Tool: " + tool_name + "\n", color=self.color)
print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
print_text("\n")
if dify_config.DEBUG:
print_text("\n[on_tool_end]\n", color=self.color)
print_text("Tool: " + tool_name + "\n", color=self.color)
print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
print_text("\n")
if trace_manager:
trace_manager.add_trace_task(
@ -82,30 +84,33 @@ class DifyAgentCallbackHandler(BaseModel):
def on_tool_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
"""Do nothing."""
print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")
if dify_config.DEBUG:
print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")
def on_agent_start(self, thought: str) -> None:
"""Run on agent start."""
if thought:
print_text(
"\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
color=self.color,
)
else:
print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)
if dify_config.DEBUG:
if thought:
print_text(
"\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
color=self.color,
)
else:
print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)
def on_agent_finish(self, color: Optional[str] = None, **kwargs: Any) -> None:
"""Run on agent end."""
print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)
if dify_config.DEBUG:
print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)
self.current_loop += 1
@property
def ignore_agent(self) -> bool:
"""Whether to ignore agent callbacks."""
return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"
return not dify_config.DEBUG
@property
def ignore_chat_model(self) -> bool:
"""Whether to ignore chat model callbacks."""
return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"
return not dify_config.DEBUG

View File

@ -44,7 +44,6 @@ class DatasetIndexToolCallbackHandler:
DocumentSegment.index_node_id == document.metadata["doc_id"]
)
# if 'dataset_id' in document.metadata:
if "dataset_id" in document.metadata:
query = query.filter(DocumentSegment.dataset_id == document.metadata["dataset_id"])

View File

@ -198,16 +198,34 @@ class MessageFileParser:
if "amazonaws.com" not in parsed_url.netloc:
return False
query_params = parse_qs(parsed_url.query)
required_params = ["Signature", "Expires"]
for param in required_params:
if param not in query_params:
def check_presign_v2(query_params):
required_params = ["Signature", "Expires"]
for param in required_params:
if param not in query_params:
return False
if not query_params["Expires"][0].isdigit():
return False
if not query_params["Expires"][0].isdigit():
return False
signature = query_params["Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
signature = query_params["Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
def check_presign_v4(query_params):
required_params = ["X-Amz-Signature", "X-Amz-Expires"]
for param in required_params:
if param not in query_params:
return False
if not query_params["X-Amz-Expires"][0].isdigit():
return False
signature = query_params["X-Amz-Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
return check_presign_v4(query_params) or check_presign_v2(query_params)
except Exception:
return False
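The new V4 branch can be exercised on its own; below is a self-contained sketch (the bucket, key, and signature values are made up for illustration):

```python
import re
from urllib.parse import parse_qs, urlparse

# Standalone version of check_presign_v4 above; the real check lives inside
# MessageFileParser and falls back to the V2 rules when V4 params are absent.
def looks_like_presigned_v4(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    for required in ("X-Amz-Signature", "X-Amz-Expires"):
        if required not in params:
            return False
    if not params["X-Amz-Expires"][0].isdigit():
        return False
    return re.match(r"^[A-Za-z0-9+/]+={0,2}$", params["X-Amz-Signature"][0]) is not None

print(looks_like_presigned_v4(
    "https://bucket.s3.amazonaws.com/key?X-Amz-Signature=abc123&X-Amz-Expires=3600"
))  # True
```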

View File

@ -211,9 +211,9 @@ class IndexingRunner:
tenant_id: str,
extract_settings: list[ExtractSetting],
tmp_processing_rule: dict,
doc_form: str = None,
doc_form: Optional[str] = None,
doc_language: str = "English",
dataset_id: str = None,
dataset_id: Optional[str] = None,
indexing_technique: str = "economy",
) -> dict:
"""

View File

@ -169,7 +169,7 @@ class AnthropicLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

View File

@ -1081,8 +1081,81 @@ LLM_BASE_MODELS = [
),
),
),
AzureBaseModel(
base_model_name="o1-preview",
entity=AIModelEntity(
model="fake-deployment-name",
label=I18nObject(
en_US="fake-deployment-name-label",
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name="response_format",
label=I18nObject(zh_Hans="回复格式", en_US="response_format"),
type="string",
help=I18nObject(
zh_Hans="指定模型必须输出的格式", en_US="specifying the format that the model must output"
),
required=False,
options=["text", "json_object"],
),
_get_max_tokens(default=512, min_val=1, max_val=32768),
],
pricing=PriceConfig(
input=15.00,
output=60.00,
unit=0.000001,
currency="USD",
),
),
),
AzureBaseModel(
base_model_name="o1-mini",
entity=AIModelEntity(
model="fake-deployment-name",
label=I18nObject(
en_US="fake-deployment-name-label",
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name="response_format",
label=I18nObject(zh_Hans="回复格式", en_US="response_format"),
type="string",
help=I18nObject(
zh_Hans="指定模型必须输出的格式", en_US="specifying the format that the model must output"
),
required=False,
options=["text", "json_object"],
),
_get_max_tokens(default=512, min_val=1, max_val=65536),
],
pricing=PriceConfig(
input=3.00,
output=12.00,
unit=0.000001,
currency="USD",
),
),
),
]
EMBEDDING_BASE_MODELS = [
AzureBaseModel(
base_model_name="text-embedding-ada-002",

View File

@ -120,6 +120,18 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
- label:
en_US: o1-mini
value: o1-mini
show_on:
- variable: __model_type
value: llm
- label:
en_US: o1-preview
value: o1-preview
show_on:
- variable: __model_type
value: llm
- label:
en_US: gpt-4o-mini
value: gpt-4o-mini

View File

@ -312,20 +312,118 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
if user:
extra_model_kwargs["user"] = user
# chat model
messages = [self._convert_prompt_message_to_dict(m) for m in prompt_messages]
response = client.chat.completions.create(
messages=messages,
model=model,
stream=stream,
**model_parameters,
**extra_model_kwargs,
# clear illegal prompt messages
prompt_messages = self._clear_illegal_prompt_messages(model, prompt_messages)
block_as_stream = False
if model.startswith("o1"):
if stream:
block_as_stream = True
stream = False
if "stream_options" in extra_model_kwargs:
del extra_model_kwargs["stream_options"]
if "stop" in extra_model_kwargs:
del extra_model_kwargs["stop"]
# chat model
response = client.chat.completions.create(
messages=[self._convert_prompt_message_to_dict(m) for m in prompt_messages],
model=model,
stream=stream,
**model_parameters,
**extra_model_kwargs,
)
if stream:
return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages, tools)
block_result = self._handle_chat_generate_response(model, credentials, response, prompt_messages, tools)
if block_as_stream:
return self._handle_chat_block_as_stream_response(block_result, prompt_messages, stop)
return block_result
def _handle_chat_block_as_stream_response(
self,
block_result: LLMResult,
prompt_messages: list[PromptMessage],
stop: Optional[list[str]] = None,
) -> Generator[LLMResultChunk, None, None]:
"""
Handle llm chat response
:param block_result: blocking llm result
:param prompt_messages: prompt messages
:param stop: stop words
:return: llm response chunk generator
"""
text = block_result.message.content
text = cast(str, text)
if stop:
text = self.enforce_stop_tokens(text, stop)
yield LLMResultChunk(
model=block_result.model,
prompt_messages=prompt_messages,
system_fingerprint=block_result.system_fingerprint,
delta=LLMResultChunkDelta(
index=0,
message=AssistantPromptMessage(content=text),
finish_reason="stop",
usage=block_result.usage,
),
)
if stream:
return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages, tools)
def _clear_illegal_prompt_messages(self, model: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Clear illegal prompt messages for OpenAI API
return self._handle_chat_generate_response(model, credentials, response, prompt_messages, tools)
:param model: model name
:param prompt_messages: prompt messages
:return: cleaned prompt messages
"""
checklist = ["gpt-4-turbo", "gpt-4-turbo-2024-04-09"]
if model in checklist:
# count how many user messages are there
user_message_count = len([m for m in prompt_messages if isinstance(m, UserPromptMessage)])
if user_message_count > 1:
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
item.data
if item.type == PromptMessageContentType.TEXT
else "[IMAGE]"
if item.type == PromptMessageContentType.IMAGE
else ""
for item in prompt_message.content
]
)
if model.startswith("o1"):
system_message_count = len([m for m in prompt_messages if isinstance(m, SystemPromptMessage)])
if system_message_count > 0:
new_prompt_messages = []
for prompt_message in prompt_messages:
if isinstance(prompt_message, SystemPromptMessage):
prompt_message = UserPromptMessage(
content=prompt_message.content,
name=prompt_message.name,
)
new_prompt_messages.append(prompt_message)
prompt_messages = new_prompt_messages
return prompt_messages
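A hedged illustration of the `o1` branch above, using the message classes from this diff (`llm` stands in for an `AzureOpenAILargeLanguageModel` instance):

```python
# o1-* deployments reject system messages, so the wrapper downgrades them to user messages.
msgs = [
    SystemPromptMessage(content="You are a helpful assistant."),
    UserPromptMessage(content="Hi"),
]
cleaned = llm._clear_illegal_prompt_messages("o1-mini", msgs)
# cleaned is now two UserPromptMessage objects; the system prompt text is
# preserved as the first user message.
```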
def _handle_chat_generate_response(
self,
@ -560,7 +658,7 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
tokens_per_message = 4
# if there's a name, the role is omitted
tokens_per_name = -1
elif model.startswith("gpt-35-turbo") or model.startswith("gpt-4"):
elif model.startswith("gpt-35-turbo") or model.startswith("gpt-4") or model.startswith("o1"):
tokens_per_message = 3
tokens_per_name = 1
else:

View File

@ -50,34 +50,62 @@ provider_credential_schema:
label:
en_US: US East (N. Virginia)
zh_Hans: 美国东部 (弗吉尼亚北部)
- value: us-east-2
label:
en_US: US East (Ohio)
zh_Hans: 美国东部 (俄亥俄)
- value: us-west-2
label:
en_US: US West (Oregon)
zh_Hans: 美国西部 (俄勒冈州)
- value: ap-south-1
label:
en_US: Asia Pacific (Mumbai)
zh_Hans: 亚太地区(孟买)
- value: ap-southeast-1
label:
en_US: Asia Pacific (Singapore)
zh_Hans: 亚太地区 (新加坡)
- value: ap-northeast-1
label:
en_US: Asia Pacific (Tokyo)
zh_Hans: 亚太地区 (东京)
- value: eu-central-1
label:
en_US: Europe (Frankfurt)
zh_Hans: 欧洲 (法兰克福)
- value: eu-west-2
label:
en_US: Eu west London (London)
zh_Hans: 欧洲西部 (伦敦)
- value: us-gov-west-1
label:
en_US: AWS GovCloud (US-West)
zh_Hans: AWS GovCloud (US-West)
- value: ap-southeast-2
label:
en_US: Asia Pacific (Sydney)
zh_Hans: 亚太地区 (悉尼)
- value: ap-northeast-1
label:
en_US: Asia Pacific (Tokyo)
zh_Hans: 亚太地区 (东京)
- value: ap-northeast-2
label:
en_US: Asia Pacific (Seoul)
zh_Hans: 亚太地区(首尔)
- value: ca-central-1
label:
en_US: Canada (Central)
zh_Hans: 加拿大(中部)
- value: eu-central-1
label:
en_US: Europe (Frankfurt)
zh_Hans: 欧洲 (法兰克福)
- value: eu-west-1
label:
en_US: Europe (Ireland)
zh_Hans: 欧洲(爱尔兰)
- value: eu-west-2
label:
en_US: Europe (London)
zh_Hans: 欧洲西部 (伦敦)
- value: eu-west-3
label:
en_US: Europe (Paris)
zh_Hans: 欧洲(巴黎)
- value: sa-east-1
label:
en_US: South America (São Paulo)
zh_Hans: 南美洲(圣保罗)
- value: us-gov-west-1
label:
en_US: AWS GovCloud (US-West)
zh_Hans: AWS GovCloud (US-West)
- variable: model_for_validation
required: false
label:

View File

@ -92,7 +92,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

View File

@ -511,7 +511,7 @@ class FireworksLargeLanguageModel(_CommonFireworks, LargeLanguageModel):
model: str,
messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None,
credentials: dict = None,
credentials: Optional[dict] = None,
) -> int:
"""
Approximate num tokens with GPT2 tokenizer.

View File

@ -0,0 +1,15 @@
- gemini-1.5-pro
- gemini-1.5-pro-latest
- gemini-1.5-pro-001
- gemini-1.5-pro-002
- gemini-1.5-pro-exp-0801
- gemini-1.5-pro-exp-0827
- gemini-1.5-flash
- gemini-1.5-flash-latest
- gemini-1.5-flash-001
- gemini-1.5-flash-002
- gemini-1.5-flash-exp-0827
- gemini-1.5-flash-8b-exp-0827
- gemini-1.5-flash-8b-exp-0924
- gemini-pro
- gemini-pro-vision

View File

@ -5,3 +5,4 @@
- llama3-8b-8192
- mixtral-8x7b-32768
- llama2-70b-4096
- llama-guard-3-8b

View File

@ -0,0 +1,25 @@
model: llama-guard-3-8b
label:
zh_Hans: Llama-Guard-3-8B
en_US: Llama-Guard-3-8B
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: max_tokens
use_template: max_tokens
default: 512
min: 1
max: 8192
pricing:
input: '0.20'
output: '0.20'
unit: '0.000001'
currency: USD

View File

@ -111,7 +111,7 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

View File

@ -2,6 +2,8 @@ from typing import IO, Optional
from openai import OpenAI
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel
from core.model_runtime.model_providers.openai._common import _CommonOpenAI
@ -58,3 +60,18 @@ class OpenAISpeech2TextModel(_CommonOpenAI, Speech2TextModel):
response = client.audio.transcriptions.create(model=model, file=file)
return response.text
def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
"""
used to define customizable model schema
"""
entity = AIModelEntity(
model=model,
label=I18nObject(en_US=model),
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=ModelType.SPEECH2TEXT,
model_properties={},
parameter_rules=[],
)
return entity

View File

@ -688,7 +688,7 @@ class OAIAPICompatLargeLanguageModel(_CommonOaiApiCompat, LargeLanguageModel):
model: str,
messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None,
credentials: dict = None,
credentials: Optional[dict] = None,
) -> int:
"""
Approximate num tokens with GPT2 tokenizer.

View File

@ -3,6 +3,8 @@ from urllib.parse import urljoin
import requests
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel
@ -59,3 +61,18 @@ class OAICompatSpeech2TextModel(_CommonOaiApiCompat, Speech2TextModel):
self._invoke(model, credentials, audio_file)
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
"""
used to define customizable model schema
"""
entity = AIModelEntity(
model=model,
label=I18nObject(en_US=model),
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=ModelType.SPEECH2TEXT,
model_properties={},
parameter_rules=[],
)
return entity

View File

@ -14,6 +14,10 @@
- google/gemini-pro
- cohere/command-r-plus
- cohere/command-r
- meta-llama/llama-3.2-1b-instruct
- meta-llama/llama-3.2-3b-instruct
- meta-llama/llama-3.2-11b-vision-instruct
- meta-llama/llama-3.2-90b-vision-instruct
- meta-llama/llama-3.1-405b-instruct
- meta-llama/llama-3.1-70b-instruct
- meta-llama/llama-3.1-8b-instruct
@ -22,6 +26,7 @@
- mistralai/mixtral-8x22b-instruct
- mistralai/mixtral-8x7b-instruct
- mistralai/mistral-7b-instruct
- qwen/qwen-2.5-72b-instruct
- qwen/qwen-2-72b-instruct
- deepseek/deepseek-chat
- deepseek/deepseek-coder

View File

@ -27,9 +27,9 @@ parameter_rules:
- name: max_tokens
use_template: max_tokens
required: true
default: 4096
default: 8192
min: 1
max: 4096
max: 8192
- name: response_format
use_template: response_format
pricing:

View File

@ -0,0 +1,45 @@
model: meta-llama/llama-3.2-11b-vision-instruct
label:
zh_Hans: llama-3.2-11b-vision-instruct
en_US: llama-3.2-11b-vision-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
- name: max_tokens
use_template: max_tokens
- name: context_length_exceeded_behavior
default: None
label:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
help:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
type: string
options:
- None
- truncate
- error
- name: response_format
use_template: response_format
pricing:
input: '0.055'
output: '0.055'
unit: '0.000001'
currency: USD

View File

@ -0,0 +1,45 @@
model: meta-llama/llama-3.2-1b-instruct
label:
zh_Hans: llama-3.2-1b-instruct
en_US: llama-3.2-1b-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
- name: max_tokens
use_template: max_tokens
- name: context_length_exceeded_behavior
default: None
label:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
help:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
type: string
options:
- None
- truncate
- error
- name: response_format
use_template: response_format
pricing:
input: '0.01'
output: '0.02'
unit: '0.000001'
currency: USD

View File

@ -0,0 +1,45 @@
model: meta-llama/llama-3.2-3b-instruct
label:
zh_Hans: llama-3.2-3b-instruct
en_US: llama-3.2-3b-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
- name: max_tokens
use_template: max_tokens
- name: context_length_exceeded_behavior
default: None
label:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
help:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
type: string
options:
- None
- truncate
- error
- name: response_format
use_template: response_format
pricing:
input: '0.03'
output: '0.05'
unit: '0.000001'
currency: USD

View File

@ -0,0 +1,45 @@
model: meta-llama/llama-3.2-90b-vision-instruct
label:
zh_Hans: llama-3.2-90b-vision-instruct
en_US: llama-3.2-90b-vision-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
- name: max_tokens
use_template: max_tokens
- name: context_length_exceeded_behavior
default: None
label:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
help:
zh_Hans: 上下文长度超出行为
en_US: Context Length Exceeded Behavior
type: string
options:
- None
- truncate
- error
- name: response_format
use_template: response_format
pricing:
input: '0.35'
output: '0.4'
unit: '0.000001'
currency: USD

View File

@ -0,0 +1,30 @@
model: qwen/qwen-2.5-72b-instruct
label:
en_US: qwen-2.5-72b-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
- name: max_tokens
use_template: max_tokens
type: int
default: 512
min: 1
max: 8192
help:
zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
- name: top_p
use_template: top_p
- name: frequency_penalty
use_template: frequency_penalty
pricing:
input: "0.35"
output: "0.4"
unit: "0.000001"
currency: USD

View File

@ -77,7 +77,7 @@ class SageMakerText2SpeechModel(TTSModel):
"""
pass
def _detect_lang_code(self, content: str, map_dict: dict = None):
def _detect_lang_code(self, content: str, map_dict: Optional[dict] = None):
map_dict = {"zh": "<|zh|>", "en": "<|en|>", "ja": "<|jp|>", "zh-TW": "<|yue|>", "ko": "<|ko|>"}
response = self.comprehend_client.detect_dominant_language(Text=content)

View File

@ -0,0 +1,4 @@
- rerank-2
- rerank-lite-2
- rerank-1
- rerank-lite-1

View File

@ -0,0 +1,4 @@
model: rerank-2
model_type: rerank
model_properties:
context_size: 16000

View File

@ -0,0 +1,4 @@
model: rerank-lite-2
model_type: rerank
model_properties:
context_size: 8000

View File

@ -0,0 +1,6 @@
- voyage-3
- voyage-3-lite
- voyage-finance-2
- voyage-multilingual-2
- voyage-law-2
- voyage-code-2

View File

@ -0,0 +1,8 @@
model: voyage-code-2
model_type: text-embedding
model_properties:
context_size: 16000
pricing:
input: '0.00012'
unit: '0.001'
currency: USD

View File

@ -0,0 +1,8 @@
model: voyage-finance-2
model_type: text-embedding
model_properties:
context_size: 32000
pricing:
input: '0.00012'
unit: '0.001'
currency: USD

View File

@ -0,0 +1,8 @@
model: voyage-law-2
model_type: text-embedding
model_properties:
context_size: 16000
pricing:
input: '0.00012'
unit: '0.001'
currency: USD

View File

@ -0,0 +1,8 @@
model: voyage-multilingual-2
model_type: text-embedding
model_properties:
context_size: 32000
pricing:
input: '0.00012'
unit: '0.001'
currency: USD

View File

@ -64,7 +64,7 @@ class ErnieBotLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

View File

@ -223,6 +223,16 @@ class ZhipuAILargeLanguageModel(_CommonZhipuaiAI, LargeLanguageModel):
else:
new_prompt_messages.append(copy_prompt_message)
# zhipuai moved web_search param to tools
if "web_search" in model_parameters:
enable_web_search = model_parameters.get("web_search")
model_parameters.pop("web_search")
web_search_params = {"type": "web_search", "web_search": {"enable": enable_web_search}}
if "tools" in model_parameters:
model_parameters["tools"].append(web_search_params)
else:
model_parameters["tools"] = [web_search_params]
if model in {"glm-4v", "glm-4v-plus"}:
params = self._construct_glm_4v_parameter(model, new_prompt_messages, model_parameters)
else:

View File

@ -41,8 +41,8 @@ class Assistant(BaseAPI):
conversation_id: Optional[str] = None,
attachments: Optional[list[assistant_create_params.AssistantAttachments]] = None,
metadata: dict | None = None,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
@ -72,9 +72,9 @@ class Assistant(BaseAPI):
def query_support(
self,
*,
assistant_id_list: list[str] = None,
request_id: str = None,
user_id: str = None,
assistant_id_list: Optional[list[str]] = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
@ -99,8 +99,8 @@ class Assistant(BaseAPI):
page: int = 1,
page_size: int = 10,
*,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

View File

@ -1,7 +1,7 @@
from __future__ import annotations
from collections.abc import Mapping
from typing import TYPE_CHECKING, Literal, cast
from typing import TYPE_CHECKING, Literal, Optional, cast
import httpx
@ -34,11 +34,11 @@ class Files(BaseAPI):
def create(
self,
*,
file: FileTypes = None,
upload_detail: list[UploadDetail] = None,
file: Optional[FileTypes] = None,
upload_detail: Optional[list[UploadDetail]] = None,
purpose: Literal["fine-tune", "retrieval", "batch"],
knowledge_id: str = None,
sentence_size: int = None,
knowledge_id: Optional[str] = None,
sentence_size: Optional[int] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

View File

@ -34,12 +34,12 @@ class Document(BaseAPI):
def create(
self,
*,
file: FileTypes = None,
file: Optional[FileTypes] = None,
custom_separator: Optional[list[str]] = None,
upload_detail: list[UploadDetail] = None,
upload_detail: Optional[list[UploadDetail]] = None,
purpose: Literal["retrieval"],
knowledge_id: str = None,
sentence_size: int = None,
knowledge_id: Optional[str] = None,
sentence_size: Optional[int] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

View File

@ -31,11 +31,11 @@ class Videos(BaseAPI):
self,
model: str,
*,
prompt: str = None,
image_url: str = None,
prompt: Optional[str] = None,
image_url: Optional[str] = None,
sensitive_word_check: Optional[SensitiveWordCheckRequest] | NotGiven = NOT_GIVEN,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

View File

@ -159,6 +159,16 @@ class LangFuseDataTrace(BaseTraceInstance):
"status": status,
}
)
process_data = json.loads(node_execution.process_data) if node_execution.process_data else {}
model_provider = process_data.get("model_provider", None)
model_name = process_data.get("model_name", None)
if model_provider is not None and model_name is not None:
metadata.update(
{
"model_provider": model_provider,
"model_name": model_name,
}
)
# add span
if trace_info.message_id:
@ -191,7 +201,6 @@ class LangFuseDataTrace(BaseTraceInstance):
self.add_span(langfuse_span_data=span_data)
process_data = json.loads(node_execution.process_data) if node_execution.process_data else {}
if process_data and process_data.get("model_mode") == "chat":
total_token = metadata.get("total_tokens", 0)
# add generation

View File

@ -162,7 +162,7 @@ class RelytVector(BaseVector):
else:
return None
def delete_by_uuids(self, ids: list[str] = None):
def delete_by_uuids(self, ids: Optional[list[str]] = None):
"""Delete by vector IDs.
Args:

View File

@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Any
from typing import Any, Optional
from configs import dify_config
from core.embedding.cached_embedding import CacheEmbedding
@ -25,7 +25,7 @@ class AbstractVectorFactory(ABC):
class Vector:
def __init__(self, dataset: Dataset, attributes: list = None):
def __init__(self, dataset: Dataset, attributes: Optional[list] = None):
if attributes is None:
attributes = ["doc_id", "dataset_id", "document_id", "doc_hash"]
self._dataset = dataset
@ -106,7 +106,7 @@ class Vector:
case _:
raise ValueError(f"Vector store {vector_type} is not supported.")
def create(self, texts: list = None, **kwargs):
def create(self, texts: Optional[list] = None, **kwargs):
if texts:
embeddings = self._embeddings.embed_documents([document.page_content for document in texts])
self._vector_processor.create(texts=texts, embeddings=embeddings, **kwargs)

View File

@ -1,7 +1,7 @@
import re
import tempfile
from pathlib import Path
from typing import Union
from typing import Optional, Union
from urllib.parse import unquote
from configs import dify_config
@ -84,7 +84,7 @@ class ExtractProcessor:
@classmethod
def extract(
cls, extract_setting: ExtractSetting, is_automatic: bool = False, file_path: str = None
cls, extract_setting: ExtractSetting, is_automatic: bool = False, file_path: Optional[str] = None
) -> list[Document]:
if extract_setting.datasource_type == DatasourceType.FILE.value:
with tempfile.TemporaryDirectory() as temp_dir:

View File

@ -1,4 +1,5 @@
import logging
from typing import Optional
from core.rag.extractor.extractor_base import BaseExtractor
from core.rag.models.document import Document
@ -17,7 +18,7 @@ class UnstructuredEpubExtractor(BaseExtractor):
def __init__(
self,
file_path: str,
api_url: str = None,
api_url: Optional[str] = None,
):
"""Initialize with file path."""
self._file_path = file_path

View File

@ -341,7 +341,7 @@ class ToolRuntimeVariablePool(BaseModel):
self.pool.append(variable)
def set_file(self, tool_name: str, value: str, name: str = None) -> None:
def set_file(self, tool_name: str, value: str, name: Optional[str] = None) -> None:
"""
set an image variable

View File

@ -5,31 +5,41 @@
- searchapi
- serper
- searxng
- websearch
- tavily
- stackexchange
- pubmed
- arxiv
- aws
- nominatim
- devdocs
- spider
- firecrawl
- brave
- crossref
- jina
- webscraper
- dalle
- azuredalle
- stability
- wikipedia
- nominatim
- yahoo
- alphavantage
- arxiv
- pubmed
- stablediffusion
- webscraper
- jina
- aippt
- youtube
- code
- wolframalpha
- maths
- github
- chart
- time
- vectorizer
- cogview
- comfyui
- getimgai
- siliconflow
- spark
- stepfun
- xinference
- alphavantage
- yahoo
- openweather
- gaode
- wecom
- qrcode
- aippt
- chart
- youtube
- did
- dingtalk
- discord
- feishu
- feishu_base
- feishu_document
@ -39,4 +49,24 @@
- feishu_calendar
- feishu_spreadsheet
- slack
- twilio
- wecom
- wikipedia
- code
- wolframalpha
- maths
- github
- gitlab
- time
- vectorizer
- qrcode
- tianditu
- google_translate
- hap
- json_process
- judge0ce
- novitaai
- onebot
- regex
- trello
- vanna

View File

@ -1,6 +1,6 @@
import json
from enum import Enum
from typing import Any, Union
from typing import Any, Optional, Union
import boto3
@ -21,7 +21,7 @@ class SageMakerTTSTool(BuiltinTool):
s3_client: Any = None
comprehend_client: Any = None
def _detect_lang_code(self, content: str, map_dict: dict = None):
def _detect_lang_code(self, content: str, map_dict: Optional[dict] = None):
map_dict = {"zh": "<|zh|>", "en": "<|en|>", "ja": "<|jp|>", "zh-TW": "<|yue|>", "ko": "<|ko|>"}
response = self.comprehend_client.detect_dominant_language(Text=content)

View File

@ -0,0 +1,7 @@
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Uploaded to: SVG Repo, www.svgrepo.com, Transformed by: SVG Repo Mixer Tools -->
<svg width="80px" height="80px" viewBox="0 -28.5 256 256" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid" fill="#000000">
<g id="SVGRepo_bgCarrier" stroke-width="0"/>
<g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"/>
<g id="SVGRepo_iconCarrier"> <g> <path d="M216.856339,16.5966031 C200.285002,8.84328665 182.566144,3.2084988 164.041564,0 C161.766523,4.11318106 159.108624,9.64549908 157.276099,14.0464379 C137.583995,11.0849896 118.072967,11.0849896 98.7430163,14.0464379 C96.9108417,9.64549908 94.1925838,4.11318106 91.8971895,0 C73.3526068,3.2084988 55.6133949,8.86399117 39.0420583,16.6376612 C5.61752293,67.146514 -3.4433191,116.400813 1.08711069,164.955721 C23.2560196,181.510915 44.7403634,191.567697 65.8621325,198.148576 C71.0772151,190.971126 75.7283628,183.341335 79.7352139,175.300261 C72.104019,172.400575 64.7949724,168.822202 57.8887866,164.667963 C59.7209612,163.310589 61.5131304,161.891452 63.2445898,160.431257 C105.36741,180.133187 151.134928,180.133187 192.754523,160.431257 C194.506336,161.891452 196.298154,163.310589 198.110326,164.667963 C191.183787,168.842556 183.854737,172.420929 176.223542,175.320965 C180.230393,183.341335 184.861538,190.991831 190.096624,198.16893 C211.238746,191.588051 232.743023,181.531619 254.911949,164.955721 C260.227747,108.668201 245.831087,59.8662432 216.856339,16.5966031 Z M85.4738752,135.09489 C72.8290281,135.09489 62.4592217,123.290155 62.4592217,108.914901 C62.4592217,94.5396472 72.607595,82.7145587 85.4738752,82.7145587 C98.3405064,82.7145587 108.709962,94.5189427 108.488529,108.914901 C108.508531,123.290155 98.3405064,135.09489 85.4738752,135.09489 Z M170.525237,135.09489 C157.88039,135.09489 147.510584,123.290155 147.510584,108.914901 C147.510584,94.5396472 157.658606,82.7145587 170.525237,82.7145587 C183.391518,82.7145587 193.761324,94.5189427 193.539891,108.914901 C193.539891,123.290155 183.391518,135.09489 170.525237,135.09489 Z" fill="#5865F2" fill-rule="nonzero"> </path> </g> </g>
</svg>


View File

@ -0,0 +1,9 @@
from typing import Any
from core.tools.provider.builtin.discord.tools.discord_webhook import DiscordWebhookTool
from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
class DiscordProvider(BuiltinToolProviderController):
def _validate_credentials(self, credentials: dict[str, Any]) -> None:
DiscordWebhookTool()

View File

@ -0,0 +1,16 @@
identity:
author: Ice Yao
name: discord
label:
en_US: Discord
zh_Hans: Discord
pt_BR: Discord
description:
en_US: Discord Webhook
zh_Hans: Discord Webhook
pt_BR: Discord Webhook
icon: icon.svg
tags:
- social
- productivity
credentials_for_provider:

View File

@ -0,0 +1,49 @@
from typing import Any, Union
import httpx
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.tool.builtin_tool import BuiltinTool
class DiscordWebhookTool(BuiltinTool):
def _invoke(
self, user_id: str, tool_parameters: dict[str, Any]
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
Incoming Webhooks
API Document:
https://discord.com/developers/docs/resources/webhook#execute-webhook
"""
content = tool_parameters.get("content", "")
if not content:
return self.create_text_message("Invalid parameter content")
webhook_url = tool_parameters.get("webhook_url", "")
if not webhook_url.startswith("https://discord.com/api/webhooks/"):
return self.create_text_message(
f"Invalid parameter webhook_url ${webhook_url}, \
not a valid Discord webhook URL"
)
headers = {
"Content-Type": "application/json",
}
params = {}
payload = {
"content": content,
}
try:
res = httpx.post(webhook_url, headers=headers, params=params, json=payload)
if res.is_success:
return self.create_text_message("Text message was sent successfully")
else:
return self.create_text_message(
f"Failed to send the text message, \
status code: {res.status_code}, response: {res.text}"
)
except Exception as e:
return self.create_text_message("Failed to send message through webhook. {}".format(e))

View File

@ -0,0 +1,40 @@
identity:
name: discord_webhook
author: Ice Yao
label:
en_US: Incoming Webhook to send message
zh_Hans: 通过入站Webhook发送消息
pt_BR: Incoming Webhook to send message
icon: icon.svg
description:
human:
en_US: Sending a message on Discord via the Incoming Webhook
zh_Hans: 通过入站Webhook在Discord上发送消息
pt_BR: Sending a message on Discord via the Incoming Webhook
llm: A tool for sending messages to a chat on Discord.
parameters:
- name: webhook_url
type: string
required: true
label:
en_US: Discord Incoming Webhook url
zh_Hans: Discord入站Webhook的url
pt_BR: Discord Incoming Webhook url
human_description:
en_US: Discord Incoming Webhook url
zh_Hans: Discord入站Webhook的url
pt_BR: Discord Incoming Webhook url
form: form
- name: content
type: string
required: true
label:
en_US: content
zh_Hans: 消息内容
pt_BR: content
human_description:
en_US: Content to send to the channel or person.
zh_Hans: 消息内容文本
pt_BR: Content to send to the channel or person.
llm_description: Content of the message
form: llm

File diff suppressed because it is too large

View File

@ -32,16 +32,17 @@ class StepfunTool(BuiltinTool):
prompt = tool_parameters.get("prompt", "")
if not prompt:
return self.create_text_message("Please input prompt")
if len(prompt) > 1024:
return self.create_text_message("The prompt length should less than 1024")
seed = tool_parameters.get("seed", 0)
if seed > 0:
extra_body["seed"] = seed
steps = tool_parameters.get("steps", 0)
steps = tool_parameters.get("steps", 50)
if steps > 0:
extra_body["steps"] = steps
negative_prompt = tool_parameters.get("negative_prompt", "")
if negative_prompt:
extra_body["negative_prompt"] = negative_prompt
cfg_scale = tool_parameters.get("cfg_scale", 7.5)
if cfg_scale > 0:
extra_body["cfg_scale"] = cfg_scale
# call openapi stepfun model
response = client.images.generate(
@ -51,7 +52,6 @@ class StepfunTool(BuiltinTool):
n=tool_parameters.get("n", 1),
extra_body=extra_body,
)
print(response)
result = []
for image in response.data:

View File

@ -33,9 +33,9 @@ parameters:
type: select
required: false
human_description:
en_US: used for selecting the image size
zh_Hans: 用于选择图像大小
pt_BR: used for selecting the image size
en_US: The size of the generated image
zh_Hans: 生成的图片大小
pt_BR: The size of the generated image
label:
en_US: Image size
zh_Hans: 图像大小
@ -77,17 +77,17 @@ parameters:
type: number
required: true
human_description:
en_US: used for selecting the number of images
zh_Hans: 用于选择图像数量
pt_BR: used for selecting the number of images
en_US: Number of generated images; currently only one image can be generated at a time
zh_Hans: 生成的图像数量,当前仅支持每次生成一张图片
pt_BR: Number of generated images; currently only one image can be generated at a time
label:
en_US: Number of images
zh_Hans: 图像数量
pt_BR: Number of images
en_US: Number of generated images
zh_Hans: 生成的图像数量
pt_BR: Number of generated images
form: form
default: 1
min: 1
max: 10
max: 1
- name: seed
type: number
required: false
@ -109,21 +109,25 @@ parameters:
zh_Hans: Steps
pt_BR: Steps
human_description:
en_US: Steps
zh_Hans: Steps
pt_BR: Steps
en_US: Steps, now supports integers between 1 and 100
zh_Hans: Steps, 当前支持 1-100 之间整数
pt_BR: Steps, now supports integers between 1 and 100
form: form
default: 10
- name: negative_prompt
type: string
default: 50
min: 1
max: 100
- name: cfg_scale
type: number
required: false
label:
en_US: Negative prompt
zh_Hans: Negative prompt
pt_BR: Negative prompt
en_US: Classifier-free guidance scale
zh_Hans: Classifier-free guidance scale
pt_BR: Classifier-free guidance scale
human_description:
en_US: Negative prompt
zh_Hans: Negative prompt
pt_BR: Negative prompt
en_US: Classifier-free guidance scale
zh_Hans: Classifier-free guidance scale
pt_BR: Classifier-free guidance scale
form: form
default: (worst quality:1.3), (nsfw), low quality
default: 7.5
min: 1
max: 10

View File

@ -0,0 +1,44 @@
from datetime import datetime
from typing import Any, Union
import pytz
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.errors import ToolInvokeError
from core.tools.tool.builtin_tool import BuiltinTool
class LocaltimeToTimestampTool(BuiltinTool):
def _invoke(
self,
user_id: str,
tool_parameters: dict[str, Any],
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
Convert localtime to timestamp
"""
localtime = tool_parameters.get("localtime")
timezone = tool_parameters.get("timezone", "Asia/Shanghai")
if not timezone:
timezone = None
time_format = "%Y-%m-%d %H:%M:%S"
timestamp = self.localtime_to_timestamp(localtime, time_format, timezone)
if not timestamp:
return self.create_text_message(f"Invalid localtime: {localtime}")
return self.create_text_message(f"{timestamp}")
@staticmethod
def localtime_to_timestamp(localtime: str, time_format: str, local_tz=None) -> int | None:
try:
if local_tz is None:
local_tz = datetime.now().astimezone().tzinfo
if isinstance(local_tz, str):
local_tz = pytz.timezone(local_tz)
local_time = datetime.strptime(localtime, time_format)
localtime = local_tz.localize(local_time)
timestamp = int(localtime.timestamp())
return timestamp
except Exception as e:
raise ToolInvokeError(str(e))
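A quick sketch of the conversion this helper performs, using pytz's localize() (which is why the code above attaches the zone that way rather than passing tzinfo= directly):

from datetime import datetime
import pytz

fmt = "%Y-%m-%d %H:%M:%S"
naive = datetime.strptime("2024-01-01 08:00:00", fmt)
# pytz zones must be attached via localize(); tzinfo= in the constructor
# would pick the zone's historical LMT offset and skew the result.
aware = pytz.timezone("Asia/Shanghai").localize(naive)
print(int(aware.timestamp()))  # 1704067200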

View File

@ -0,0 +1,33 @@
identity:
name: localtime_to_timestamp
author: zhuhao
label:
en_US: Localtime to timestamp
zh_Hans: 获取时间戳
description:
human:
en_US: A tool for converting local time to a timestamp
zh_Hans: 获取时间戳
llm: A tool for converting local time to a timestamp
parameters:
- name: localtime
type: string
required: true
form: llm
label:
en_US: localtime
zh_Hans: 本地时间
human_description:
en_US: Local time, such as 2024-01-01 00:00:00
zh_Hans: 本地时间, 比如2024-01-01 00:00:00
- name: timezone
type: string
required: false
form: llm
label:
en_US: Timezone
zh_Hans: 时区
human_description:
en_US: Timezone, such as Asia/Shanghai
zh_Hans: 时区, 比如Asia/Shanghai
default: Asia/Shanghai

View File

@ -0,0 +1,44 @@
from datetime import datetime
from typing import Any, Union
import pytz
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.errors import ToolInvokeError
from core.tools.tool.builtin_tool import BuiltinTool
class TimestampToLocaltimeTool(BuiltinTool):
def _invoke(
self,
user_id: str,
tool_parameters: dict[str, Any],
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
Convert timestamp to localtime
"""
timestamp = tool_parameters.get("timestamp")
timezone = tool_parameters.get("timezone", "Asia/Shanghai")
if not timezone:
timezone = None
time_format = "%Y-%m-%d %H:%M:%S"
localtime = self.timestamp_to_localtime(timestamp, timezone)
if not localtime:
return self.create_text_message(f"Invalid timestamp: {timestamp}")
localtime_format = localtime.strftime(time_format)
return self.create_text_message(f"{localtime_format}")
@staticmethod
def timestamp_to_localtime(timestamp: int, local_tz=None) -> datetime | None:
try:
if local_tz is None:
local_tz = datetime.now().astimezone().tzinfo
if isinstance(local_tz, str):
local_tz = pytz.timezone(local_tz)
local_time = datetime.fromtimestamp(timestamp, local_tz)
return local_time
except Exception as e:
raise ToolInvokeError(str(e))
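The reverse direction, and a round trip against the sibling tool above (values are illustrative):

from datetime import datetime
import pytz

ts = 1704067200
# fromtimestamp() with an explicit tz returns an aware datetime already
# normalized to that zone, so no localize() step is needed here.
local = datetime.fromtimestamp(ts, pytz.timezone("Asia/Shanghai"))
print(local.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-01-01 08:00:00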

View File

@ -0,0 +1,33 @@
identity:
name: timestamp_to_localtime
author: zhuhao
label:
en_US: Timestamp to localtime
zh_Hans: 时间戳转换
description:
human:
en_US: A tool for converting a timestamp to local time
zh_Hans: 时间戳转换
llm: A tool for converting a timestamp to local time
parameters:
- name: timestamp
type: number
required: true
form: llm
label:
en_US: Timestamp
zh_Hans: 时间戳
human_description:
en_US: Timestamp
zh_Hans: 时间戳
- name: timezone
type: string
required: false
form: llm
label:
en_US: Timezone
zh_Hans: 时区
human_description:
en_US: Timezone, such as Asia/Shanghai
zh_Hans: 时区, 比如Asia/Shanghai
default: Asia/Shanghai

View File

@ -8,6 +8,9 @@ identity:
en_US: The fastest way to get actionable insights from your database just by asking questions.
zh_Hans: 一个基于大模型和RAG的Text2SQL工具。
icon: icon.png
tags:
- utilities
- productivity
credentials_for_provider:
api_key:
type: secret-input

View File

@ -104,14 +104,15 @@ class StableDiffusionTool(BuiltinTool):
model = self.runtime.credentials.get("model", None)
if not model:
return self.create_text_message("Please input model")
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
# set model
try:
url = str(URL(base_url) / "sdapi" / "v1" / "options")
response = post(
url,
json={"sd_model_checkpoint": model},
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
raise ToolProviderCredentialValidationError("Failed to set model, please tell user to set model")
@ -257,14 +258,15 @@ class StableDiffusionTool(BuiltinTool):
draw_options["prompt"] = f"{lora},{prompt}"
else:
draw_options["prompt"] = prompt
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
try:
url = str(URL(base_url) / "sdapi" / "v1" / "img2img")
response = post(
url,
json=draw_options,
timeout=120,
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
return self.create_text_message("Failed to generate image")
@ -298,14 +300,15 @@ class StableDiffusionTool(BuiltinTool):
else:
draw_options["prompt"] = prompt
draw_options["override_settings"]["sd_model_checkpoint"] = model
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
try:
url = str(URL(base_url) / "sdapi" / "v1" / "txt2img")
response = post(
url,
json=draw_options,
timeout=120,
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
return self.create_text_message("Failed to generate image")

View File

@ -6,12 +6,18 @@ from core.tools.provider.builtin_tool_provider import BuiltinToolProviderControl
class XinferenceProvider(BuiltinToolProviderController):
def _validate_credentials(self, credentials: dict) -> None:
base_url = credentials.get("base_url")
api_key = credentials.get("api_key")
model = credentials.get("model")
base_url = credentials.get("base_url", "").removesuffix("/")
api_key = credentials.get("api_key", "")
if not api_key:
api_key = "abc"
credentials["api_key"] = api_key
model = credentials.get("model", "")
if not base_url or not model:
raise ToolProviderCredentialValidationError("Xinference base_url and model are required")
headers = {"Authorization": f"Bearer {api_key}"}
res = requests.post(
f"{base_url}/sdapi/v1/options",
headers={"Authorization": f"Bearer {api_key}"},
headers=headers,
json={"sd_model_checkpoint": model},
)
if res.status_code != 200:
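A small note on the base_url normalization added here: removesuffix() (Python 3.9+) strips at most one trailing slash, so joined paths don't double up:

base_url = "http://localhost:9997/".removesuffix("/")
print(f"{base_url}/sdapi/v1/options")             # http://localhost:9997/sdapi/v1/options
# URLs without a trailing slash pass through unchanged:
print("http://localhost:9997".removesuffix("/"))  # http://localhost:9997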

View File

@ -31,7 +31,7 @@ credentials_for_provider:
zh_Hans: 请输入你的模型名称
api_key:
type: secret-input
required: true
required: false
label:
en_US: API Key
zh_Hans: Xinference 服务器的 API Key

View File

@ -1,3 +1,5 @@
from typing import Optional
from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage
from core.tools.entities.tool_entities import ToolProviderType
@ -124,7 +126,7 @@ class BuiltinTool(Tool):
return result
def get_url(self, url: str, user_agent: str = None) -> str:
def get_url(self, url: str, user_agent: Optional[str] = None) -> str:
"""
get url
"""

View File

@ -318,7 +318,7 @@ class Tool(BaseModel, ABC):
"""
return ToolInvokeMessage(type=ToolInvokeMessage.MessageType.TEXT, message=text, save_as=save_as)
def create_blob_message(self, blob: bytes, meta: dict = None, save_as: str = "") -> ToolInvokeMessage:
def create_blob_message(self, blob: bytes, meta: Optional[dict] = None, save_as: str = "") -> ToolInvokeMessage:
"""
create a blob message

View File

@ -68,10 +68,13 @@ class WorkflowTool(Tool):
result = []
outputs = data.get("outputs", {})
outputs, files = self._extract_files(outputs)
for file in files:
result.append(self.create_file_var_message(file))
outputs = data.get("outputs")
if outputs is None:
outputs = {}
else:
outputs, files = self._extract_files(outputs)
for file in files:
result.append(self.create_file_var_message(file))
result.append(self.create_text_message(json.dumps(outputs, ensure_ascii=False)))
result.append(self.create_json_message(outputs))
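Why the extra branch matters: .get("outputs", {}) only substitutes when the key is missing; a workflow that ran but produced nothing can return an explicit null, which arrives as None and was previously passed straight into _extract_files(). A sketch:

data = {"outputs": None}  # key present, value explicitly null

# Old behavior: the default never applies when the key exists.
outputs = data.get("outputs", {})
print(outputs)  # None

# New behavior: check for None explicitly.
outputs = data.get("outputs")
if outputs is None:
    outputs = {}
print(outputs)  # {}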

View File

@ -4,7 +4,7 @@ import mimetypes
from collections.abc import Generator
from os import listdir, path
from threading import Lock
from typing import Any, Union
from typing import Any, Optional, Union
from configs import dify_config
from core.agent.entities import AgentToolEntity
@ -72,7 +72,7 @@ class ToolManager:
@classmethod
def get_tool(
cls, provider_type: str, provider_id: str, tool_name: str, tenant_id: str = None
cls, provider_type: str, provider_id: str, tool_name: str, tenant_id: Optional[str] = None
) -> Union[BuiltinTool, ApiTool]:
"""
get the tool

View File

@ -1,3 +1,5 @@
from typing import Optional
import httpx
from core.tools.errors import ToolProviderCredentialValidationError
@ -32,7 +34,12 @@ class FeishuRequest:
return res.get("tenant_access_token")
def _send_request(
self, url: str, method: str = "post", require_token: bool = True, payload: dict = None, params: dict = None
self,
url: str,
method: str = "post",
require_token: bool = True,
payload: Optional[dict] = None,
params: Optional[dict] = None,
):
headers = {
"Content-Type": "application/json",

View File

@ -3,6 +3,7 @@ import uuid
from json import dumps as json_dumps
from json import loads as json_loads
from json.decoder import JSONDecodeError
from typing import Optional
from requests import get
from yaml import YAMLError, safe_load
@ -16,7 +17,7 @@ from core.tools.errors import ToolApiSchemaError, ToolNotSupportedError, ToolPro
class ApiBasedToolSchemaParser:
@staticmethod
def parse_openapi_to_tool_bundle(
openapi: dict, extra_info: dict = None, warning: dict = None
openapi: dict, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
warning = warning if warning is not None else {}
extra_info = extra_info if extra_info is not None else {}
@ -174,7 +175,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def parse_openapi_yaml_to_tool_bundle(
yaml: str, extra_info: dict = None, warning: dict = None
yaml: str, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
"""
parse openapi yaml to tool bundle
@ -191,7 +192,7 @@ class ApiBasedToolSchemaParser:
return ApiBasedToolSchemaParser.parse_openapi_to_tool_bundle(openapi, extra_info=extra_info, warning=warning)
@staticmethod
def parse_swagger_to_openapi(swagger: dict, extra_info: dict = None, warning: dict = None) -> dict:
def parse_swagger_to_openapi(swagger: dict, extra_info: Optional[dict], warning: Optional[dict]) -> dict:
"""
parse swagger to openapi
@ -253,7 +254,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def parse_openai_plugin_json_to_tool_bundle(
json: str, extra_info: dict = None, warning: dict = None
json: str, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
"""
parse openapi plugin yaml to tool bundle
@ -287,7 +288,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def auto_parse_to_tool_bundle(
content: str, extra_info: dict = None, warning: dict = None
content: str, extra_info: Optional[dict], warning: Optional[dict]
) -> tuple[list[ApiToolBundle], str]:
"""
auto parse to tool bundle

View File

@ -9,6 +9,7 @@ import tempfile
import unicodedata
from contextlib import contextmanager
from pathlib import Path
from typing import Optional
from urllib.parse import unquote
import chardet
@ -36,7 +37,7 @@ def page_result(text: str, cursor: int, max_length: int) -> str:
return text[cursor : cursor + max_length]
def get_url(url: str, user_agent: str = None) -> str:
def get_url(url: str, user_agent: Optional[str] = None) -> str:
"""Fetch URL and return the contents as a string."""
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)"

Some files were not shown because too many files have changed in this diff