diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3fad0cd24..5c0c4de1a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,26 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [0.6.7] - 2025-05-07
+
+### Added
+
+- 🌐 **Custom Azure TTS API URL Support Added**: You can now define a custom Azure Text-to-Speech endpoint—enabling flexibility for enterprise deployments and regional compliance.
+- ⚙️ **TOOL_SERVER_CONNECTIONS Environment Variable Support**: Easily configure and deploy tool servers via environment variables, streamlining setup and enabling faster enterprise provisioning.
+- 👥 **Enhanced OAuth Group Handling as String or List**: OAuth group data can now be passed as either a list or a comma-separated string, improving compatibility with varied identity provider formats and reducing onboarding friction.
+
+### Fixed
+
+- 🧠 **Embedding with Ollama Proxy Endpoints Restored**: Fixed an issue where missing API config broke embedding for proxied Ollama models—ensuring consistent performance and compatibility.
+- 🔐 **OIDC OAuth Login Issue Resolved**: Users can once again sign in seamlessly using OpenID Connect-based OAuth, eliminating login interruptions and improving reliability.
+- 📝 **Notes Feature Access Fixed for Non-Admins**: Fixed an issue preventing non-admin users from accessing the Notes feature, restoring full cross-role collaboration capabilities.
+- 🖼️ **Tika Loader Image Extraction Problem Resolved**: Ensured TikaLoader now processes 'extract_images' parameter correctly, restoring complete file extraction functionality in document workflows.
+- 🎨 **Automatic1111 Image Model Setting Applied Properly**: Fixed an issue where switching to a specific image model via the UI wasn’t reflected in generation, re-enabling full visual creativity control.
+- 🏷️ **Multiple XML Tags in Messages Now Parsed Correctly**: Fixed parsing issues when messages included multiple XML-style tags, ensuring clean and unbroken rendering of rich content in chats.
+- 🖌️ **OpenAI Image Generation Issues Resolved**: Resolved broken image output when using OpenAI’s image generation, ensuring fully functional visual creation workflows.
+- 🔎 **Tool Server Settings UI Privacy Restored**: Prevented restricted users from accessing tool server settings via search—restoring tight permissions control and safeguarding sensitive configurations.
+- 🎧 **WebM Audio Transcription Now Supported**: Fixed an issue where WebM files failed during audio transcription—these formats are now fully supported, ensuring smoother voice note workflows and broader file compatibility.
+
## [0.6.6] - 2025-05-05
### Added
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index eb54b4894..59285aa42 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -2,13 +2,13 @@
## Our Pledge
-As members, contributors, and leaders of this community, we pledge to make participation in our open-source project a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+As members, contributors, and leaders of this community, we pledge to make participation in our project a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We are committed to creating and maintaining an open, respectful, and professional environment where positive contributions and meaningful discussions can flourish. By participating in this project, you agree to uphold these values and align your behavior to the standards outlined in this Code of Conduct.
## Why These Standards Are Important
-Open-source projects rely on a community of volunteers dedicating their time, expertise, and effort toward a shared goal. These projects are inherently collaborative but also fragile, as the success of the project depends on the goodwill, energy, and productivity of those involved.
+Projects rely on a community of volunteers dedicating their time, expertise, and effort toward a shared goal. These projects are inherently collaborative but also fragile, as the success of the project depends on the goodwill, energy, and productivity of those involved.
Maintaining a positive and respectful environment is essential to safeguarding the integrity of this project and protecting contributors' efforts. Behavior that disrupts this atmosphere—whether through hostility, entitlement, or unprofessional conduct—can severely harm the morale and productivity of the community. **Strict enforcement of these standards ensures a safe and supportive space for meaningful collaboration.**
@@ -79,7 +79,7 @@ This approach ensures that disruptive behaviors are addressed swiftly and decisi
## Why Zero Tolerance Is Necessary
-Open-source projects thrive on collaboration, goodwill, and mutual respect. Toxic behaviors—such as entitlement, hostility, or persistent negativity—threaten not just individual contributors but the health of the project as a whole. Allowing such behaviors to persist robs contributors of their time, energy, and enthusiasm for the work they do.
+Projects thrive on collaboration, goodwill, and mutual respect. Toxic behaviors—such as entitlement, hostility, or persistent negativity—threaten not just individual contributors but the health of the project as a whole. Allowing such behaviors to persist robs contributors of their time, energy, and enthusiasm for the work they do.
By enforcing a zero-tolerance policy, we ensure that the community remains a safe, welcoming space for all participants. These measures are not about harshness—they are about protecting contributors and fostering a productive environment where innovation can thrive.
diff --git a/Caddyfile.localhost b/Caddyfile.localhost
deleted file mode 100644
index 80728eedf..000000000
--- a/Caddyfile.localhost
+++ /dev/null
@@ -1,64 +0,0 @@
-# Run with
-# caddy run --envfile ./example.env --config ./Caddyfile.localhost
-#
-# This is configured for
-# - Automatic HTTPS (even for localhost)
-# - Reverse Proxying to Ollama API Base URL (http://localhost:11434/api)
-# - CORS
-# - HTTP Basic Auth API Tokens (uncomment basicauth section)
-
-
-# CORS Preflight (OPTIONS) + Request (GET, POST, PATCH, PUT, DELETE)
-(cors-api) {
- @match-cors-api-preflight method OPTIONS
- handle @match-cors-api-preflight {
- header {
- Access-Control-Allow-Origin "{http.request.header.origin}"
- Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE, OPTIONS"
- Access-Control-Allow-Headers "Origin, Accept, Authorization, Content-Type, X-Requested-With"
- Access-Control-Allow-Credentials "true"
- Access-Control-Max-Age "3600"
- defer
- }
- respond "" 204
- }
-
- @match-cors-api-request {
- not {
- header Origin "{http.request.scheme}://{http.request.host}"
- }
- header Origin "{http.request.header.origin}"
- }
- handle @match-cors-api-request {
- header {
- Access-Control-Allow-Origin "{http.request.header.origin}"
- Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE, OPTIONS"
- Access-Control-Allow-Headers "Origin, Accept, Authorization, Content-Type, X-Requested-With"
- Access-Control-Allow-Credentials "true"
- Access-Control-Max-Age "3600"
- defer
- }
- }
-}
-
-# replace localhost with example.com or whatever
-localhost {
- ## HTTP Basic Auth
- ## (uncomment to enable)
- # basicauth {
- # # see .example.env for how to generate tokens
- # {env.OLLAMA_API_ID} {env.OLLAMA_API_TOKEN_DIGEST}
- # }
-
- handle /api/* {
- # Comment to disable CORS
- import cors-api
-
- reverse_proxy localhost:11434
- }
-
- # Same-Origin Static Web Server
- file_server {
- root ./build/
- }
-}
diff --git a/backend/open_webui/config.py b/backend/open_webui/config.py
index a6cffeecd..5c617f190 100644
--- a/backend/open_webui/config.py
+++ b/backend/open_webui/config.py
@@ -921,11 +921,19 @@ OPENAI_API_BASE_URL = "https://api.openai.com/v1"
# TOOL_SERVERS
####################################
+try:
+ tool_server_connections = json.loads(
+ os.environ.get("TOOL_SERVER_CONNECTIONS", "[]")
+ )
+except Exception as e:
+ log.exception(f"Error loading TOOL_SERVER_CONNECTIONS: {e}")
+ tool_server_connections = []
+
TOOL_SERVER_CONNECTIONS = PersistentConfig(
"TOOL_SERVER_CONNECTIONS",
"tool_server.connections",
- [],
+ tool_server_connections,
)
####################################
@@ -1002,6 +1010,7 @@ if default_prompt_suggestions == []:
"content": "Could you start by asking me about instances when I procrastinate the most and then give me some suggestions to overcome it?",
},
]
+
DEFAULT_PROMPT_SUGGESTIONS = PersistentConfig(
"DEFAULT_PROMPT_SUGGESTIONS",
"ui.prompt_suggestions",
@@ -2689,7 +2698,7 @@ AUDIO_STT_AZURE_BASE_URL = PersistentConfig(
AUDIO_STT_AZURE_MAX_SPEAKERS = PersistentConfig(
"AUDIO_STT_AZURE_MAX_SPEAKERS",
"audio.stt.azure.max_speakers",
- os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", "3"),
+ os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""),
)
AUDIO_TTS_OPENAI_API_BASE_URL = PersistentConfig(
@@ -2737,7 +2746,13 @@ AUDIO_TTS_SPLIT_ON = PersistentConfig(
AUDIO_TTS_AZURE_SPEECH_REGION = PersistentConfig(
"AUDIO_TTS_AZURE_SPEECH_REGION",
"audio.tts.azure.speech_region",
- os.getenv("AUDIO_TTS_AZURE_SPEECH_REGION", "eastus"),
+ os.getenv("AUDIO_TTS_AZURE_SPEECH_REGION", ""),
+)
+
+AUDIO_TTS_AZURE_SPEECH_BASE_URL = PersistentConfig(
+ "AUDIO_TTS_AZURE_SPEECH_BASE_URL",
+ "audio.tts.azure.speech_base_url",
+ os.getenv("AUDIO_TTS_AZURE_SPEECH_BASE_URL", ""),
)
AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT = PersistentConfig(
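The config hunk above tolerates a malformed `TOOL_SERVER_CONNECTIONS` value by logging the error and falling back to an empty list. A minimal standalone sketch of that parsing (the helper name and the extra non-list guard are illustrative additions, not part of the patch):

```python
import json


def load_tool_server_connections(env: dict) -> list:
    """Parse TOOL_SERVER_CONNECTIONS as JSON, falling back to an empty list."""
    raw = env.get("TOOL_SERVER_CONNECTIONS", "[]")
    try:
        parsed = json.loads(raw)
    except Exception:
        # Mirrors the patch: a malformed value must not crash startup.
        return []
    # Extra guard (not in the patch): reject valid JSON that is not a list.
    return parsed if isinstance(parsed, list) else []
```

This keeps a bad deployment value from taking the whole app down, at the cost of silently ignoring it apart from the log line.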
diff --git a/backend/open_webui/main.py b/backend/open_webui/main.py
index db124bedd..06577d481 100644
--- a/backend/open_webui/main.py
+++ b/backend/open_webui/main.py
@@ -166,6 +166,7 @@ from open_webui.config import (
AUDIO_TTS_SPLIT_ON,
AUDIO_TTS_VOICE,
AUDIO_TTS_AZURE_SPEECH_REGION,
+ AUDIO_TTS_AZURE_SPEECH_BASE_URL,
AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT,
PLAYWRIGHT_WS_URL,
PLAYWRIGHT_TIMEOUT,
@@ -437,7 +438,7 @@ print(
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝ ╚══╝╚══╝ ╚══════╝╚═════╝ ╚═════╝ ╚═╝
-v{VERSION} - building the best open-source AI user interface.
+v{VERSION} - building the best AI user interface.
{f"Commit: {WEBUI_BUILD_HASH}" if WEBUI_BUILD_HASH != "dev-build" else ""}
https://github.com/open-webui/open-webui
"""
@@ -852,6 +853,7 @@ app.state.config.TTS_SPLIT_ON = AUDIO_TTS_SPLIT_ON
app.state.config.TTS_AZURE_SPEECH_REGION = AUDIO_TTS_AZURE_SPEECH_REGION
+app.state.config.TTS_AZURE_SPEECH_BASE_URL = AUDIO_TTS_AZURE_SPEECH_BASE_URL
app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT = AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT
diff --git a/backend/open_webui/retrieval/loaders/youtube.py b/backend/open_webui/retrieval/loaders/youtube.py
index f59dd7df5..d908cc8cb 100644
--- a/backend/open_webui/retrieval/loaders/youtube.py
+++ b/backend/open_webui/retrieval/loaders/youtube.py
@@ -62,12 +62,17 @@ class YoutubeLoader:
_video_id = _parse_video_id(video_id)
self.video_id = _video_id if _video_id is not None else video_id
self._metadata = {"source": video_id}
- self.language = language
self.proxy_url = proxy_url
+
+ # Ensure language is a list
if isinstance(language, str):
self.language = [language]
else:
- self.language = language
+ self.language = list(language)
+
+ # Add English as fallback if not already in the list
+ if "en" not in self.language:
+ self.language.append("en")
def load(self) -> List[Document]:
"""Load YouTube transcripts into `Document` objects."""
@@ -101,17 +106,31 @@ class YoutubeLoader:
log.exception("Loading YouTube transcript failed")
return []
- try:
- transcript = transcript_list.find_transcript(self.language)
- except NoTranscriptFound:
- transcript = transcript_list.find_transcript(["en"])
+ # Try each language in order of priority
+ for lang in self.language:
+ try:
+ transcript = transcript_list.find_transcript([lang])
+ log.debug(f"Found transcript for language '{lang}'")
+ transcript_pieces: List[Dict[str, Any]] = transcript.fetch()
+ transcript_text = " ".join(
+ map(
+ lambda transcript_piece: transcript_piece.text.strip(" "),
+ transcript_pieces,
+ )
+ )
+ return [Document(page_content=transcript_text, metadata=self._metadata)]
+ except NoTranscriptFound:
+ log.debug(f"No transcript found for language '{lang}'")
+ continue
+            except Exception:
+                log.exception(f"Error finding transcript for language '{lang}'")
+                raise
- transcript_pieces: List[Dict[str, Any]] = transcript.fetch()
-
- transcript = " ".join(
- map(
- lambda transcript_piece: transcript_piece.text.strip(" "),
- transcript_pieces,
- )
+ # If we get here, all languages failed
+ languages_tried = ", ".join(self.language)
+ log.warning(
+ f"No transcript found for any of the specified languages: {languages_tried}. Verify if the video has transcripts, add more languages if needed."
+ )
+ raise NoTranscriptFound(
+        "No transcript found for any supported language. Verify if the video has transcripts, add more languages if needed."
)
- return [Document(page_content=transcript, metadata=self._metadata)]
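The rewritten loader first normalizes the `language` argument into a list with `en` appended as a fallback, then tries each language in priority order. The pattern can be sketched in isolation like this (`normalize_languages` and `find_first_transcript` are hypothetical helper names, and availability is simplified to set membership rather than the youtube-transcript-api calls):

```python
def normalize_languages(language) -> list:
    """Coerce a str-or-iterable preference into a list, appending 'en' as fallback."""
    langs = [language] if isinstance(language, str) else list(language)
    if "en" not in langs:
        langs.append("en")
    return langs


def find_first_transcript(langs: list, available: set) -> str:
    """Return the first preferred language that has a transcript."""
    for lang in langs:
        if lang in available:
            return lang
    raise LookupError(f"No transcript found for any of: {', '.join(langs)}")
```

Because `en` is always appended last, the old hard-coded English fallback is preserved while user-specified languages win when present.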
diff --git a/backend/open_webui/routers/audio.py b/backend/open_webui/routers/audio.py
index 0153eb10d..5952bb59c 100644
--- a/backend/open_webui/routers/audio.py
+++ b/backend/open_webui/routers/audio.py
@@ -138,6 +138,7 @@ class TTSConfigForm(BaseModel):
VOICE: str
SPLIT_ON: str
AZURE_SPEECH_REGION: str
+ AZURE_SPEECH_BASE_URL: str
AZURE_SPEECH_OUTPUT_FORMAT: str
@@ -172,6 +173,7 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
"VOICE": request.app.state.config.TTS_VOICE,
"SPLIT_ON": request.app.state.config.TTS_SPLIT_ON,
"AZURE_SPEECH_REGION": request.app.state.config.TTS_AZURE_SPEECH_REGION,
+ "AZURE_SPEECH_BASE_URL": request.app.state.config.TTS_AZURE_SPEECH_BASE_URL,
"AZURE_SPEECH_OUTPUT_FORMAT": request.app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT,
},
"stt": {
@@ -202,6 +204,9 @@ async def update_audio_config(
request.app.state.config.TTS_VOICE = form_data.tts.VOICE
request.app.state.config.TTS_SPLIT_ON = form_data.tts.SPLIT_ON
request.app.state.config.TTS_AZURE_SPEECH_REGION = form_data.tts.AZURE_SPEECH_REGION
+ request.app.state.config.TTS_AZURE_SPEECH_BASE_URL = (
+ form_data.tts.AZURE_SPEECH_BASE_URL
+ )
request.app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT = (
form_data.tts.AZURE_SPEECH_OUTPUT_FORMAT
)
@@ -235,6 +240,7 @@ async def update_audio_config(
"VOICE": request.app.state.config.TTS_VOICE,
"SPLIT_ON": request.app.state.config.TTS_SPLIT_ON,
"AZURE_SPEECH_REGION": request.app.state.config.TTS_AZURE_SPEECH_REGION,
+ "AZURE_SPEECH_BASE_URL": request.app.state.config.TTS_AZURE_SPEECH_BASE_URL,
"AZURE_SPEECH_OUTPUT_FORMAT": request.app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT,
},
"stt": {
@@ -406,7 +412,8 @@ async def speech(request: Request, user=Depends(get_verified_user)):
log.exception(e)
raise HTTPException(status_code=400, detail="Invalid JSON payload")
- region = request.app.state.config.TTS_AZURE_SPEECH_REGION
+ region = request.app.state.config.TTS_AZURE_SPEECH_REGION or "eastus"
+ base_url = request.app.state.config.TTS_AZURE_SPEECH_BASE_URL
language = request.app.state.config.TTS_VOICE
locale = "-".join(request.app.state.config.TTS_VOICE.split("-")[:1])
output_format = request.app.state.config.TTS_AZURE_SPEECH_OUTPUT_FORMAT
@@ -420,7 +427,8 @@ async def speech(request: Request, user=Depends(get_verified_user)):
timeout=timeout, trust_env=True
) as session:
async with session.post(
- f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1",
+ (base_url or f"https://{region}.tts.speech.microsoft.com")
+ + "/cognitiveservices/v1",
headers={
"Ocp-Apim-Subscription-Key": request.app.state.config.TTS_API_KEY,
"Content-Type": "application/ssml+xml",
@@ -651,10 +659,10 @@ def transcribe(request: Request, file_path):
)
api_key = request.app.state.config.AUDIO_STT_AZURE_API_KEY
- region = request.app.state.config.AUDIO_STT_AZURE_REGION
+ region = request.app.state.config.AUDIO_STT_AZURE_REGION or "eastus"
locales = request.app.state.config.AUDIO_STT_AZURE_LOCALES
base_url = request.app.state.config.AUDIO_STT_AZURE_BASE_URL
- max_speakers = request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS
+ max_speakers = request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS or 3
# IF NO LOCALES, USE DEFAULTS
if len(locales) < 2:
@@ -681,12 +689,6 @@ def transcribe(request: Request, file_path):
detail="Azure API key is required for Azure STT",
)
- if not base_url and not region:
- raise HTTPException(
- status_code=400,
- detail="Azure region or base url is required for Azure STT",
- )
-
r = None
try:
# Prepare the request
@@ -702,9 +704,8 @@ def transcribe(request: Request, file_path):
}
url = (
- base_url
- or f"https://{region}.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15"
- )
+ base_url or f"https://{region}.api.cognitive.microsoft.com"
+ ) + "/speechtotext/transcriptions:transcribe?api-version=2024-11-15"
# Use context manager to ensure file is properly closed
with open(file_path, "rb") as audio_file:
@@ -939,7 +940,10 @@ def get_available_voices(request) -> dict:
elif request.app.state.config.TTS_ENGINE == "azure":
try:
region = request.app.state.config.TTS_AZURE_SPEECH_REGION
- url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
+ base_url = request.app.state.config.TTS_AZURE_SPEECH_BASE_URL
+ url = (
+ base_url or f"https://{region}.tts.speech.microsoft.com"
+ ) + "/cognitiveservices/voices/list"
headers = {
"Ocp-Apim-Subscription-Key": request.app.state.config.TTS_API_KEY
}
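The TTS and STT changes in `audio.py` share one idea: an explicitly configured base URL overrides the regional Microsoft endpoint, and the `eastus` default now applies at call time instead of living in the stored config. A sketch of that resolution for the TTS case (the function name is illustrative):

```python
def azure_speech_url(base_url: str, region: str, path: str) -> str:
    """Prefer an explicit base URL; otherwise build the regional default host."""
    region = region or "eastus"  # the patch applies this fallback at request time
    return (base_url or f"https://{region}.tts.speech.microsoft.com") + path
```

Moving the default out of the stored value is what lets an empty region coexist with a custom enterprise base URL.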
diff --git a/backend/open_webui/routers/auths.py b/backend/open_webui/routers/auths.py
index 5798d045b..acc456d20 100644
--- a/backend/open_webui/routers/auths.py
+++ b/backend/open_webui/routers/auths.py
@@ -82,28 +82,31 @@ async def get_session_user(
token = auth_token.credentials
data = decode_token(token)
- expires_at = data.get("exp")
+ expires_at = None
- if (expires_at is not None) and int(time.time()) > expires_at:
- raise HTTPException(
- status_code=status.HTTP_401_UNAUTHORIZED,
- detail=ERROR_MESSAGES.INVALID_TOKEN,
+ if data:
+ expires_at = data.get("exp")
+
+ if (expires_at is not None) and int(time.time()) > expires_at:
+ raise HTTPException(
+ status_code=status.HTTP_401_UNAUTHORIZED,
+ detail=ERROR_MESSAGES.INVALID_TOKEN,
+ )
+
+ # Set the cookie token
+ response.set_cookie(
+ key="token",
+ value=token,
+ expires=(
+ datetime.datetime.fromtimestamp(expires_at, datetime.timezone.utc)
+ if expires_at
+ else None
+ ),
+ httponly=True, # Ensures the cookie is not accessible via JavaScript
+ samesite=WEBUI_AUTH_COOKIE_SAME_SITE,
+ secure=WEBUI_AUTH_COOKIE_SECURE,
)
- # Set the cookie token
- response.set_cookie(
- key="token",
- value=token,
- expires=(
- datetime.datetime.fromtimestamp(expires_at, datetime.timezone.utc)
- if expires_at
- else None
- ),
- httponly=True, # Ensures the cookie is not accessible via JavaScript
- samesite=WEBUI_AUTH_COOKIE_SAME_SITE,
- secure=WEBUI_AUTH_COOKIE_SECURE,
- )
-
user_permissions = get_permissions(
user.id, request.app.state.config.USER_PERMISSIONS
)
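The restructured handler only reads `exp` when the token actually decoded, then derives the cookie expiry from it. A simplified sketch of that guard (the helper is hypothetical; the real route also raises 401 for expired tokens and sets `httponly`, `samesite`, and `secure` on the cookie):

```python
import datetime
from typing import Optional


def cookie_expiry(decoded: Optional[dict]) -> Optional[datetime.datetime]:
    """Derive the cookie 'expires' value from a decoded token payload, if any."""
    expires_at = decoded.get("exp") if decoded else None
    if expires_at is None:
        return None  # session cookie: no explicit expiry
    return datetime.datetime.fromtimestamp(expires_at, datetime.timezone.utc)
```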
diff --git a/backend/open_webui/routers/images.py b/backend/open_webui/routers/images.py
index 5d2f9809a..b8bb110f5 100644
--- a/backend/open_webui/routers/images.py
+++ b/backend/open_webui/routers/images.py
@@ -623,7 +623,7 @@ async def image_generations(
or request.app.state.config.IMAGE_GENERATION_ENGINE == ""
):
if form_data.model:
- set_image_model(form_data.model)
+ set_image_model(request, form_data.model)
data = {
"prompt": form_data.prompt,
diff --git a/backend/open_webui/routers/users.py b/backend/open_webui/routers/users.py
index 50014a5f6..a2bfbf665 100644
--- a/backend/open_webui/routers/users.py
+++ b/backend/open_webui/routers/users.py
@@ -21,7 +21,7 @@ from fastapi import APIRouter, Depends, HTTPException, Request, status
from pydantic import BaseModel
from open_webui.utils.auth import get_admin_user, get_password_hash, get_verified_user
-from open_webui.utils.access_control import get_permissions
+from open_webui.utils.access_control import get_permissions, has_permission
log = logging.getLogger(__name__)
@@ -205,9 +205,22 @@ async def get_user_settings_by_session_user(user=Depends(get_verified_user)):
@router.post("/user/settings/update", response_model=UserSettings)
async def update_user_settings_by_session_user(
- form_data: UserSettings, user=Depends(get_verified_user)
+ request: Request, form_data: UserSettings, user=Depends(get_verified_user)
):
- user = Users.update_user_settings_by_id(user.id, form_data.model_dump())
+ updated_user_settings = form_data.model_dump()
+ if (
+ user.role != "admin"
+        and "toolServers" in (updated_user_settings.get("ui") or {})
+ and not has_permission(
+ user.id,
+ "features.direct_tool_servers",
+ request.app.state.config.USER_PERMISSIONS,
+ )
+ ):
+ # If the user is not an admin and does not have permission to use tool servers, remove the key
+ updated_user_settings["ui"].pop("toolServers", None)
+
+ user = Users.update_user_settings_by_id(user.id, updated_user_settings)
if user:
return user.settings
else:
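The settings route now strips `ui.toolServers` for non-admins who lack the `features.direct_tool_servers` permission. Sketched as a pure function (the name and the boolean parameters are illustrative stand-ins for the `has_permission` lookup):

```python
def filter_user_settings(settings: dict, is_admin: bool, may_use_tool_servers: bool) -> dict:
    """Strip ui.toolServers unless the user is an admin or holds the permission."""
    ui = settings.get("ui") or {}
    if not is_admin and "toolServers" in ui and not may_use_tool_servers:
        ui.pop("toolServers", None)
    return settings
```

Filtering on write keeps restricted users from smuggling tool-server config into their stored settings even when the UI hides the control.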
diff --git a/backend/open_webui/utils/middleware.py b/backend/open_webui/utils/middleware.py
index 11c07cc1b..2ab73292c 100644
--- a/backend/open_webui/utils/middleware.py
+++ b/backend/open_webui/utils/middleware.py
@@ -672,6 +672,9 @@ def apply_params_to_form_data(form_data, model):
if "frequency_penalty" in params and params["frequency_penalty"] is not None:
form_data["frequency_penalty"] = params["frequency_penalty"]
+ if "presence_penalty" in params and params["presence_penalty"] is not None:
+ form_data["presence_penalty"] = params["presence_penalty"]
+
if "reasoning_effort" in params and params["reasoning_effort"] is not None:
form_data["reasoning_effort"] = params["reasoning_effort"]
@@ -1430,11 +1433,12 @@ async def process_chat_response(
if after_tag:
content_blocks[-1]["content"] = after_tag
+ tag_content_handler(
+ content_type, tags, after_tag, content_blocks
+ )
- content = after_tag
break
-
- if content and content_blocks[-1]["type"] == content_type:
+ elif content_blocks[-1]["type"] == content_type:
start_tag = content_blocks[-1]["start_tag"]
end_tag = content_blocks[-1]["end_tag"]
# Match end tag e.g.,
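The middleware fix re-invokes the tag handler on whatever follows a closing tag, so a message containing several XML-style tags is fully parsed instead of stopping after the first. The effect can be illustrated with a much simpler, non-streaming sketch (regex-based and hypothetical; the real handler works incrementally on streamed deltas):

```python
import re


def split_tagged_blocks(content: str, tag: str) -> list:
    """Split text into ('text', ...) and (tag, ...) blocks, handling repeated tags."""
    pattern = re.compile(rf"<{tag}>(.*?)</{tag}>", re.DOTALL)
    blocks, pos = [], 0
    for m in pattern.finditer(content):
        if m.start() > pos:
            blocks.append(("text", content[pos:m.start()]))
        blocks.append((tag, m.group(1)))
        pos = m.end()  # keep scanning past the close tag, as the fix does
    if pos < len(content):
        blocks.append(("text", content[pos:]))
    return blocks
```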
diff --git a/backend/open_webui/utils/oauth.py b/backend/open_webui/utils/oauth.py
index 2ad134ff7..efb287dbf 100644
--- a/backend/open_webui/utils/oauth.py
+++ b/backend/open_webui/utils/oauth.py
@@ -158,7 +158,7 @@ class OAuthManager:
nested_claims = oauth_claim.split(".")
for nested_claim in nested_claims:
claim_data = claim_data.get(nested_claim, {})
-
+
if isinstance(claim_data, list):
user_oauth_groups = claim_data
elif isinstance(claim_data, str):
diff --git a/backend/open_webui/utils/payload.py b/backend/open_webui/utils/payload.py
index 5f8aafb78..d43dfd789 100644
--- a/backend/open_webui/utils/payload.py
+++ b/backend/open_webui/utils/payload.py
@@ -59,6 +59,7 @@ def apply_model_params_to_body_openai(params: dict, form_data: dict) -> dict:
"top_p": float,
"max_tokens": int,
"frequency_penalty": float,
+ "presence_penalty": float,
"reasoning_effort": str,
"seed": lambda x: x,
"stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
diff --git a/contribution_stats.py b/contribution_stats.py
new file mode 100644
index 000000000..3caa4738e
--- /dev/null
+++ b/contribution_stats.py
@@ -0,0 +1,74 @@
+import os
+import subprocess
+from collections import Counter
+
+CONFIG_FILE_EXTENSIONS = (".json", ".yml", ".yaml", ".ini", ".conf", ".toml")
+
+
+def is_text_file(filepath):
+ # Check for binary file by scanning for null bytes.
+ try:
+ with open(filepath, "rb") as f:
+ chunk = f.read(4096)
+ if b"\0" in chunk:
+ return False
+ return True
+ except Exception:
+ return False
+
+
+def should_skip_file(path):
+ base = os.path.basename(path)
+ # Skip dotfiles and dotdirs
+ if base.startswith("."):
+ return True
+ # Skip config files by extension
+ if base.lower().endswith(CONFIG_FILE_EXTENSIONS):
+ return True
+ return False
+
+
+def get_tracked_files():
+ try:
+ output = subprocess.check_output(["git", "ls-files"], text=True)
+ files = output.strip().split("\n")
+ files = [f for f in files if f and os.path.isfile(f)]
+ return files
+ except subprocess.CalledProcessError:
+ print("Error: Are you in a git repository?")
+ return []
+
+
+def main():
+ files = get_tracked_files()
+ email_counter = Counter()
+ total_lines = 0
+
+ for file in files:
+ if should_skip_file(file):
+ continue
+ if not is_text_file(file):
+ continue
+ try:
+ blame = subprocess.check_output(
+ ["git", "blame", "-e", file], text=True, errors="replace"
+ )
+ for line in blame.splitlines():
+            # The author email is always enclosed in <> on a `git blame -e` line
+ if "<" in line and ">" in line:
+ try:
+ email = line.split("<")[1].split(">")[0].strip()
+ except Exception:
+ continue
+ email_counter[email] += 1
+ total_lines += 1
+ except subprocess.CalledProcessError:
+ continue
+
+ for email, lines in email_counter.most_common():
+ percent = (lines / total_lines * 100) if total_lines else 0
+ print(f"{email}: {lines}/{total_lines} {percent:.2f}%")
+
+
+if __name__ == "__main__":
+ main()
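The blame-parsing loop in `contribution_stats.py` can be factored into a small helper, which also makes the angle-bracket assumption explicit and testable (the helper name is illustrative):

```python
from typing import Optional


def extract_email(blame_line: str) -> Optional[str]:
    """Pull the author email out of a `git blame -e` line, or None if absent."""
    # `git blame -e` prints the author as <email> inside the annotation.
    if "<" in blame_line and ">" in blame_line:
        return blame_line.split("<")[1].split(">")[0].strip()
    return None
```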
diff --git a/package-lock.json b/package-lock.json
index b5145a90b..4ebe758d1 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -1,12 +1,12 @@
{
"name": "open-webui",
- "version": "0.6.6",
+ "version": "0.6.7",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "open-webui",
- "version": "0.6.6",
+ "version": "0.6.7",
"dependencies": {
"@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2",
diff --git a/package.json b/package.json
index ce1cb2545..efd0f2cf3 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "open-webui",
- "version": "0.6.6",
+ "version": "0.6.7",
"private": true,
"scripts": {
"dev": "npm run pyodide:fetch && vite dev --host",
diff --git a/src/lib/components/admin/Settings/Audio.svelte b/src/lib/components/admin/Settings/Audio.svelte
index 070ed9b69..960f3497a 100644
--- a/src/lib/components/admin/Settings/Audio.svelte
+++ b/src/lib/components/admin/Settings/Audio.svelte
@@ -32,6 +32,7 @@
let TTS_VOICE = '';
let TTS_SPLIT_ON: TTS_RESPONSE_SPLIT = TTS_RESPONSE_SPLIT.PUNCTUATION;
let TTS_AZURE_SPEECH_REGION = '';
+ let TTS_AZURE_SPEECH_BASE_URL = '';
let TTS_AZURE_SPEECH_OUTPUT_FORMAT = '';
let STT_OPENAI_API_BASE_URL = '';
@@ -105,6 +106,7 @@
VOICE: TTS_VOICE,
SPLIT_ON: TTS_SPLIT_ON,
AZURE_SPEECH_REGION: TTS_AZURE_SPEECH_REGION,
+ AZURE_SPEECH_BASE_URL: TTS_AZURE_SPEECH_BASE_URL,
AZURE_SPEECH_OUTPUT_FORMAT: TTS_AZURE_SPEECH_OUTPUT_FORMAT
},
stt: {
@@ -149,8 +151,9 @@
TTS_SPLIT_ON = res.tts.SPLIT_ON || TTS_RESPONSE_SPLIT.PUNCTUATION;
- TTS_AZURE_SPEECH_OUTPUT_FORMAT = res.tts.AZURE_SPEECH_OUTPUT_FORMAT;
TTS_AZURE_SPEECH_REGION = res.tts.AZURE_SPEECH_REGION;
+ TTS_AZURE_SPEECH_BASE_URL = res.tts.AZURE_SPEECH_BASE_URL;
+ TTS_AZURE_SPEECH_OUTPUT_FORMAT = res.tts.AZURE_SPEECH_OUTPUT_FORMAT;
STT_OPENAI_API_BASE_URL = res.stt.OPENAI_API_BASE_URL;
STT_OPENAI_API_KEY = res.stt.OPENAI_API_KEY;
@@ -272,16 +275,23 @@
bind:value={STT_AZURE_API_KEY}
required
/>
-
+
+
{$i18n.t('Azure Region')}
+
+
+
+
+
+
+
{$i18n.t('Language Locales')}
@@ -296,13 +306,13 @@
-
{$i18n.t('Base URL')}
+
{$i18n.t('Endpoint URL')}
@@ -468,18 +478,35 @@
{:else if TTS_ENGINE === 'azure'}