### What problem does this PR solve?
The `/api/v1/chats` API endpoint was broken; any GET request got the
following response:
```
{"code":100,"data":null,"message":"TypeError(\"'int' object is not callable\")"}
```
With this log on the ragflow-server side:
```
2025-03-07 14:36:26,297 ERROR 20 'int' object is not callable
Traceback (most recent call last):
File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/ragflow/api/utils/api_utils.py", line 303, in decorated_function
return func(*args, **kwargs)
File "/ragflow/api/apps/sdk/chat.py", line 323, in list_chat
logging.WARN(f"Don't exist the kb {kb_id}")
TypeError: 'int' object is not callable
2025-03-07 14:36:26,298 INFO 20 172.18.0.6 - - [07/Mar/2025 14:36:26] "GET /api/v1/chats HTTP/1.1" 200 -
```
This was caused by the incorrect use of `logging.WARN` as a method: it is a
log-level constant (an integer), not a callable. The correct method is
`logging.warning()`. This PR fixes that and also rewrites the log message to
be grammatically correct.
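A minimal sketch of the change (the rewritten warning text is illustrative):
```python
import logging

kb_id = "example-kb-id"  # placeholder value for illustration

# Before: logging.WARN is the integer log level 30, so "calling" it raises
#   TypeError: 'int' object is not callable
# logging.WARN(f"Don't exist the kb {kb_id}")

# After: use the module-level logging.warning() function.
logging.warning(f"Knowledge base {kb_id} does not exist.")
```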
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix: when starting from source code on Windows (outside the Docker
environment), the following error is reported: `UnicodeDecodeError: 'gbk'
codec can't decode byte 0xad in position 5: illegal multibyte sequence`.
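A minimal sketch of the usual fix for this class of error, assuming a text file was being opened without an explicit encoding (on a Chinese-locale Windows machine the default codec is GBK):
```python
# Hypothetical example: pass encoding="utf-8" explicitly instead of relying on
# the platform default (GBK on Windows), which cannot decode UTF-8 content.
with open("conf/service_conf.yaml", "r", encoding="utf-8") as f:  # path is illustrative
    content = f.read()
```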
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: tangyu <1@1.com>
### What problem does this PR solve?
Fixed the issue where deletion stops when an invalid dataset ID is encountered
(#5760).
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
1. **Issue**: When calling `list_agent_session` via the HTTP API, users may
only need to display conversation messages and do not want to see the
associated DSL, which can be very large. This PR therefore adds a control
option that determines whether the DSL is returned, with the default being to
return it (see the hedged request sketch after this list).
2. **Documentation Discrepancy**: In the HTTP API documentation, under
"List agent sessions," the "Response" section states that the "data"
field is a dictionary when "success" is returned. However, the actual
returned data is a list. This discrepancy has been corrected.
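A sketch of how the new option might be used, assuming it is exposed as a boolean `dsl` query parameter (the parameter name, URL path, and use of the `requests` package are assumptions):
```python
import requests  # third-party HTTP client, assumed to be available

# Hypothetical request: list an agent's sessions without the (potentially large) DSL.
resp = requests.get(
    "https://ragflow_address.com/api/v1/agents/<agent_id>/sessions",
    headers={"Authorization": "Bearer <RAGFLOW_API_KEY>"},
    params={"dsl": "false"},
)
print(resp.json())
```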
The `dialog_id` field was inconsistently defined:
- In the `migrate_db()` function, it was set to `null=True`.
- In the model class, it was defined as `null=False`.
This inconsistency caused an issue during the initial deployment where
the database table did not allow `dialog_id` to be null. As a result,
calling `APITokenService.save(**obj)` in `system_app.py` raised the
following error:
```
peewee.IntegrityError: null value in column "dialog_id" violates not-null constraint
```
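A minimal sketch of the nullable-field definition, assuming the fix is to make the model match the `null=True` used in `migrate_db()` (everything other than the `dialog_id` field name is illustrative):
```python
from peewee import CharField, Model, SqliteDatabase

db = SqliteDatabase(":memory:")  # toy database for illustration

class APIToken(Model):
    # Declared nullable so the model agrees with the null=True definition
    # used in migrate_db(); max_length is illustrative.
    dialog_id = CharField(max_length=32, null=True, index=True)

    class Meta:
        database = db

db.create_tables([APIToken])
APIToken.create(dialog_id=None)  # no longer violates a NOT NULL constraint
```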
### What problem does this PR solve?
Error: peewee.IntegrityError: null value in column "dialog_id" violates
not-null constraint
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Refactored DocumentService.update_progress
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix the issue where retrieving a user's APIToken incorrectly returned the team
owner's APIToken when the user was a member of another user's team.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix possible loss of part of the last stream chunk's content.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Add session deletion support for agents in the HTTP and Python APIs.
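A sketch of what the HTTP call might look like, assuming a `DELETE` endpoint that accepts a list of session IDs (the URL path, payload shape, and use of the `requests` package are assumptions):
```python
import requests  # third-party HTTP client, assumed to be available

# Hypothetical request: delete two sessions belonging to an agent.
resp = requests.delete(
    "https://ragflow_address.com/api/v1/agents/<agent_id>/sessions",
    headers={"Authorization": "Bearer <RAGFLOW_API_KEY>"},
    json={"ids": ["<session_id_1>", "<session_id_2>"]},
)
print(resp.json())
```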
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
This pull request fixes a bug that prevented users with certain email
addresses from signing up. Addresses with the following TLDs were rejected
with 'invalid email address' errors (a hedged sketch of this class of fix
follows the list):
- .museum
- .software
- .photography
- .technology
- .marketing
- .education
- .international
- .community
- .construction
- .government
- .consulting
- ....
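A minimal sketch of this class of fix, assuming the root cause was a validation regex that capped the TLD length (the pattern below is illustrative, not the project's actual one):
```python
import re

# Allow TLDs of two or more letters instead of capping the length.
EMAIL_RE = re.compile(r"^[\w.+-]+@(?:[A-Za-z0-9-]+\.)+[A-Za-z]{2,}$")

for addr in ("user@example.museum", "user@example.photography", "user@example.com"):
    print(addr, bool(EMAIL_RE.match(addr)))
```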
### Type of change
- [X] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Close #5277 by making sure the file is closed.
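A minimal sketch of the technique, assuming the leak came from a file handle that was opened but never closed (the file name is illustrative):
```python
# A context manager guarantees the handle is closed even if an exception
# is raised while reading, so handles are not leaked.
with open("uploaded_document.pdf", "rb") as f:
    data = f.read()
```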
### Type of change
- [x] Performance Improvement
---------
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
### What problem does this PR solve?
This patch adds a signal handler for Ctrl+C so the process can exit cleanly;
because the code base starts daemon threads, it could not exit cleanly once
running.
How to reproduce:
1. `docker-compose -f docker/docker-compose-base.yml up`
2. In another window: `bash docker/launch_backend_service.sh`
3. Stop 1 first.
4. Try to stop 2: the two threads cannot exit and must be terminated with `kill pid`.
This patch fixes that, and should also fix most of the related reports in the
issue tracker.
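A minimal sketch of the idea, assuming SIGINT/SIGTERM handlers that exit the main thread (daemon threads die with it); the handler details are illustrative:
```python
import signal
import sys

def _graceful_exit(signum, frame):
    # Daemon threads are terminated when the main thread exits, so a plain
    # sys.exit() here is enough for a clean Ctrl+C shutdown.
    print(f"Received signal {signum}, shutting down...")
    sys.exit(0)

signal.signal(signal.SIGINT, _graceful_exit)   # Ctrl+C
signal.signal(signal.SIGTERM, _graceful_exit)  # kill / docker stop
```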
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Fixed the `[DONE]` terminator in the OpenAI-compatible stream.
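For context, OpenAI-style SSE streams finish with a literal `data: [DONE]` event. A minimal sketch of emitting it (the generator below is illustrative, not the project's actual handler):
```python
import json

def stream_chat_completion(chunks):
    # Yield each chunk as a server-sent event, then the OpenAI-style terminator.
    for chunk in chunks:
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"
```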
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
Add OpenAI-compatible HTTP and Python API reference.
### Type of change
- [x] Documentation Update
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
### What problem does this PR solve?
Fix this bug: https://github.com/infiniflow/ragflow/issues/5368
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: wenju.li <wenju.li@deepctr.cn>
### What problem does this PR solve?
Added an OpenAI-like completion API; related to #4672 and #4705.
This function allows users to interact with a model to get responses
based on a series of messages.
If `stream` is set to True, the response will be streamed in chunks,
mimicking the OpenAI-style API.
#### Example usage:
```bash
curl -X POST https://ragflow_address.com/api/v1/chats_openai/<chat_id>/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $RAGFLOW_API_KEY" \
    -d '{
          "model": "model",
          "messages": [{"role": "user", "content": "Say this is a test!"}],
          "stream": true
        }'
```
Alternatively, you can use Python's `OpenAI` client:
```python
from openai import OpenAI

model = "model"
client = OpenAI(
    api_key="ragflow-api-key",
    base_url="http://ragflow_address/api/v1/chats_openai/<chat_id>",
)
completion = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who you are?"},
        {"role": "assistant", "content": "I am an AI assistant named..."},
        {"role": "user", "content": "Can you tell me how to install neovim"},
    ],
    stream=True,
)

stream = True
if stream:
    for chunk in completion:
        print(chunk)
else:
    print(completion.choices[0].message.content)
```
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### Related Issues
Related to #4672, #4705
### What problem does this PR solve?
As issue #3268 mentions, a "Chunk not found!" exception occurs, especially
when knowledge bases are shared within a team.
### The reason for this bug
"tenants" are the people on `current_user`'s team, including the team owner.
The old code only checked the first tenant, `tenants[0]`, which caused an
error whenever anyone edited a chunk that is not in `tenants[0]`'s knowledge
base.
My modification iterates over all tenants and retrieves each one's knowledge
bases, without introducing new errors. A minimal sketch of the idea follows.
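The sketch below shows only the iteration pattern; the function, helper, and data shapes are illustrative, not the project's actual API:
```python
# Instead of checking only tenants[0], iterate over every tenant on the team.
def find_chunk(tenants, chunk_id, lookup):
    """`lookup(tenant_id, chunk_id)` returns the chunk or None (assumed helper)."""
    for tenant in tenants:
        chunk = lookup(tenant["tenant_id"], chunk_id)
        if chunk is not None:
            return chunk
    raise LookupError("Chunk not found!")

# Toy usage: a dict stands in for the real per-tenant retrieval call.
store = {("tenant-b", "c1"): {"id": "c1", "text": "hello"}}
tenants = [{"tenant_id": "tenant-a"}, {"tenant_id": "tenant-b"}]
print(find_chunk(tenants, "c1", lambda t, c: store.get((t, c))))
```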
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
This pull request includes changes to the `api/settings.py` and
`docker/service_conf.yaml.template` files to add support for default
models in the LLM configuration (especially for LIGHTEN builds). The most
important changes include adding default model configurations and
updating the initialization settings to use these defaults.
For example, with the following configuration, Bedrock will be enabled by
default with Claude for chat and Titan for embeddings:
```yaml
user_default_llm:
  factory: 'Bedrock'
  api_key: '{}'
  base_url: ''
  default_models:
    chat_model: 'anthropic.claude-3-5-sonnet-20240620-v1:0'
    embedding_model: 'amazon.titan-embed-text-v2:0'
    rerank_model: ''
    asr_model: ''
    image2text_model: ''
```
### Type of change
- [X] New Feature (non-breaking change which adds functionality)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
This PR adds support for downloading models from ModelScope (a hedged download
sketch follows). The main modifications are as follows:
- New Feature (non-breaking change which adds functionality)
- Documentation Update
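A minimal sketch of downloading a model with the ModelScope SDK (the model ID is illustrative, and this is not necessarily how the project wires it in):
```python
from modelscope import snapshot_download  # assumes the modelscope package is installed

# Download (or reuse a cached copy of) a model from ModelScope.
model_dir = snapshot_download("BAAI/bge-large-zh-v1.5")  # illustrative model ID
print("Model files are in:", model_dir)
```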
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>