Fix the value issue of anthropic (#3351)

### What problem does this PR solve?

This pull request fixes the issue mentioned in
https://github.com/infiniflow/ragflow/issues/3263.

1. The response should be parsed as a dict, so that the following line can read values
from it without failing:
`ans = response["content"][0]["text"]`
2. The API model ```claude-instant-1.2``` has been retired (see
[model-deprecations](https://docs.anthropic.com/en/docs/resources/model-deprecations))
and now triggers errors in the code, so I removed it from the
conf/llm_factories.json file and replaced it with the latest API model,
```claude-3-5-sonnet-20241022```.
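The failure in point 1 can be sketched as follows. This is a minimal illustration using a hypothetical payload shaped like an Anthropic Messages API reply; the actual fix calls `.to_dict()` on the SDK's response object instead of `.json()`:

```python
import json

# Hypothetical serialized response, shaped like an Anthropic Messages API
# reply. Calling .json() on the SDK response model yields a JSON *string*,
# so subscripting it with response["content"][0]["text"] fails.
raw = '{"content": [{"text": "Hello!"}], "stop_reason": "end_turn"}'

# Parsing into a dict (analogous to what .to_dict() returns) lets the
# subscript chain succeed.
response = json.loads(raw)
ans = response["content"][0]["text"]
print(ans)  # Hello!
```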



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chenhaodong <chenhaodong@ctrlvideo.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
This commit is contained in:
shijiefengjun 2024-11-13 16:13:52 +08:00 committed by GitHub
parent ccf189cb7f
commit 632b23486f
2 changed files with 3 additions and 3 deletions

conf/llm_factories.json

@@ -2371,8 +2371,8 @@
       "model_type": "chat"
     },
     {
-      "llm_name": "claude-instant-1.2",
-      "tags": "LLM,CHAT,100k",
+      "llm_name": "claude-3-5-sonnet-20241022",
+      "tags": "LLM,CHAT,200k",
       "max_tokens": 102400,
       "model_type": "chat"
     }


@@ -1260,7 +1260,7 @@ class AnthropicChat(Base):
             system=self.system,
             stream=False,
             **gen_conf,
-        ).json()
+        ).to_dict()
        ans = response["content"][0]["text"]
        if response["stop_reason"] == "max_tokens":
            ans += (