## Summary
Fixed grammar errors and improved clarity in prompt templates throughout
`rag/prompts.py`.
## Changes Made
- **Fixed incomplete sentence**: `"If the user's latest question is
completely, don't do anything"` → `"If the user's latest question is
already complete, don't do anything"`
- **Improved phrasing**: `"of like [ID:i]"` → `"such as [ID:i]"`
- **Added missing articles**: `"give top 3"` → `"give the top 3"`
- **Fixed prepositions**: `"in language of"` → `"in the same language
as"`
- **Corrected spelling**: `"Jappanese"` → `"Japanese"`
- **Standardized formatting**: Consistent role descriptions and
punctuation
## Impact
These changes improve prompt readability and should make instructions
clearer for the underlying language models.
## Test Plan
- [x] Verified changes maintain original prompt functionality
- [x] No breaking changes to prompt structure or expected outputs
Co-authored-by: Adrian Altermatt <adrian.altermatt@fgcz.uzh.ch>
### What problem does this PR solve?
Change the citation mark to [ID:n]; it's easier for LLMs to follow the
instruction. :) #7904
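For context, here is a minimal sketch of how markers in the `[ID:n]` format might be parsed out of a model response; the regex and function name are illustrative, not RAGFlow's actual code:

```python
import re

# Matches citation markers of the form [ID:0], [ID:12], ...
CITE_RE = re.compile(r"\[ID:(\d+)\]")

def extract_citations(answer: str) -> list[int]:
    """Return the unique chunk indices cited in `answer`, in order."""
    seen: list[int] = []
    for match in CITE_RE.finditer(answer):
        idx = int(match.group(1))
        if idx not in seen:
            seen.append(idx)
    return seen

print(extract_citations("Retrieve first [ID:0], then rerank [ID:2][ID:0]."))
# -> [0, 2]
```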
### Type of change
- [x] Refactoring
### What problem does this PR solve?
Some models force thinking, resulting in the absence of the `<think>`
tag in the returned content.
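A minimal sketch of the kind of tolerant parsing this implies, assuming the reasoning is delimited by `<think>...</think>`; the helper name is hypothetical and this is not the project's actual fix:

```python
def split_thinking(content: str) -> tuple[str, str]:
    """Split a response into (reasoning, answer), tolerating a missing
    opening <think> tag from models that force thinking."""
    if "</think>" in content:
        reasoning, _, answer = content.partition("</think>")
        # Drop the opening tag if present; absent for forced-thinking models.
        reasoning = reasoning.removeprefix("<think>").strip()
        return reasoning, answer.strip()
    return "", content.strip()

print(split_thinking("first consider...</think>The answer is 42."))
# -> ('first consider...', 'The answer is 42.')
```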
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Add a fallback for bad citation output. #6948
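One plausible shape for such a fallback, assuming the canonical `[ID:n]` marker from the earlier PR; the malformed variants handled here are assumptions, not necessarily the exact cases #6948 covers:

```python
import re

# Rewrites malformed variants such as "(ID: 3)", "[id:7]" or "[ID：5]"
# (full-width colon) into the canonical [ID:n] form.
BAD_CITE_RE = re.compile(r"[\[\(]\s*ID\s*[:：]\s*(\d+)\s*[\]\)]", re.IGNORECASE)

def repair_citations(answer: str) -> str:
    return BAD_CITE_RE.sub(lambda m: f"[ID:{m.group(1)}]", answer)

print(repair_citations("See (ID: 3) and [id:7]."))
# -> "See [ID:3] and [ID:7]."
```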
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- Return three similarity scores in the chat completion's `reference`
field. This gives the user more transparency and more flexibility to
display or rerank the references when needed.
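An illustrative shape for a `reference` entry after this change; the three score field names below are inferred from the description and may not match the real payload exactly:

```python
reference_chunk = {
    "content": "RAGFlow is an open-source RAG engine ...",
    "document_id": "doc_123",
    "similarity": 0.82,         # combined score used for ranking
    "vector_similarity": 0.88,  # dense-embedding similarity
    "term_similarity": 0.74,    # keyword/term-based similarity
}

# With all three scores exposed, a client can rerank the displayed
# references by whichever score it prefers, e.g.:
chunks = [reference_chunk]
chunks.sort(key=lambda c: c["vector_similarity"], reverse=True)
```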
Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
### What problem does this PR solve?
Add a VLM-boosted PDF parser, used when a VLM is configured.
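A self-contained sketch of the dispatch this implies, with hypothetical class names standing in for RAGFlow's actual parsers:

```python
class DefaultPdfParser:
    """Plain text-layout parsing, used when no VLM is configured."""
    def parse(self, path: str) -> str:
        return f"text-layout parse of {path}"

class VisionPdfParser:
    """Parses rendered page images with a vision-language model."""
    def __init__(self, vlm: str):
        self.vlm = vlm

    def parse(self, path: str) -> str:
        return f"{self.vlm} parse of {path}"

def pick_pdf_parser(vlm: str | None = None):
    # Boost parsing with the VLM only when one is set.
    return VisionPdfParser(vlm) if vlm else DefaultPdfParser()

print(pick_pdf_parser("gpt-4o").parse("report.pdf"))
```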
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Add a vision-LLM PDF parser.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
…ions
### What problem does this PR solve?
This PR fixes an issue where the application repeatedly read the
llm_factories.json file from disk in multiple places, which could lead
to "Too many open files" errors under high load. The fix centralizes
the file read in the settings.py module and stores the data in a
global variable that other modules can access.
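A minimal sketch of the pattern described above, with illustrative names; the real settings.py differs in detail:

```python
# settings.py
import json

LLM_FACTORIES = None  # loaded once per process

def init_settings(path: str = "conf/llm_factories.json") -> None:
    global LLM_FACTORIES
    with open(path, encoding="utf-8") as f:
        LLM_FACTORIES = json.load(f)

# Consumers do `import settings` and read `settings.LLM_FACTORIES`
# instead of re-opening the file, so it is opened once at startup
# rather than on every request.
```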
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):