# Ollama
One-click deployment of local LLMs using Ollama.
## Install
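The install step is terse; assuming the Docker deployment that the later `docker exec` command implies, a typical way to start the server is Ollama's official `ollama/ollama` image (image name and port taken from Ollama's Docker instructions; treat this as a sketch, not the only install path):

```shell
# Sketch: start Ollama in Docker (assumes Docker is installed).
# 11434 is Ollama's default API port; the named volume persists pulled models.
OLLAMA_IMAGE="ollama/ollama"
docker run -d \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  "$OLLAMA_IMAGE" \
  || echo "docker run failed - is Docker installed and running?"
```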
## Launch Ollama
Decide which LLM you want to deploy (see Ollama's list of supported LLMs), for example, mistral:
$ ollama run mistral
Or, if Ollama runs in Docker:
$ docker exec -it ollama ollama run mistral
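Before wiring Ollama into RAGFlow, it can help to confirm the API is actually reachable. A minimal sketch, assuming Ollama's default port 11434 and a local host (adjust `BASE_URL` if the service runs elsewhere); `/api/tags` lists the locally available models:

```shell
# Sanity check: is the Ollama HTTP API up? (default base URL assumed)
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -s --max-time 3 "${BASE_URL}/api/tags" >/dev/null 2>&1; then
  echo "Ollama is reachable at ${BASE_URL}"
else
  echo "Ollama is NOT reachable at ${BASE_URL} - check the service and port"
fi
```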
## Use Ollama in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Ollama'.
- Base URL: Enter the base URL where the Ollama service is accessible, e.g., `http://<your-ollama-host>:11434`.
- Use Ollama Models.
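As a final check that the deployed model answers on the same endpoint RAGFlow's Base URL points at, a one-off request can be sent to Ollama's `/api/generate` endpoint. The host and model name below are assumptions from the earlier steps; substitute your own:

```shell
# One-off, non-streaming generation request against the Ollama API.
BASE_URL="http://localhost:11434"   # same value entered as Base URL in RAGFlow
MODEL="mistral"                     # the model pulled with `ollama run`
curl -s --max-time 10 "${BASE_URL}/api/generate" \
  -d "{\"model\": \"${MODEL}\", \"prompt\": \"Say hello\", \"stream\": false}" \
  || echo "request failed - is Ollama running and the model pulled?"
```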