From fe4b2e46708953b92f945b70fb4828d4fc6ee9c4 Mon Sep 17 00:00:00 2001 From: writinwaters <93570324+writinwaters@users.noreply.github.com> Date: Tue, 21 May 2024 16:43:58 +0800 Subject: [PATCH] Updated Launch service from source (#856) ### What problem does this PR solve? Some nitpicking editorial updates. ### Type of change - [x] Documentation Update --- README.md | 151 +++++++++++++++++++++++++++++------------------------- 1 file changed, 81 insertions(+), 70 deletions(-) diff --git a/README.md b/README.md index 1de7b1843..98c9679bb 100644 --- a/README.md +++ b/README.md @@ -191,88 +191,99 @@ $ chmod +x ./entrypoint.sh $ docker compose up -d ``` -## 🛠️ Launch Service from Source +## 🛠️ Launch service from source -To launch the service from source, please follow these steps: +To launch the service from source: -1. Clone the repository -```bash -$ git clone https://github.com/infiniflow/ragflow.git -$ cd ragflow/ -``` +1. Clone the repository: -2. Create a virtual environment (ensure Anaconda or Miniconda is installed) -```bash -$ conda create -n ragflow python=3.11.0 -$ conda activate ragflow -$ pip install -r requirements.txt -``` -If CUDA version is greater than 12.0, execute the following additional commands: -```bash -$ pip uninstall -y onnxruntime-gpu -$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/ -``` + ```bash + $ git clone https://github.com/infiniflow/ragflow.git + $ cd ragflow/ + ``` -3. Copy the entry script and configure environment variables -```bash -$ cp docker/entrypoint.sh . -$ vi entrypoint.sh -``` -Use the following commands to obtain the Python path and the ragflow project path: -```bash -$ which python -$ pwd -``` +2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed: -Set the output of `which python` as the value for `PY` and the output of `pwd` as the value for `PYTHONPATH`. 
+   ```bash
+   $ conda create -n ragflow python=3.11.0
+   $ conda activate ragflow
+   $ pip install -r requirements.txt
+   ```
+
+   ```bash
+   # If your CUDA version is higher than 12.0, run the following additional commands:
+   $ pip uninstall -y onnxruntime-gpu
+   $ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
+   ```
 
-If `LD_LIBRARY_PATH` is already configured, it can be commented out. 
 
+3. Copy the entry script and configure environment variables:
 
-```bash
-# Adjust configurations according to your actual situation; the two export commands are newly added.
-PY=${PY}
-export PYTHONPATH=${PYTHONPATH}
-# Optional: Add Hugging Face mirror
-export HF_ENDPOINT=https://hf-mirror.com
-```
 
+   ```bash
+   # Get the Python path:
+   $ which python
+   # Get the ragflow project path:
+   $ pwd
+   ```
+
+   ```bash
+   $ cp docker/entrypoint.sh .
+   $ vi entrypoint.sh
+   ```
 
-4. Start the base services
-```bash
-$ cd docker
-$ docker compose -f docker-compose-base.yml up -d
-```
 
+   ```bash
+   # Adjust the configurations below to your own environment (the two export commands are newly added):
+   # - Assign the result of `which python` to `PY`.
+   # - Assign the result of `pwd` to `PYTHONPATH`.
+   # - Comment out `LD_LIBRARY_PATH` if it is configured.
+   # - Optional: Add Hugging Face mirror.
+   PY=${PY}
+   export PYTHONPATH=${PYTHONPATH}
+   export HF_ENDPOINT=https://hf-mirror.com
+   ```
 
-5. Check the configuration files
-Ensure that the settings in **docker/.env** match those in **conf/service_conf.yaml**. The IP addresses and ports for related services in **service_conf.yaml** should be changed to the local machine IP and ports exposed by the container. 
 
+4. Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):
 
-6. Launch the service
-```bash
-$ chmod +x ./entrypoint.sh
-$ bash ./entrypoint.sh
-```
 
+   ```bash
+   $ cd docker
+   $ docker compose -f docker-compose-base.yml up -d
+   ```
 
-7. 
Start the WebUI service -```bash -$ cd web -$ npm install --registry=https://registry.npmmirror.com --force -$ vim .umirc.ts -# Modify proxy.target to 127.0.0.1:9380 -$ npm run dev -``` +5. Check the configuration files, ensuring that: -8. Deploy the WebUI service -```bash -$ cd web -$ npm install --registry=https://registry.npmmirror.com --force -$ umi build -$ mkdir -p /ragflow/web -$ cp -r dist /ragflow/web -$ apt install nginx -y -$ cp ../docker/nginx/proxy.conf /etc/nginx -$ cp ../docker/nginx/nginx.conf /etc/nginx -$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d -$ systemctl start nginx -``` + - The settings in **docker/.env** match those in **conf/service_conf.yaml**. + - The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and ports exposed by the container. + +6. Launch the RAGFlow backend service: + + ```bash + $ chmod +x ./entrypoint.sh + $ bash ./entrypoint.sh + ``` + +7. Launch the frontend service: + + ```bash + $ cd web + $ npm install --registry=https://registry.npmmirror.com --force + $ vim .umirc.ts + # Update proxy.target to 127.0.0.1:9380 + $ npm run dev + ``` + +8. Deploy the frontend service: + + ```bash + $ cd web + $ npm install --registry=https://registry.npmmirror.com --force + $ umi build + $ mkdir -p /ragflow/web + $ cp -r dist /ragflow/web + $ apt install nginx -y + $ cp ../docker/nginx/proxy.conf /etc/nginx + $ cp ../docker/nginx/nginx.conf /etc/nginx + $ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d + $ systemctl start nginx + ``` ## 📚 Documentation
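The environment-variable edits that step 3 asks you to make in `entrypoint.sh` can be sketched as a small standalone snippet. This is a minimal sketch, not part of the patch: it derives the values automatically rather than pasting them by hand, and the fallback from `python3` to `python` is an assumption about the host, not something the README specifies.

```shell
# Hypothetical helper showing what the entrypoint.sh edits amount to.

# PY: path of the Python interpreter (the README says to paste the
# literal output of `which python` here).
PY="$(command -v python3 || command -v python || true)"

# PYTHONPATH: the ragflow project root (the output of `pwd` when run
# from the repository root).
export PYTHONPATH="$(pwd)"

# Optional Hugging Face mirror, exactly as in the README.
export HF_ENDPOINT=https://hf-mirror.com

echo "PY=${PY}"
echo "PYTHONPATH=${PYTHONPATH}"
echo "HF_ENDPOINT=${HF_ENDPOINT}"
```

In practice the README expects the literal outputs of `which python` and `pwd` to be written into `entrypoint.sh`; the snippet only illustrates what those two assignments and the optional mirror export amount to.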