ragflow/docker/launch_backend_service.sh
Sanan bin Tahir 62a9afd382
Change launch backend script to handle errors gracefully (#3334)
### What problem does this PR solve?

The `launch_backend_service.sh` script enters infinite loops for both
the task executors and the backend server. When an error occurs in any
of these processes, the script continuously restarts them without
properly handling termination signals. As a result, the script also ignores
interrupts, produces a continuous stream of error messages, and is difficult
to exit gracefully.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

### Explanation of Modifications

1. **Signal Trapping with `trap`:** 
- The `trap cleanup SIGINT SIGTERM` line ensures that when a `SIGINT` or
`SIGTERM` signal is received, the cleanup function is invoked.
- The `cleanup` function sets the `STOP` flag to `true`, iterates
through all child process IDs stored in the `PIDS` array, and sends a
`kill` signal to each process to terminate them gracefully.
2. **Retry Limits:**
- Introduced a `MAX_RETRIES` variable to limit the number of restart
attempts for both `task_executor.py` and `ragflow_server.py` (a condensed
sketch of the overall pattern follows this list).
- The loops now check if the retry count has reached the maximum limit.
If so, they invoke the `cleanup` function to terminate all processes and
exit the script.
3. **Process Tracking with `PIDS` Array:**
- After launching each background process (`task_exe` and `run_server`),
their Process IDs (PIDs) are stored in the `PIDS` array.
- This allows the `cleanup` function to terminate all child processes
effectively when needed.
4. **Graceful Shutdown:**
- When the `cleanup` function is called, it iterates over all child PIDs
and sends a termination signal (`kill`) to each, ensuring that all
subprocesses are stopped before the script exits.
5. **Logging Enhancements:**
- Added `echo` statements to provide clearer logs about the state of
each process, including attempts, successes, failures, and retries.
6. **Exit on Successful Completion:**
- If `ragflow_server.py` or a `task_executor.py` process exits with a
success code (0), the loop breaks, preventing unnecessary retries.
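
Taken together, the items above reduce to the pattern sketched below. This is an illustrative, condensed version rather than the actual script (which follows in full); `my_worker_command` is a placeholder for the real `task_executor.py` / `ragflow_server.py` invocations.

```bash
#!/bin/bash
MAX_RETRIES=5
STOP=false
PIDS=()

# On SIGINT/SIGTERM: stop the retry loops and terminate tracked children
cleanup() {
  STOP=true
  for pid in "${PIDS[@]}"; do
    kill "$pid" 2>/dev/null
  done
  exit 0
}
trap cleanup SIGINT SIGTERM

# Supervise one process with a bounded number of restarts;
# 'my_worker_command' is a placeholder, not part of the actual script
worker() {
  local retries=0
  while ! $STOP && [ "$retries" -lt "$MAX_RETRIES" ]; do
    my_worker_command && break   # exit code 0: no restart needed
    retries=$((retries + 1))
    sleep 2
  done
}

worker &
PIDS+=($!)   # track the child so cleanup can kill it
wait
```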

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-12 15:51:38 +08:00

#!/bin/bash
# Exit immediately if a command exits with a non-zero status
set -e
# Unset HTTP proxies that might be set by Docker daemon
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/
PY=python3
# Set default number of workers if WS is not set or less than 1
if [[ -z "$WS" || $WS -lt 1 ]]; then
  WS=1
fi
# Maximum number of retries for each task executor and server
MAX_RETRIES=5
# Flag to control termination
STOP=false
# Array to keep track of child PIDs
PIDS=()
# Function to handle termination signals
cleanup() {
  echo "Termination signal received. Shutting down..."
  STOP=true
  # Terminate all child processes
  for pid in "${PIDS[@]}"; do
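    # kill -0 sends no signal; it only checks whether the process is still running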
    if kill -0 "$pid" 2>/dev/null; then
      echo "Killing process $pid"
      kill "$pid"
    fi
  done
  exit 0
}
# Trap SIGINT and SIGTERM to invoke cleanup
trap cleanup SIGINT SIGTERM
# Function to execute task_executor with retry logic
task_exe(){
  local task_id=$1
  local retry_count=0
  while ! $STOP && [ $retry_count -lt $MAX_RETRIES ]; do
    echo "Starting task_executor.py for task $task_id (Attempt $((retry_count+1)))"
    # '|| EXIT_CODE=$?' captures a non-zero status without letting 'set -e'
    # abort the subshell before the retry logic can run
    $PY rag/svr/task_executor.py "$task_id" && EXIT_CODE=0 || EXIT_CODE=$?
    if [ $EXIT_CODE -eq 0 ]; then
      echo "task_executor.py for task $task_id exited successfully."
      break
    else
      echo "task_executor.py for task $task_id failed with exit code $EXIT_CODE. Retrying..." >&2
      retry_count=$((retry_count + 1))
      sleep 2
    fi
  done
  if [ $retry_count -ge $MAX_RETRIES ]; then
    echo "task_executor.py for task $task_id failed after $MAX_RETRIES attempts. Exiting..." >&2
    cleanup
  fi
}
# Function to execute ragflow_server with retry logic
run_server(){
  local retry_count=0
  while ! $STOP && [ $retry_count -lt $MAX_RETRIES ]; do
    echo "Starting ragflow_server.py (Attempt $((retry_count+1)))"
    # Same guard as in task_exe: keep 'set -e' from aborting before the
    # exit code is checked
    $PY api/ragflow_server.py && EXIT_CODE=0 || EXIT_CODE=$?
    if [ $EXIT_CODE -eq 0 ]; then
      echo "ragflow_server.py exited successfully."
      break
    else
      echo "ragflow_server.py failed with exit code $EXIT_CODE. Retrying..." >&2
      retry_count=$((retry_count + 1))
      sleep 2
    fi
  done
  if [ $retry_count -ge $MAX_RETRIES ]; then
    echo "ragflow_server.py failed after $MAX_RETRIES attempts. Exiting..." >&2
    cleanup
  fi
}
# Start task executors
for ((i=0;i<WS;i++))
do
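  # Each task executor runs in its own background subshell; its PID is recorded for cleanup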
  task_exe "$i" &
  PIDS+=($!)
done
# Start the main server
run_server &
PIDS+=($!)
# Wait for all background processes to finish
wait