From d5b8d8e647236ac26b9acde4a60ffc53507eb9aa Mon Sep 17 00:00:00 2001
From: writinwaters <93570324+writinwaters@users.noreply.github.com>
Date: Wed, 22 May 2024 12:45:34 +0800
Subject: [PATCH] fixed a format issue for docusaurus publication (#871)

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update
---
 docs/guides/deploy_local_llm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/guides/deploy_local_llm.md b/docs/guides/deploy_local_llm.md
index 0a14bb9b3..f61437b60 100644
--- a/docs/guides/deploy_local_llm.md
+++ b/docs/guides/deploy_local_llm.md
@@ -56,7 +56,7 @@ $ xinference-local --host 0.0.0.0 --port 9997
 ### Launch Xinference
 
 Decide which LLM you want to deploy ([here's a list for supported LLM](https://inference.readthedocs.io/en/latest/models/builtin/)), say, **mistral**.
-Execute the following command to launch the model, remember to replace ${quantization} with your chosen quantization method from the options listed above:
+Execute the following command to launch the model, remember to replace `${quantization}` with your chosen quantization method from the options listed above:
 ```bash
 $ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
 ```