From b467702b87461543c75013207e9adc6d20dcc01d Mon Sep 17 00:00:00 2001
From: Georgi Gerganov
Date: Wed, 29 Mar 2023 19:38:31 +0300
Subject: [PATCH] readme : fix typos

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index c2323f4..e30452e 100644
--- a/README.md
+++ b/README.md
@@ -229,13 +229,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 ### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
 
 - Obtain the `gpt4all-lora-quantized.bin` model
-- It is distributed in the old `ggml` format which is not obsoleted. So you have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
+- It is distributed in the old `ggml` format which is now obsoleted
+- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
 
   ```bash
   python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
   ```
 
-- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models. The original model is stored in the same folder with a suffix `.orig`
+- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models
+- The original model is saved in the same folder with a suffix `.orig`
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
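
For reference, a minimal sketch of the workflow described by the patched README section, assuming the repository has already been built and the model and tokenizer files sit at the paths used in the diff above; the final `./main` invocation is the usual llama.cpp usage and is not part of this patch:

```bash
# Convert the GPT4All model from the old ggml format to the new one
# (same command as in the README section being patched)
python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model

# The conversion rewrites the file in place and keeps the original with a `.orig` suffix.
# The converted model can then be used like any other model (standard llama.cpp invocation,
# not part of this patch):
./main -m ./models/gpt4all-7B/gpt4all-lora-quantized.bin -n 128 -p "Tell me about llamas."
```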