reverted quantization change, seemed to cause unwanted behaviors
This commit is contained in:
parent da2763dee4
commit 7b901310bd
@@ -14,9 +14,9 @@ You can install it with this command:
 curl https://ollama.ai/install.sh | sh
 ```
 
-Once it's all set up, you'll need to download the model. You can play with the available ones and see what works best for you, but for this bot, zephyr:7b-beta-q6_K seems to work best of the ones I've tested. To install:
+Once it's all set up, you'll need to download the model. You can play with the available ones and see what works best for you, but for this bot, zephyr:7b-beta-q8_0 seems to work best of the ones I've tested. To install:
 ```
-ollama pull zephyr:7b-beta-q6_K
+ollama pull zephyr:7b-beta-q8_0
 ```
 
 You'll also need to install matrix-nio and litellm
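The README line above names the two Python dependencies but this excerpt doesn't show an install command; a minimal sketch, assuming they are installed from PyPI with pip (not part of this diff):

```
pip install matrix-nio litellm
```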
@@ -30,7 +30,7 @@ class ollamarama:
         self.prompt = ("you are ", ". speak in the first person and never break character.")
 
         #set model, this one works best in my tests with the hardware i have, but you can try others
-        self.model = "ollama/zephyr:7b-beta-q6_K"
+        self.model = "ollama/zephyr:7b-beta-q8_0"
 
 
         # get the display name for a user
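For context on the reverted value: litellm routes any model name prefixed with `ollama/` to a local Ollama server, so the bot presumably ends up making a call along these lines. This is only a sketch, not the project's actual call site; the persona string, user message, and `api_base` are assumptions.

```
# Sketch only: shows how an "ollama/..." model string is typically passed to litellm.
# The persona, message text, and api_base are assumptions, not taken from this diff.
from litellm import completion

prompt = ("you are ", ". speak in the first person and never break character.")
persona = "a friendly assistant"  # hypothetical personality

response = completion(
    model="ollama/zephyr:7b-beta-q8_0",   # same string as self.model above
    messages=[
        {"role": "system", "content": prompt[0] + persona + prompt[1]},
        {"role": "user", "content": "introduce yourself"},
    ],
    api_base="http://localhost:11434",    # Ollama's default local endpoint (assumed)
)
print(response.choices[0].message.content)
```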