AI chatbot for Matrix with infinite personalities, using Ollama

ollamarama-matrix

Ollamarama is an AI chatbot for the Matrix chat protocol using LiteLLM and Ollama. It can roleplay as almost anything you can think of. You can set any default personality you would like, change it at any time, and each user has their own separate chat history with their chosen personality setting. Users can interact with each other's chat histories for collaboration if they like, but otherwise conversations are kept separate, per channel, per user.
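The per-channel, per-user separation described above can be sketched roughly like this (the class and method names are illustrative, not the bot's actual internals):

```python
# Sketch of per-channel, per-user chat histories (illustrative names,
# not the actual ollamarama implementation).
from collections import defaultdict


class ChatHistories:
    def __init__(self, default_system_prompt):
        self.default_system_prompt = default_system_prompt
        # (channel, user) -> list of {"role": ..., "content": ...} messages
        self.histories = defaultdict(list)

    def add(self, channel, user, role, content):
        key = (channel, user)
        if not self.histories[key]:
            # Seed a fresh history with the default personality prompt.
            self.histories[key].append(
                {"role": "system", "content": self.default_system_prompt}
            )
        self.histories[key].append({"role": role, "content": content})
        return self.histories[key]

    def get(self, channel, user):
        # What a command like .x would read: another user's history
        # in the same channel.
        return self.histories.get((channel, user), [])


histories = ChatHistories("you are a helpful assistant")
histories.add("#room:matrix.org", "alice", "user", "hello")
```

Because the key is the (channel, user) pair, the same user gets an independent conversation in every room the bot is in.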

This is based on my earlier project, infinigpt-matrix, which uses the OpenAI API and costs money to use. (It has since been updated with OpenAI/Ollama model switching.)

IRC version available at ollamarama-irc

Terminal-based version at ollamarama

Setup

Install and familiarize yourself with Ollama, make sure you can run local LLMs, etc.

You can install and update it with this command:

curl https://ollama.ai/install.sh | sh

Once it's all set up, you'll need to download the models you want to use. Play with the available ones to see what works best for you, then add those to the config.json file. If you want to use the ones I've included, just run ollama pull modelname for each. You can skip this step and models will download automatically when switched to, but the response will be delayed until the download finishes.

You'll also need to install matrix-nio and litellm:

pip3 install matrix-nio litellm

Set up a Matrix account for your bot. You'll need the server, username and password.

Plug those into the appropriate variables in the config.json file.
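The setup above implies a structure along these lines; the field names here are placeholders, so check the shipped config.json for the actual keys:

```json
{
  "server": "https://matrix.org",
  "username": "@mybot:matrix.org",
  "password": "changeme",
  "models": ["llama2", "mistral"]
}
```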

Then run the bot:

python3 ollamarama.py

Use

.ai message or botname: message Basic usage. The personality is preset by the bot operator.

.x user message This allows you to talk to another user's chat history. user is the display name of the user whose history you want to use.

.persona personality Changes the personality. It can be a character, personality type, object, or idea. Don't use a custom system prompt here.

.custom prompt Allows use of a custom system prompt instead of the roleplaying prompt

.reset Reset to preset personality

.stock Remove personality and reset to standard settings

.help Show the built-in help menu

.models Show current model and available models (admin only)

.model name Set a model (admin only)

.model reset Reset to default model (admin only)

.temperature Set temperature value between 0 and 1. To reset to default, type reset instead of a number. (bot owner only)

.top_p Set top_p value between 0 and 1. To reset to default, type reset instead of a number. (bot owner only)

.repeat_penalty Set repeat_penalty between 0 and 2. To reset to default, type reset instead of a number. (bot owner only)
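The three settings above are bounded values with a reset keyword; a rough sketch of how such a command could validate its input (the helper and the default values are hypothetical, not the bot's actual code):

```python
# Hypothetical validation for the bounded sampling settings:
# temperature (0-1), top_p (0-1), repeat_penalty (0-2).
# The default values below are placeholders, not ollamarama's defaults.
DEFAULTS = {"temperature": 0.8, "top_p": 0.9, "repeat_penalty": 1.1}
BOUNDS = {"temperature": (0, 1), "top_p": (0, 1), "repeat_penalty": (0, 2)}


def set_option(options, name, value):
    """Set a sampling option, or restore its default when value is 'reset'."""
    if value == "reset":
        options[name] = DEFAULTS[name]
        return options[name]
    low, high = BOUNDS[name]
    value = float(value)
    if not low <= value <= high:
        raise ValueError(f"{name} must be between {low} and {high}")
    options[name] = value
    return value


opts = dict(DEFAULTS)
set_option(opts, "temperature", "0.3")
```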

.clear Resets all bot history and sets default model (bot owner only)

.auth user Add user to admins (bot owner only)

.deauth user Remove user from admins (bot owner only)

.gpersona persona Change global personality (bot owner only)

.gpersona reset Reset global personality (bot owner only)
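The difference between .persona, .custom, and .stock is essentially which system prompt gets installed. A minimal sketch, assuming a roleplay template (the template wording here is a guess, not the bot's actual prompt):

```python
# Illustrative sketch of how the personality commands might differ:
# .persona wraps the name in a roleplay template, .custom is used
# verbatim, and .stock removes the system prompt entirely.
# The template text is a placeholder, not ollamarama's real prompt.
ROLEPLAY_TEMPLATE = "you are {persona}; roleplay and stay in character"


def system_prompt(persona=None, custom=None):
    if custom is not None:        # .custom prompt
        return custom
    if persona is not None:       # .persona personality
        return ROLEPLAY_TEMPLATE.format(persona=persona)
    return ""                     # .stock: no personality at all
```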

Tips

To get a longer response, you can tell the bot to "ignore the brackets after this message".

When using a coding LLM, remove the personality with the .stock command, or set an appropriate personality, such as a Python expert.

I have not extensively tested the models included in config.json, so add and remove models as you see fit. They each have their strengths and weaknesses. I am using the default 4-bit quantized versions for simplicity, but the quality will be pretty poor. To use a different size than the default, use the full model name, for example llama2:7b-chat-q8_0 instead of just llama2.