Compare commits


21 Commits
dev ... main

Author SHA1 Message Date
scriptos
2d5b48bc00 docs 2024-11-27 23:30:45 +01:00
scriptos
9b230e3ef3 i hate bugs !!! 2024-11-27 23:18:52 +01:00
scriptos
1340cd2c3d minor changes 2024-11-27 23:06:49 +01:00
464548f338 Merge pull request 'docker' (#1) from docker into main
Reviewed-on: scriptos/ollamarama-matrix#1
2024-11-27 21:55:27 +00:00
scriptos
3c694185d6 some bug fixes 2024-11-27 22:45:27 +01:00
scriptos
5ee7bbea20 Further bug fixes 2024-11-27 21:53:36 +01:00
scriptos
c64ab6a99f First tests with the Docker containers 2024-11-27 21:38:00 +01:00
scriptos
6e0770833e Preparations for Docker image created 2024-11-27 20:24:10 +01:00
Dustin
a7188214e9
Merge pull request #30 from h1ddenpr0cess20/dev
bug fix
2024-09-17 11:40:35 -04:00
Dustin
8afe06fa09
Merge pull request #29 from h1ddenpr0cess20/dev
removed unnecessary if statements
2024-08-25 21:28:07 -04:00
Dustin
361827815c
Merge pull request #28 from h1ddenpr0cess20/dev
bug fix
2024-08-24 19:38:11 -04:00
Dustin
e15cb743f4
Merge pull request #27 from h1ddenpr0cess20/dev
bug fixes
2024-08-23 14:55:44 -04:00
Dustin
fcaa9836e7
Merge pull request #26 from h1ddenpr0cess20/dev
Dev
2024-08-22 02:00:23 -04:00
Dustin
b2cd4a128b
Merge pull request #25 from h1ddenpr0cess20/dev
bug fixes
2024-08-14 22:47:10 -04:00
Dustin
c327b3388a
Merge pull request #24 from h1ddenpr0cess20/dev
finally added markdown rendering
2024-08-14 19:45:30 -04:00
Dustin
ec35da7c16
Merge pull request #23 from h1ddenpr0cess20/dev
bug fix in custom function
2024-08-12 14:03:11 -04:00
Dustin
ded50e5639
Merge pull request #22 from h1ddenpr0cess20/dev
bug fix on .clear function
2024-08-07 15:58:10 -04:00
Dustin
a65b6ab340
Merge pull request #20 from h1ddenpr0cess20/dev
bug fixes
2024-08-07 00:16:51 -04:00
Dustin
3cb9a01731
Merge pull request #19 from h1ddenpr0cess20/dev
improved config json
2024-08-02 23:25:05 -04:00
Dustin
ab2b6c802c
Merge pull request #18 from h1ddenpr0cess20/dev
cleanup
2024-07-30 22:06:29 -04:00
Dustin
6db4730cdd
Merge pull request #17 from h1ddenpr0cess20/dev
removed LiteLLM
2024-07-30 21:09:53 -04:00
9 changed files with 212 additions and 217 deletions

Dockerfile.chatbot (new file, 10 lines)

@@ -0,0 +1,10 @@
# Lightweight Python base image
FROM python:3.10-slim
WORKDIR /app
# Install dependencies first so the layer is cached between builds
COPY bin/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY bin/ollamarama.py .
CMD ["python", "ollamarama.py"]

Dockerfile.ollama (new file, 7 lines)

@@ -0,0 +1,7 @@
FROM ollama/ollama
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Default command to start the server
ENTRYPOINT ["/start.sh"]

README.md (116 lines changed)

@@ -1,84 +1,100 @@
# ollamarama-matrix-chatbot

### First things first:
This is a fork of [Dustin Whyte](https://github.com/h1ddenpr0cess20/ollamarama-matrix)'s ollamarama-matrix, which I have subsequently implemented in Docker.
Ollamarama is an AI chatbot for the [Matrix](https://matrix.org/) chat protocol using Ollama. It can roleplay as almost anything you can think of. You can set any default personality you would like; it can be changed at any time, and each user has their own separate chat history with their chosen personality setting. Users can interact with each other's chat histories for collaboration if they would like, but otherwise conversations are separated, per channel and per user.
This chatbot comes together with the Ollama Docker image, which can be found [here](https://hub.docker.com/r/ollama/ollama).
## Setup
First, install Docker. You can do this with [this script](https://github.com/h1ddenpr0cess20/ollamarama-matrix).
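Alternatively, Docker's official convenience script works on most distributions (not part of this repository):
```bash
# Official Docker install script, see https://get.docker.com
curl -fsSL https://get.docker.com | sh
```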
Then clone this project:
```bash
git clone https://git.techniverse.net/scriptos/ollamarama-matrix
```
Then change into the directory and edit the configuration file for the Matrix chatbot:
```bash
cd ollamarama-matrix && nano data/chatbot/config.json
```
The `config.json` holds the access credentials for the chatbot. The bot account has to be created on the Matrix server beforehand; this can be done [here](https://app.element.io/).
Additional models that the chatbot can then use can also be maintained in this configuration file.
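For orientation, a minimal sketch of the layout `config.json` is expected to have, pieced together from how the bot reads it (the `matrix` block is unpacked via `config["matrix"].values()`, so the key order matters; the `personality` key name is an assumption and all values are placeholders):
```json
{
  "matrix": {
    "server": "https://matrix.org",
    "username": "@mybot:matrix.org",
    "password": "changeme",
    "channels": ["#myroom:matrix.org"],
    "admins": ["MyAdminDisplayName"]
  },
  "ollama": {
    "api_base": "http://ollama:11434",
    "models": { "llama3.2": "llama3.2" },
    "default_model": "llama3.2",
    "options": { "temperature": 0.8, "top_p": 1, "repeat_penalty": 1 },
    "personality": "a helpful assistant",
    "prompt": ["you are ", ". speak in the first person and never break character."]
  }
}
```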
Additional models can also be listed in the `start.sh` file; these are downloaded once the Ollama Docker container starts.
More models can be found [here](https://ollama.ai/library).
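For example, to have one more model pulled at container start, append another `ollama pull` line to `start.sh` (the model name is just an illustration):
```bash
# Pull an extra model from the Ollama library on container start
ollama pull mistral
```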
Once the configuration is done, the stack can be built and started with
```bash
docker-compose up --build
```
This is ideal because you also see the logs right away.
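Note: the bundled `docker-compose.yaml` attaches both containers to an external network called `dockernet`, which has to exist before the first start. A minimal sketch, assuming a /24 that matches the static addresses 172.16.0.50/51 used in the compose file:
```bash
docker network create --subnet=172.16.0.0/24 dockernet
```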
The bot then automatically joins the configured channels and should ideally be available to you right away.
For regular operation, start the stack in the background with
```bash
docker-compose up -d
```
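If it is running detached, you can still follow the logs (service names as defined in `docker-compose.yaml`):
```bash
docker-compose logs -f ollama matrix-chatbot
```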
You send your first message with `.ai message`.
An example:
```bash
.ai Hello, how are you?
```
## Usage

**.ai _message_** or **botname: _message_**
 Basic usage.

**.x _user_ _message_**
 Lets you talk to another user's chat history.
 _user_ is the display name of the user whose history you want to use.

**.persona _personality_**
 Changes the personality. It can be a character, a personality type, an object, an idea, whatever. Use your imagination.

**.custom _prompt_**
 Allows use of a custom system prompt instead of the roleplaying prompt.

**.reset**
 Clear history and reset to the preset personality.

**.stock**
 Clear history and use without a system prompt.

**Admin-only commands**

**.model _model_**
 Omit the model name to show the current model and the available models.
 Include a model name to switch models.

**.clear**
 Resets the bot for everyone.
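A short example session (responses depend on the model you configured):
```bash
.persona a grumpy pirate
.ai Tell me about your ship!
.reset
```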

ollamarama.py (changed)

@@ -1,8 +1,14 @@
"""
# Description:   ollamarama-matrix: An AI chatbot for the Matrix chat protocol with infinite personalities.
# Author:        Dustin Whyte (https://github.com/h1ddenpr0cess20/ollamarama-matrix)
# Created:       December 2023
# Modified by:   Patrick Asmus
# Web:           https://www.techniverse.net
# Git repo:      https://git.techniverse.net/scriptos/ollamarama-matrix.git
# Version:       2.0
# Date:          27.11.2024
# Modification:  added logging
#####################################################
"""
from nio import AsyncClient, MatrixRoom, RoomMessageText
@@ -12,30 +18,35 @@ import asyncio
import requests
import markdown


class ollamarama:
    def __init__(self):
        # Load config file
        self.config_file = "config.json"
        with open(self.config_file, "r") as f:
            config = json.load(f)

        self.server, self.username, self.password, self.channels, self.admins = config["matrix"].values()
        self.client = AsyncClient(self.server, self.username)

        # Time program started and joined channels
        self.join_time = datetime.datetime.now()

        # Store chat history
        self.messages = {}

        # API URL
        self.api_url = config["ollama"]["api_base"] + "/api/chat"
        print(f"API URL: {self.api_url}")

        # Model configuration
        self.models = config["ollama"]["models"]
        self.default_model = self.models[config["ollama"]["default_model"]]
        self.model = self.default_model
        print(f"Default model: {self.model}")

        # Options
        self.temperature, self.top_p, self.repeat_penalty = config["ollama"]["options"].values()
        self.defaults = {
            "temperature": self.temperature,
@@ -47,101 +58,101 @@ class ollamarama:
        self.personality = self.default_personality
        self.prompt = config["ollama"]["prompt"]

    # Get the display name for a user
    async def display_name(self, user):
        try:
            name = await self.client.get_displayname(user)
            return name.displayname
        except Exception as e:
            print(f"Error fetching display name: {e}")
            return user

    # Simplifies sending messages to the channel
    async def send_message(self, channel, message):
        await self.client.room_send(
            room_id=channel,
            message_type="m.room.message",
            content={
                "msgtype": "m.text",
                "body": message,
                "format": "org.matrix.custom.html",
                "formatted_body": markdown.markdown(message, extensions=["fenced_code", "nl2br"]),
            },
        )

    # Add messages to the history dictionary
    async def add_history(self, role, channel, sender, message):
        if channel not in self.messages:
            self.messages[channel] = {}
        if sender not in self.messages[channel]:
            self.messages[channel][sender] = [
                {"role": "system", "content": self.prompt[0] + self.personality + self.prompt[1]}
            ]
        self.messages[channel][sender].append({"role": role, "content": message})
        # Trim history
        if len(self.messages[channel][sender]) > 24:
            if self.messages[channel][sender][0]["role"] == "system":
                del self.messages[channel][sender][1:3]
            else:
                del self.messages[channel][sender][0:2]

    # Generate Ollama model response
    async def respond(self, channel, sender, message, sender2=None):
        try:
            data = {
                "model": self.model,
                "messages": message,
                "stream": False,
                "options": {
                    "top_p": self.top_p,
                    "temperature": self.temperature,
                    "repeat_penalty": self.repeat_penalty,
                },
            }
            # Log the data being sent
            print(f"Sending data to API: {json.dumps(data, indent=2)}")
            response = requests.post(self.api_url, json=data, timeout=300)
            response.raise_for_status()
            data = response.json()
            # Log the API response
            print(f"API response: {json.dumps(data, indent=2)}")
        except Exception as e:
            error_message = f"Error communicating with Ollama API: {e}"
            await self.send_message(channel, error_message)
            print(error_message)
        else:
            response_text = data["message"]["content"]
            await self.add_history("assistant", channel, sender, response_text)
            # Name the .x target if one was given, otherwise the sender
            display_name = await self.display_name(sender2 if sender2 else sender)
            response_text = f"**{display_name}**:\n{response_text.strip()}"
            try:
                await self.send_message(channel, response_text)
            except Exception as e:
                print(f"Error sending message: {e}")

    # Set personality or custom system prompt
    async def set_prompt(self, channel, sender, persona=None, custom=None, respond=True):
        # Clear existing history
        try:
            self.messages[channel][sender].clear()
        except KeyError:
            pass
        if persona:
            # Combine personality with prompt parts
            prompt = self.prompt[0] + persona + self.prompt[1]
        elif custom:
            prompt = custom
        await self.add_history("system", channel, sender, prompt)
        if respond:
            await self.add_history("user", channel, sender, "introduce yourself")
            await self.respond(channel, sender, self.messages[channel][sender])

    async def ai(self, channel, message, sender, x=False):
        try:
            if x and len(message) > 2:
                name = message[1]
                message = message[2:]
                if channel in self.messages:
@@ -150,126 +161,58 @@ class ollamarama:
                            username = await self.display_name(user)
                            if name == username:
                                name_id = user
                        except Exception as e:
                            print(f"Error in .x command: {e}")
                            name_id = name
                    await self.add_history("user", channel, name_id, " ".join(message))
                    await self.respond(channel, name_id, self.messages[channel][name_id], sender)
            else:
                await self.add_history("user", channel, sender, " ".join(message[1:]))
                await self.respond(channel, sender, self.messages[channel][sender])
        except Exception as e:
            print(f"Error in .ai command: {e}")

    async def change_model(self, channel, model=False):
        with open(self.config_file, "r") as f:
            config = json.load(f)
        self.models = config["ollama"]["models"]
        if model:
            try:
                if model in self.models:
                    self.model = self.models[model]
                elif model == "reset":
                    self.model = self.default_model
                await self.send_message(channel, f"Model set to **{self.model}**")
            except:
                pass
        else:
            current_model = f"**Current model**: {self.model}\n**Available models**: {', '.join(sorted(list(self.models)))}"
            await self.send_message(channel, current_model)

    async def clear(self, channel):
        self.messages.clear()
        self.model = self.default_model
        self.personality = self.default_personality
        self.temperature, self.top_p, self.repeat_penalty = self.defaults.values()
        await self.send_message(channel, "Bot has been reset for everyone")

    async def handle_message(self, message, sender, sender_display, channel):
        user_commands = {
            ".ai": lambda: self.ai(channel, message, sender),
            f"{self.bot_id}:": lambda: self.ai(channel, message, sender),
            ".x": lambda: self.ai(channel, message, sender, x=True),
            ".persona": lambda: self.set_prompt(channel, sender, persona=" ".join(message[1:])),
            ".custom": lambda: self.set_prompt(channel, sender, custom=" ".join(message[1:])),
            ".reset": lambda: self.set_prompt(channel, sender, persona=self.personality, respond=False),
        }
        admin_commands = {
            ".model": lambda: self.change_model(channel, model=message[1] if len(message) > 1 else False),
            ".clear": lambda: self.clear(channel),
        }
        # May add back temperature controls later, per user; for now you can just change that in config on the fly
        command = message[0]
        if command in user_commands:
            action = user_commands[command]
            await action()
        if sender_display in self.admins and command in admin_commands:
            action = admin_commands[command]
            await action()

    async def message_callback(self, room: MatrixRoom, event: RoomMessageText):
        if isinstance(event, RoomMessageText):
            message_time = datetime.datetime.fromtimestamp(event.server_timestamp / 1000)
            message = event.body.split(" ")
            sender = event.sender
            sender_display = await self.display_name(sender)
            channel = room.room_id
            # Only react to messages sent after joining and not sent by the bot itself
            if message_time > self.join_time and sender != self.username:
                try:
                    await self.handle_message(message, sender, sender_display, channel)
                except Exception as e:
                    print(f"Error handling message: {e}")

    async def main(self):
        # Login, print "Logged in as @alice:example.org device id: RANDOMDID"
        print(await self.client.login(self.password))

        # Get account display name
        self.bot_id = await self.display_name(self.username)

        # Join channels
        for channel in self.channels:
            try:
                await self.client.join(channel)
                print(f"{self.bot_id} joined {channel}")
            except Exception as e:
                print(f"Couldn't join {channel}: {e}")

        # Start listening for messages
        self.client.add_event_callback(self.message_callback, RoomMessageText)

        await self.client.sync_forever(timeout=30000, full_state=True)


if __name__ == "__main__":
    ollamarama = ollamarama()
    asyncio.run(ollamarama.main())

bin/requirements.txt (new file, 3 lines)

@@ -0,0 +1,3 @@
matrix-nio
requests
markdown

config.json (changed)

@@ -19,7 +19,7 @@
     },
     "ollama":
     {
-        "api_base": "http://localhost:11434",
+        "api_base": "http://ollama:11434",
         "options":
         {
             "temperature": 0.8,

docker-compose.yaml (new file, 37 lines)

@@ -0,0 +1,37 @@
services:
  ollama:
    image: ollama/ollama
    build:
      context: .
      dockerfile: Dockerfile.ollama
    hostname: ollama
    container_name: ollama
    networks:
      dockernet:
        ipv4_address: 172.16.0.51
    ports:
      - "11434:11434"
    volumes:
      - ./data/ollama/history:/root/.ollama
    restart: unless-stopped

  matrix-chatbot:
    image: matrix-chatbot:latest
    build:
      context: .
      dockerfile: Dockerfile.chatbot
    container_name: matrix-chatbot
    hostname: matrix-chatbot
    networks:
      dockernet:
        ipv4_address: 172.16.0.50
    depends_on:
      - ollama
    volumes:
      - ./data/chatbot/config.json:/app/config.json:ro
    restart: unless-stopped

networks:
  dockernet:
    external: true

help.txt (deleted)

@@ -1,33 +0,0 @@
**.ai _message_** or **botname: _message_**
 Basic usage.
**.x _user_ _message_**
 This allows you to talk to another user's chat history.
 _user_ is the display name of the user whose history you want to use
**.persona _personality_**
 Changes the personality. It can be a character, personality type, object, idea, whatever. Use your imagination.
**.custom _prompt_**
 Allows use of a custom system prompt instead of the roleplaying prompt
**.reset**
 Clear history and reset to preset personality
**.stock**
 Clear history and use without a system prompt
**Available at** https://github.com/h1ddenpr0cess20/ollamarama-matrix
~~~
**Admin only commands**
**.model _model_**
 Omit model name to show current model and available models
 Include model name to change model
**.clear**
 Reset bot for everyone

start.sh (new file, 12 lines)

@@ -0,0 +1,12 @@
#!/bin/bash
set -x

# Start the server in the background
ollama serve &

# Install models
ollama pull llama3.1:8b-instruct-q5_K_M
ollama pull llama3.2

wait