Compare commits
No commits in common. "main" and "dev" have entirely different histories.
Dockerfile.chatbot
@@ -1,10 +0,0 @@
FROM python:3.10-slim

WORKDIR /app

COPY bin/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY bin/ollamarama.py .

CMD ["python", "ollamarama.py"]
Dockerfile.ollama
@@ -1,7 +0,0 @@
FROM ollama/ollama

COPY start.sh /start.sh
RUN chmod +x /start.sh

# Default command to start the server
ENTRYPOINT ["/start.sh"]
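The `start.sh` copied above is not shown in this compare. A plausible minimal version (an assumption, not the actual file) would start the Ollama server in the background and pre-pull the configured models, matching the README's note that models listed in `start.sh` are downloaded after the container starts:

```bash
#!/bin/sh
# Hypothetical start.sh: launch the Ollama server in the background,
# give it a moment to come up, pull the desired models, then keep serving.
ollama serve &
sleep 5
ollama pull llama3   # add one "ollama pull" line per model you want available
wait
```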
116
README.md
@@ -1,100 +1,84 @@
# ollamarama-matrix-chatbot

Ollamarama is an AI chatbot for the [Matrix](https://matrix.org/) chat protocol using Ollama. It can roleplay as almost anything you can think of. You can set any default personality you would like; it can be changed at any time, and each user has their own separate chat history with their chosen personality setting. Users can interact with each other's chat histories for collaboration if they would like, but otherwise conversations are separated per channel and per user.

### First things first:

This is a fork of [Dustin Whyte](https://github.com/h1ddenpr0cess20/ollamarama-matrix)'s project, which I have since implemented in Docker. His bot is in turn based on the earlier project [infinigpt-matrix](https://github.com/h1ddenpr0cess20/infinigpt-matrix), which uses OpenAI and costs money to use (now updated with OpenAI/Ollama model switching).

This chatbot ships together with the Ollama Docker image, found [here](https://hub.docker.com/r/ollama/ollama).

An IRC version is available at [ollamarama-irc](https://github.com/h1ddenpr0cess20/ollamarama-irc) and a terminal-based version at [ollamarama](https://github.com/h1ddenpr0cess20/ollamarama).
## Setup

First, install Docker. You can do this with [this script](https://github.com/h1ddenpr0cess20/ollamarama-matrix).

Install and familiarize yourself with [Ollama](https://ollama.ai/), and make sure you can run local LLMs.

Then clone this project:

```bash
git clone https://git.techniverse.net/scriptos/ollamarama-matrix
```
Then change into the directory and edit the configuration file for the Matrix chatbot:

```bash
cd ollamarama-matrix && nano data/chatbot/config.json
```

The `config.json` holds the access credentials for the chatbot. The bot account must be created on the Matrix server beforehand, which can be done [here](https://app.element.io/).
Additional models can also be listed in this configuration file for the chatbot to use.

In `start.sh` you can list further models, which are downloaded once the Ollama Docker container has started.

More models are available [here](https://ollama.ai/library).
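For orientation, here is a minimal `config.json` skeleton with the keys the bot reads. All values are placeholders; the field names follow what `ollamarama.py` unpacks, but the `personality` and `prompt` wording and the model entries shown here are illustrative:

```json
{
  "matrix": {
    "server": "https://matrix.org",
    "username": "@mybot:matrix.org",
    "password": "changeme",
    "channels": ["#myroom:matrix.org"],
    "admins": ["my display name"]
  },
  "ollama": {
    "api_base": "http://ollama:11434",
    "options": { "temperature": 0.8, "top_p": 1.0, "repeat_penalty": 1.0 },
    "models": { "llama3": "llama3:latest" },
    "default_model": "llama3",
    "personality": "a helpful assistant",
    "prompt": ["you are ", ". speak in the first person and never break character."]
  }
}
```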
Once the configuration is done, the stack can be started with

```bash
docker-compose up --build
```

This is ideal for watching the logs right away.

The bot then joins the configured channels automatically and should ideally be available to you immediately.

To run the stack properly in the background, start it with

```bash
docker-compose up -d
```

You send your first message with `.ai message`. An example:

```bash
.ai Hello, how are you?
```
## Usage

**.ai _message_** or **botname: _message_**

Basic usage.

**.x _user_ _message_**

Allows you to talk to another user's chat history.
_user_ is the display name of the user whose history you want to use.

**.persona _personality_**

Changes the personality. It can be a character, a personality type, an object, an idea, whatever. Use your imagination.

**.custom _prompt_**

Allows use of a custom system prompt instead of the roleplaying prompt.

**.reset**

Clears the history and resets to the preset personality.

**.stock**

Clears the history and runs without a system prompt.

**Admin-only commands**

**.model _model_**

Omit the model name to show the current model and the available models.
Include a model name to switch models.

**.clear**

Resets the bot for everyone.
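Internally, each of these commands maps to a handler through a plain dispatch dictionary keyed on the first token of the message. A simplified sketch of that pattern as used in `ollamarama.py`; the handler bodies here are illustrative stand-ins, not the real coroutines:

```python
# Simplified sketch of the command dispatch in ollamarama.py:
# the first whitespace-separated token selects a handler; unknown
# tokens are ignored. Handler bodies are illustrative stand-ins.
def handle(line: str):
    message = line.split(" ")
    commands = {
        ".ai": lambda: f"ai:{' '.join(message[1:])}",
        ".persona": lambda: f"persona:{' '.join(message[1:])}",
        ".reset": lambda: "reset",
    }
    action = commands.get(message[0])
    return action() if action else None

print(handle(".ai Hello, how are you?"))  # ai:Hello, how are you?
```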
requirements.txt
@@ -1,3 +0,0 @@
matrix-nio
requests
markdown
data/chatbot/config.json
@@ -19,7 +19,7 @@
    },
    "ollama":
    {
        "api_base": "http://ollama:11434",
        "api_base": "http://localhost:11434",
        "options":
        {
            "temperature": 0.8,
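The `api_base` value matters because the chatbot derives its chat endpoint from it. A minimal sketch of how the value is consumed, mirroring the `__init__` of `ollamarama.py`:

```python
# Build the Ollama chat endpoint the same way ollamarama.py does:
# api_base comes from config.json; "/api/chat" is appended to it.
config = {
    "ollama": {
        # "ollama" is the container's DNS name on the compose network;
        # outside Docker this would typically be "localhost".
        "api_base": "http://ollama:11434",
    }
}

api_url = config["ollama"]["api_base"] + "/api/chat"
print(api_url)  # http://ollama:11434/api/chat
```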
docker-compose.yml
@@ -1,37 +0,0 @@
services:

  ollama:
    image: ollama/ollama
    build:
      context: .
      dockerfile: Dockerfile.ollama
    hostname: ollama
    container_name: ollama
    networks:
      dockernet:
        ipv4_address: 172.16.0.51
    ports:
      - "11434:11434"
    volumes:
      - ./data/ollama/history:/root/.ollama
    restart: unless-stopped

  matrix-chatbot:
    image: matrix-chatbot:latest
    build:
      context: .
      dockerfile: Dockerfile.chatbot
    container_name: matrix-chatbot
    hostname: matrix-chatbot
    networks:
      dockernet:
        ipv4_address: 172.16.0.50
    depends_on:
      - ollama
    volumes:
      - ./data/chatbot/config.json:/app/config.json:ro
    restart: unless-stopped

networks:
  dockernet:
    external: true
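Note that the compose file declares `dockernet` as `external: true`, so the network must exist before `docker-compose up`. Assuming the `172.16.0.x` addresses above imply a /24 (the subnet choice is an assumption, not stated in this compare), it could be created like this:

```bash
# One-time setup: create the external network the compose file expects.
docker network create --subnet=172.16.0.0/24 dockernet
```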
33
help.txt
Normal file
@@ -0,0 +1,33 @@
**.ai _message_** or **botname: _message_**
 Basic usage.

**.x _user_ _message_**
 This allows you to talk to another user's chat history.
 _user_ is the display name of the user whose history you want to use

**.persona _personality_**
 Changes the personality. It can be a character, personality type, object, idea, whatever. Use your imagination.

**.custom _prompt_**
 Allows use of a custom system prompt instead of the roleplaying prompt

**.reset**
 Clear history and reset to preset personality

**.stock**
 Clear history and use without a system prompt


**Available at** https://github.com/h1ddenpr0cess20/ollamarama-matrix
~~~

**Admin only commands**

**.model _model_**
 Omit model name to show current model and available models
 Include model name to change model

**.clear**
 Reset bot for everyone
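The `~~~` line in the middle of `help.txt` is a delimiter: `ollamarama.py` splits the file on it to separate the general help from the admin-only section. A minimal sketch of that split:

```python
# help.txt is split on "~~~": everything before the marker is sent to
# every user, everything after only to admins (see help_menu in ollamarama.py).
text = "**.ai _message_** ...\n~~~\n**Admin only commands** ..."
help_menu, help_admin = text.split("~~~")
print(help_admin.strip())
```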
bin/ollamarama.py
@@ -1,14 +1,8 @@
"""
# Description:   ollamarama-matrix: An AI chatbot for the Matrix chat protocol with infinite personalities.
# Author:        Dustin Whyte (https://github.com/h1ddenpr0cess20/ollamarama-matrix)
# Created:       December 2023
# Modified by:   Patrick Asmus
# Web:           https://www.techniverse.net
# Git repo:      https://git.techniverse.net/scriptos/ollamarama-matrix.git
# Version:       2.0
# Date:          27.11.2024
# Modification:  logging added
#####################################################
"""

from nio import AsyncClient, MatrixRoom, RoomMessageText
@@ -18,35 +12,30 @@ import asyncio
import requests
import markdown


class ollamarama:
    def __init__(self):
        # load config file
        self.config_file = "config.json"
        with open(self.config_file, "r") as f:
            config = json.load(f)

        self.server, self.username, self.password, self.channels, self.admins = config["matrix"].values()

        self.client = AsyncClient(self.server, self.username)

        # time the program started and joined channels
        self.join_time = datetime.datetime.now()

        # store chat history
        self.messages = {}

        # API URL
        self.api_url = config["ollama"]["api_base"] + "/api/chat"
        print(f"API URL: {self.api_url}")

        # model configuration
        self.models = config["ollama"]["models"]
        self.default_model = self.models[config["ollama"]["default_model"]]
        self.model = self.default_model
        print(f"Default model: {self.model}")

        # options
        self.temperature, self.top_p, self.repeat_penalty = config["ollama"]["options"].values()
        self.defaults = {
            "temperature": self.temperature,
@@ -58,101 +47,101 @@ class ollamarama:
        self.personality = self.default_personality
        self.prompt = config["ollama"]["prompt"]

    # get the display name for a user
    async def display_name(self, user):
        try:
            name = await self.client.get_displayname(user)
            return name.displayname
        except Exception as e:
            print(f"Error fetching display name: {e}")
            return user

    # simplifies sending messages to the channel
    async def send_message(self, channel, message):
        await self.client.room_send(
            room_id=channel,
            message_type="m.room.message",
            content={
                "msgtype": "m.text",
                "body": message,
                "format": "org.matrix.custom.html",
                "formatted_body": markdown.markdown(message, extensions=["fenced_code", "nl2br"]),
            },
        )

    # add messages to the history dictionary
    async def add_history(self, role, channel, sender, message):
        if channel not in self.messages:
            self.messages[channel] = {}
        if sender not in self.messages[channel]:
            self.messages[channel][sender] = [
                {"role": "system", "content": self.prompt[0] + self.personality + self.prompt[1]}
            ]
        self.messages[channel][sender].append({"role": role, "content": message})

        # trim history
        if len(self.messages[channel][sender]) > 24:
            if self.messages[channel][sender][0]["role"] == "system":
                del self.messages[channel][sender][1:3]
            else:
                del self.messages[channel][sender][0:2]

    # generate an Ollama model response
    async def respond(self, channel, sender, message, sender2=None):
        try:
            data = {
                "model": self.model,
                "messages": message,
                "stream": False,
                "options": {
                    "top_p": self.top_p,
                    "temperature": self.temperature,
                    "repeat_penalty": self.repeat_penalty,
                },
            }

            # log the data being sent
            print(f"Sending data to API: {json.dumps(data, indent=2)}")

            # timeout may need to be increased for larger models; only tested on small models
            response = requests.post(self.api_url, json=data, timeout=300)
            response.raise_for_status()
            data = response.json()

            # log the API response
            print(f"API response: {json.dumps(data, indent=2)}")
        except Exception as e:
            error_message = f"Error communicating with Ollama API: {e}"
            await self.send_message(channel, error_message)
            print(error_message)
        else:
            response_text = data["message"]["content"]
            await self.add_history("assistant", channel, sender, response_text)

            # sender2 is set when the .x command was used, otherwise .ai
            display_name = await self.display_name(sender2 if sender2 else sender)
            response_text = f"**{display_name}**:\n{response_text.strip()}"

            try:
                await self.send_message(channel, response_text)
            except Exception as e:
                print(f"Error sending message: {e}")

    # set personality or custom system prompt
    async def set_prompt(self, channel, sender, persona=None, custom=None, respond=True):
        # clear existing history
        try:
            self.messages[channel][sender].clear()
        except KeyError:
            pass

        if persona:
            # combine personality with prompt parts
            prompt = self.prompt[0] + persona + self.prompt[1]
        elif custom:
            prompt = custom

        await self.add_history("system", channel, sender, prompt)

        if respond:
            await self.add_history("user", channel, sender, "introduce yourself")
            await self.respond(channel, sender, self.messages[channel][sender])

    async def ai(self, channel, message, sender, x=False):
        try:
            if x and len(message) > 2:
                name = message[1]
                message = message[2:]
                if channel in self.messages:
@@ -161,58 +150,126 @@ class ollamarama:
                        username = await self.display_name(user)
                        if name == username:
                            name_id = user
                except Exception as e:
                    print(f"Error in .x command: {e}")
                    name_id = name

                await self.add_history("user", channel, name_id, " ".join(message))
                await self.respond(channel, name_id, self.messages[channel][name_id], sender)
            else:
                await self.add_history("user", channel, sender, " ".join(message[1:]))
                await self.respond(channel, sender, self.messages[channel][sender])
        except Exception as e:
            print(f"Error in .ai command: {e}")

    async def reset(self, channel, sender, sender_display, stock=False):
        if channel in self.messages:
            try:
                self.messages[channel][sender].clear()
            except KeyError:
                self.messages[channel] = {}
                self.messages[channel][sender] = []
        if not stock:
            await self.send_message(channel, f"{self.bot_id} reset to default for {sender_display}")
            await self.set_prompt(channel, sender, persona=self.personality, respond=False)
        else:
            await self.send_message(channel, f"Stock settings applied for {sender_display}")

    async def help_menu(self, channel, sender_display):
        with open("help.txt", "r") as f:
            help_menu, help_admin = f.read().split("~~~")
        await self.send_message(channel, help_menu)
        if sender_display in self.admins:
            await self.send_message(channel, help_admin)

    async def change_model(self, channel, model=False):
        with open(self.config_file, "r") as f:
            config = json.load(f)
        self.models = config["ollama"]["models"]
        if model:
            try:
                if model in self.models:
                    self.model = self.models[model]
                elif model == "reset":
                    self.model = self.default_model
                await self.send_message(channel, f"Model set to **{self.model}**")
            except Exception as e:
                print(f"Error changing model: {e}")
        else:
            current_model = f"**Current model**: {self.model}\n**Available models**: {', '.join(sorted(list(self.models)))}"
            await self.send_message(channel, current_model)

    async def clear(self, channel):
        self.messages.clear()
        self.model = self.default_model
        self.personality = self.default_personality
        self.temperature, self.top_p, self.repeat_penalty = self.defaults.values()
        await self.send_message(channel, "Bot has been reset for everyone")

    async def handle_message(self, message, sender, sender_display, channel):
        user_commands = {
            ".ai": lambda: self.ai(channel, message, sender),
            f"{self.bot_id}:": lambda: self.ai(channel, message, sender),
            ".x": lambda: self.ai(channel, message, sender, x=True),
            ".persona": lambda: self.set_prompt(channel, sender, persona=" ".join(message[1:])),
            ".custom": lambda: self.set_prompt(channel, sender, custom=" ".join(message[1:])),
            ".reset": lambda: self.reset(channel, sender, sender_display),
            ".stock": lambda: self.reset(channel, sender, sender_display, stock=True),
            ".help": lambda: self.help_menu(channel, sender_display),
        }
        admin_commands = {
            ".model": lambda: self.change_model(channel, model=message[1] if len(message) > 1 else False),
            ".clear": lambda: self.clear(channel),
        }
        # may add per-user temperature controls back later; for now change them in the config on the fly

        command = message[0]
        if command in user_commands:
            action = user_commands[command]
            await action()

        if sender_display in self.admins and command in admin_commands:
            action = admin_commands[command]
            await action()

    async def message_callback(self, room: MatrixRoom, event: RoomMessageText):
        if isinstance(event, RoomMessageText):
            message_time = datetime.datetime.fromtimestamp(event.server_timestamp / 1000)
            message = event.body.split(" ")
            sender = event.sender
            sender_display = await self.display_name(sender)
            channel = room.room_id

            # check that the message was sent after joining and not by the bot
            if message_time > self.join_time and sender != self.username:
                try:
                    await self.handle_message(message, sender, sender_display, channel)
                except Exception as e:
                    print(f"Error handling message: {e}")

    async def main(self):
        # login; prints e.g. "Logged in as @alice:example.org device id: RANDOMDID"
        print(await self.client.login(self.password))

        # get the account display name
        self.bot_id = await self.display_name(self.username)

        # join channels
        for channel in self.channels:
            try:
                await self.client.join(channel)
                print(f"{self.bot_id} joined {channel}")
            except Exception as e:
                print(f"Couldn't join {channel}: {e}")

        # start listening for messages
        self.client.add_event_callback(self.message_callback, RoomMessageText)
        await self.client.sync_forever(timeout=30000, full_state=True)


if __name__ == "__main__":
    ollamarama = ollamarama()
    asyncio.run(ollamarama.main())
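One detail worth calling out from `add_history` above: the per-user history is capped, and trimming preserves a leading system prompt. A standalone sketch of that rule (the limit of 24 is taken from the code):

```python
# History trimming as in add_history(): past 24 entries, drop the two
# oldest exchange messages, keeping a leading system prompt if present.
def trim(history, limit=24):
    if len(history) > limit:
        if history and history[0]["role"] == "system":
            del history[1:3]
        else:
            del history[0:2]
    return history

msgs = [{"role": "system", "content": "sys"}] + [
    {"role": "user", "content": str(i)} for i in range(24)
]
trim(msgs)
print(len(msgs))  # 23
```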