matrix_ai_integrations_with_baibot #14

armistace merged 4 commits from matrix_ai_integrations_with_baibot into master 2025-06-30 17:18:13 +10:00

Title: Integrating Ollama and Matrix with Baibot
Date: 2025-06-25 20:00
Modified: 2025-06-30 08:00
Category: AI, Data, Matrix
Tags: ai, kubernetes, matrix
Slug: ollama-matrix-integration
Authors: Andrew Ridgway
Summary: Integrating a local LLM with a personal Matrix server: all the fun AND data sovereignty
### _Human Introduction_
I've been experimenting with AI integrations and I'm particularly excited by the idea of using LLMs to integrate between different systems (stay tuned for a blog on [MCP](https://modelcontextprotocol.io/introduction) at some point in the future!)
Below I've thrown together some notes and had AI build a very quick how-to on a cool little project that took next to no time to put together, and that I thought might be interesting for the group. Enjoy!
# Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs
Alright, so I've been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you're anything like me, you run your own Matrix server to keep things private and under control—whether it's for family communication or interacting with the tech community. But one day, I thought, “Why not have my LLMs right where I'm already managing everything else?” Enter baibot.
**Setting Up My Own Matrix Server with baibot**
First off, I've got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot's GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it's lighthearted yet responsive:
```yaml
base_url: http://<my_ollama_ip>:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```
This gives me precise control over the bot's behavior, ensuring each instance in Matrix rooms behaves exactly as intended.
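The numeric settings in that block are easy to get subtly wrong. As a quick sanity check before mounting the config, something like this stdlib-only sketch works (the field names mirror the YAML above; the valid ranges are my assumptions, not rules baibot enforces):

```python
# Hypothetical sanity check for the text_generation settings above.
# Field names mirror the YAML; the limits are assumptions, not baibot rules.

def check_generation_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems (empty list means OK)."""
    problems = []
    if not 0.0 <= cfg.get("temperature", 1.0) <= 2.0:
        problems.append("temperature should be between 0.0 and 2.0")
    if cfg.get("max_response_tokens", 0) > cfg.get("max_context_tokens", 0):
        problems.append("max_response_tokens exceeds max_context_tokens")
    if not cfg.get("model_id"):
        problems.append("model_id is required")
    return problems

config = {
    "model_id": "gemma3:latest",
    "temperature": 0.9,
    "max_response_tokens": 4096,
    "max_context_tokens": 128000,
}
print(check_generation_config(config))  # → []
```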
**Deploying to Kubernetes**
To ensure reliability, I used Kubernetes. Here's a breakdown of the key files:
* **Deployment.yaml**: Manages pod replicas, security contexts, and volume mounts for persistence.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
  name: ridgway-bot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ridgway-bot
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ridgway-bot
    spec:
      containers:
        - image: ghcr.io/etkecc/baibot:v1.7.4
          name: baibot
          volumeMounts:
            - name: ridgway-bot-cm
              mountPath: /app/config.yml
              subPath: config.yml  # mount just the config file, not a directory
            - name: ridgway-bot-pv
              mountPath: /data
      volumes:
        - name: ridgway-bot-cm
          configMap:
            name: ridgway-bot
        - name: ridgway-bot-pv
          persistentVolumeClaim:
            claimName: ridgway-bot-storage
```
* **Persistent Volume Claim (PVC)** ensures data storage for baibot.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```
The deployment script handles namespace creation, config maps, PVCs, and waits for the pod to be ready before copying data.
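The script itself isn't reproduced here, but the sequence it performs can be sketched as an ordered list of kubectl commands (the namespace, resource names, and file paths below are assumptions for illustration, not the actual script):

```python
# Hypothetical sketch of the deployment sequence described above.
# Namespace, resource names, and file paths are assumptions, not the real script.

def rollout_commands(namespace: str = "ridgway-bot") -> list[str]:
    """Build the ordered kubectl commands: namespace, config map, PVC,
    deployment, then wait for the pod to be Ready before copying data in."""
    return [
        f"kubectl create namespace {namespace} --dry-run=client -o yaml | kubectl apply -f -",
        f"kubectl -n {namespace} create configmap ridgway-bot --from-file=config.yml "
        "--dry-run=client -o yaml | kubectl apply -f -",
        f"kubectl -n {namespace} apply -f pvc.yaml",
        f"kubectl -n {namespace} apply -f deployment.yaml",
        f"kubectl -n {namespace} wait --for=condition=Ready pod -l app=ridgway-bot --timeout=120s",
        f"kubectl -n {namespace} cp ./data "
        f"$(kubectl -n {namespace} get pod -l app=ridgway-bot -o name | cut -d/ -f2):/data",
    ]

for cmd in rollout_commands():
    print(cmd)
```

The `--dry-run=client -o yaml | kubectl apply` idiom keeps the namespace and config map steps idempotent, so the script can be rerun safely after config changes.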
**Integrating with OpenWebUI for RAG**
Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I've created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI's endpoints:
```yaml
base_url: 'https://<my-openwebui-endpoint>/api/'
api_key: <my-openwebui-api-key>
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
```
This setup lets me access RAG capabilities directly within Matrix chats, all without writing a single line of code. It's like having my very own AI research assistant right there in the chatroom.
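Under the hood, baibot is just talking to OpenWebUI's OpenAI-compatible chat completions endpoint. A minimal sketch of the kind of request that involves (the endpoint and key placeholders come from the config above; the helper function is mine, not part of baibot):

```python
# Hypothetical sketch of an OpenAI-compatible chat request to OpenWebUI.
# The endpoint and API key are placeholders, not real values.
import json
from urllib.parse import urljoin

def build_chat_request(base_url: str, api_key: str, model_id: str, message: str):
    """Return (url, headers, body) for a chat completion call."""
    url = urljoin(base_url, "chat/completions")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": message}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://<my-openwebui-endpoint>/api/",
    "<my-openwebui-api-key>",
    "andrew-knowledge-base",
    "What do you know about my home lab?",
)
print(url)
```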
**Future Steps and Challenges**
Now that baibot is up and running, I'm already thinking about expanding its use cases. The next step might be integrating it with **Home Assistant** for alarm notifications or other automation tasks. However, my current setup uses an older gaming PC, which struggles with computational demands. This could lead to a rearchitecting effort—perhaps moving to a dedicated server or optimizing the hardware.
**Conclusion**
Baibot has been a fantastic tool for experimenting with AI integrations in Matrix. By leveraging existing infrastructure and OpenWebUI's capabilities, I've achieved full control over data privacy and customization. The next frontier is expanding these integrations into more practical applications like home automation. Stay tuned for updates!
**Final Thoughts**
It's incredibly rewarding to see how open-source projects like baibot democratize AI access. Whether you're a hobbyist or a pro, having tools that let you run LLMs locally without vendor lock-in is game-changing. If you're interested in diving deeper, check out the [baibot GitHub](https://github.com/etkecc/baibot) and explore its documentation. Happy coding!