matrix_ai_integrations_with_baibot #14

Merged
armistace merged 4 commits from matrix_ai_integrations_with_baibot into master 2025-06-30 17:18:13 +10:00
Showing only changes of commit ddf9ccd9aa


# Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs

Alright, so I've been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you're anything like me, you run your own Matrix server to keep things private and under control, whether it's for family communication or interacting with the tech community. But one day, I thought, “Why not have my LLMs right where I'm already managing everything else?” Enter baibot.

**Setting Up My Own Matrix Server with baibot**

First off, I've got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot's GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it's lighthearted yet responsive:
```yaml
base_url: http://192.168.178.45:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```
This gives me precise control over the bot's behavior, ensuring each instance in Matrix rooms behaves exactly as intended.
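Since a typo in that file only shows up later as odd bot behaviour, a quick pre-flight check can help. This is a hypothetical helper, not part of baibot; it just verifies that a parsed config carries the fields the sample Ollama provider config uses:

```python
# Hypothetical pre-flight check (not part of baibot): verify a parsed
# agent config has the fields the sample Ollama config uses, so a typo
# fails fast instead of surfacing as silent bot misbehaviour.

REQUIRED_TEXT_GENERATION_KEYS = {
    "model_id", "prompt", "temperature",
    "max_response_tokens", "max_context_tokens",
}

def validate_agent_config(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks sane."""
    problems = []
    if not str(cfg.get("base_url", "")).startswith(("http://", "https://")):
        problems.append("base_url missing or not an http(s) URL")
    text_gen = cfg.get("text_generation") or {}
    missing = REQUIRED_TEXT_GENERATION_KEYS - text_gen.keys()
    if missing:
        problems.append(f"text_generation is missing: {sorted(missing)}")
    return problems

# Mirrors the YAML above as a plain dict (e.g. after yaml.safe_load).
cfg = {
    "base_url": "http://192.168.178.45:11434/v1",
    "text_generation": {
        "model_id": "gemma3:latest",
        "prompt": "You are a lighthearted bot...",
        "temperature": 0.9,
        "max_response_tokens": 4096,
        "max_context_tokens": 128000,
    },
}
print(validate_agent_config(cfg))  # []
```

Running it before `kubectl create configmap` catches the boring mistakes early.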
**Deploying to Kubernetes**

To ensure reliability, I used Kubernetes. Here's a breakdown of the key files:

* **Deployment.yaml**: Manages pod replicas, security contexts, and volume mounts for persistence.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
  name: ridgway-bot
spec:
  replicas: 1
  strategy:
    type: Recreate  # not shown in the diff; Recreate avoids two pods sharing the volume during updates
  selector:
    matchLabels:
      app: ridgway-bot
  template:
    metadata:
      labels:
        app: ridgway-bot
    spec:
      containers:
      - image: ghcr.io/etkecc/baibot:v1.7.4
        name: baibot
        volumeMounts:
        - name: ridgway-bot-cm
          mountPath: /app/config.yml
          subPath: config.yml
        - name: ridgway-bot-pv
          mountPath: /data
      volumes:
      - name: ridgway-bot-cm
        configMap:
          name: ridgway-bot
      - name: ridgway-bot-pv
        persistentVolumeClaim:
          claimName: ridgway-bot-storage
```
* **Persistent Volume Claim (PVC)**: ensures persistent data storage for baibot.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 500Mi
```
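As an aside, quantities like `500Mi` use Kubernetes' binary suffixes (powers of 1024, not 1000). A toy converter, not part of any K8s tooling, makes the arithmetic explicit:

```python
# Toy helper (not part of kubectl or any K8s library): convert the
# binary quantity suffixes Kubernetes uses (Ki, Mi, Gi, Ti) into bytes.

_BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(quantity: str) -> int:
    for suffix, multiplier in _BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * multiplier
    return int(quantity)  # a bare integer already means bytes

print(quantity_to_bytes("500Mi"))  # 524288000
```

So the PVC above requests a little over 500 MB, which is plenty for baibot's persistence directory.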
The deployment script handles namespace creation, config maps, PVCs, and waits for the pod to be ready before copying data:

```bash
kubectl delete namespace ridgway-bot
kubectl create namespace ridgway-bot
kubectl -n ridgway-bot create configmap ridgway-bot --from-file=config.yml=./config.yml
kubectl apply -f pvc-ridgway-bot.yaml
kubectl apply -f deployment.yaml
# Wait for the pod to come up instead of guessing with a fixed sleep
kubectl -n ridgway-bot wait --for=condition=Ready pod -l app=ridgway-bot --timeout=180s
# Copy the local data directory into the pod (kubectl cp takes one source, so no data/* glob)
kubectl cp data "$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n ridgway-bot | head -n 1)":/data -n ridgway-bot
```

This ensures the bot starts correctly and persists its data across restarts.

**Integrating with OpenWebUI for RAG**

Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I've created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI's endpoints:
```yaml
base_url: 'https://ai.aridgwayweb.com/api/'
api_key: <my-openwebui-api-key>
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
```
This setup lets me access RAG capabilities directly within Matrix chats, all without writing a single line of code. It's like having my very own AI research assistant right there in the chatroom.
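Under the hood, this is just an OpenAI-style chat completion request. A hedged sketch of what that call looks like, assuming OpenWebUI's OpenAI-compatible `chat/completions` route (check your instance's docs; the URL and key are placeholders):

```python
import json
import urllib.request

# Sketch of the OpenAI-style request behind the config above. Assumes an
# OpenAI-compatible chat/completions route under base_url; the URL, key,
# and question below are placeholders, not working credentials.

def build_chat_payload(model_id: str, system_prompt: str, user_message: str) -> dict:
    # The system prompt plays the same role as baibot's `prompt` config key.
    return {
        "model": model_id,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

def ask(base_url: str, api_key: str, payload: dict) -> str:
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    payload = build_chat_payload(
        "andrew-knowledge-base",
        "Your name is Rodergast...",
        "What do you know about my projects?",
    )
    # Uncomment with a real endpoint and key:
    # print(ask("https://ai.aridgwayweb.com/api/", "<my-openwebui-api-key>", payload))
```

Baibot handles all of this for you; the sketch just shows why any OpenAI-compatible backend slots in behind the same config keys.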
**Future Steps and Challenges**

Now that baibot is up and running, I'm already thinking about expanding its use cases. The next step might be integrating it with **Home Assistant** for alarm notifications or other automation tasks. However, my current setup uses an older gaming PC, which struggles with the computational demands. This could lead to a rearchitecting effort, perhaps moving to a dedicated server or optimizing the hardware.
**Conclusion**

Baibot has been a fantastic tool for experimenting with AI integrations in Matrix. By leveraging existing infrastructure and OpenWebUI's capabilities, I've achieved full control over data privacy and customization. The next frontier is expanding these integrations into more practical applications like home automation. Stay tuned for updates!
**Final Thoughts**

It's incredibly rewarding to see how open-source projects like baibot democratize AI access. Whether you're a hobbyist or a pro, having tools that let you run LLMs locally without vendor lock-in is game-changing. If you're interested in diving deeper, check out the [baibot GitHub](https://github.com/etkecc/baibot) and explore its documentation. Happy coding!