# Matrix AI Integrations with baibot: A Personal Journey
**Introduction**

Hey there, fellow tech enthusiasts! I'm thrilled to share my latest adventure in integrating Artificial Intelligence into my self-hosted Matrix server using **baibot** (<https://github.com/etkecc/baibot>), a locally deployable bot for bringing Large Language Models (LLMs) into Matrix chatrooms. This setup not only enhances privacy but also allows precise control over interactions. Let's dive into the details of how this works and what it means for my daily Matrix experiences.
**The Setup: My Matrix Server**
I've been running a self-hosted Matrix server for years, using it for both personal notifications (like package deliveries) and community chats with family and friends. Baibot integrates seamlessly here, keeping every interaction inside my own network and under my control.
**Why baibot?**
* **Local Control**: Full data sovereignty—no third-party servers touch sensitive information.
* **Flexibility**: Customizable bots per room using Element's interface.
* **OpenWebUI Integration**: Connects to my existing AI infrastructure for RAG (Retrieval-Augmented Generation) capabilities.
**Technical Breakdown**
**1. Configuring the Bot**
Baibot uses a `config.yml` file to define parameters like the LLM model, prompt, temperature, and token limits. The sample provider configs in the repo (e.g., <https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml>) are a good starting point. Here's an example configuration:
```yaml
base_url: http://192.168.178.45:11434/v1
text_generation:
  # ...model_id, prompt, temperature, and max_response_tokens settings...
  max_context_tokens: 128000
```
This config points to my local Ollama server, ensuring all interactions remain private.
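Before pointing baibot at it, it's worth confirming the endpoint actually answers. A minimal sanity check, assuming Ollama's OpenAI-compatible API on the host above (`llama3` is just a stand-in for whatever `model_id` your config names):

```bash
# List the models Ollama exposes over its OpenAI-compatible endpoint.
curl -s http://192.168.178.45:11434/v1/models

# Fire a one-off completion to confirm generation works end to end
# ("llama3" is a stand-in; use the model_id from your config.yml).
curl -s http://192.168.178.45:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "ping"}]}'
```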
**2. Deploying to Kubernetes**
To run baibot alongside other services in my K8s cluster, I created a deployment YAML and Persistent Volume Claim (PVC) for data persistence:
**Deployment.yaml**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
  name: ridgway-bot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ridgway-bot
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ridgway-bot
    spec:
      containers:
      - name: baibot
        image: ghcr.io/etkecc/baibot:v1.7.4
        env:
        - name: BAIBOT_PERSISTENCE_DATA_DIR_PATH
          value: /data
        volumeMounts:
        - name: ridgway-bot-cm
          mountPath: /app/config.yml
          subPath: config.yml
        - name: ridgway-bot-pv
          mountPath: /data
      volumes:
      - name: ridgway-bot-cm
        configMap:
          name: ridgway-bot
      - name: ridgway-bot-pv
        persistentVolumeClaim:
          claimName: ridgway-bot-storage
```
**pvc-ridgway-bot.yaml**
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 500Mi
```
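Once the deployment script in the next section applies this manifest, the claim should come up `Bound`. A quick way to confirm, assuming the `ridgway-bot` namespace created by that script:

```bash
# The claim should report STATUS "Bound" once Longhorn provisions the volume.
kubectl -n ridgway-bot get pvc ridgway-bot-storage
```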
**3. Deployment Script**
A simple script handles the deployment process:
```bash
kubectl delete namespace ridgway-bot --ignore-not-found
kubectl create namespace ridgway-bot
kubectl -n ridgway-bot create configmap ridgway-bot --from-file=config.yml=./config.yml
kubectl -n ridgway-bot apply -f pvc-ridgway-bot.yaml
kubectl -n ridgway-bot apply -f Deployment.yaml
sleep 90
kubectl cp data "$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n ridgway-bot | head -n 1)":/data -n ridgway-bot
```
This ensures the bot starts correctly and persists data. Once the pod is up, I connect the bot to Matrix rooms via Element's admin interface and fine-tune per-room settings such as temperature and prompts.
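Rather than trusting the `sleep`, a couple of quick checks confirm the rollout actually succeeded; a minimal sketch using the deployment name from the manifest above:

```bash
# Block until the deployment reports ready (or time out after two minutes).
kubectl -n ridgway-bot rollout status deployment/ridgway-bot --timeout=120s

# Tail the logs to confirm the bot parsed config.yml and reached the homeserver.
kubectl -n ridgway-bot logs deployment/ridgway-bot --tail=20
```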
**Integration with OpenWebUI**
For rooms requiring RAG, baibot uses OpenWebUI's API:
```yaml
base_url: 'https://ai.aridgwayweb.com/api/'
api_key: <my-openwebui-api-key>
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
  temperature: 0.7
  max_response_tokens: 4096
  max_context_tokens: 128000
```
This configuration pulls context from my local knowledge base, ensuring relevant responses.
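To verify the API key and `model_id` before the bot relies on them, you can call OpenWebUI directly. A minimal sketch, assuming OpenWebUI's OpenAI-compatible `chat/completions` endpoint and an `$OPENWEBUI_API_KEY` environment variable holding the key from the config:

```bash
# Query the knowledge-base model straight through OpenWebUI's API.
# $OPENWEBUI_API_KEY is a stand-in for the key used in config.yml.
curl -s https://ai.aridgwayweb.com/api/chat/completions \
  -H "Authorization: Bearer $OPENWEBUI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"model": "andrew-knowledge-base", "messages": [{"role": "user", "content": "ping"}]}'
```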
**Challenges and Future Plans**
While the setup works smoothly, hardware limitations are a concern. My current server is a 10-year-old machine that struggles with AI workloads, which also holds back plans like integrating baibot with Home Assistant for alarm notifications. Upgrades are planned:
* **Hardware Upgrade**: Transitioning to a more powerful server (e.g., my gaming PC).
* **Blade Server Exploration**: For scalable performance.
* **Future Blog Post**: Detailing the hardware architecture needed for modern AI workloads.
Stay tuned for updates!
**Conclusion**
Baibot revolutionizes how I interact with AI on Matrix. It's not just about running bots; it's about taking control of your data and interactions. Whether you're a Matrix server admin or simply interested in local AI integrations, this setup offers flexibility and security. Stay tuned for more updates!