From 24b5477e4fdf85dada7e0af5a91b1f332e7f3a5c Mon Sep 17 00:00:00 2001
From: Blog Creator
Date: Fri, 27 Jun 2025 06:22:56 +0000
Subject: [PATCH 1/4] 'LLM Matrix bot integration deployed'

---
 .../matrix_ai_integrations_with_baibot.md | 107 ++++++++++++++++++
 1 file changed, 107 insertions(+)
 create mode 100644 src/content/matrix_ai_integrations_with_baibot.md

diff --git a/src/content/matrix_ai_integrations_with_baibot.md b/src/content/matrix_ai_integrations_with_baibot.md
new file mode 100644
index 0000000..f3c90bd
--- /dev/null
+++ b/src/content/matrix_ai_integrations_with_baibot.md
@@ -0,0 +1,107 @@
# Matrix AI Integrations with baibot

I've been experimenting with **baibot** (https://github.com/etkecc/baibot), a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. This setup allows me to interact with LLMs directly within my own Matrix server, enhancing both personal and community communication.

### Key Setup Steps

1. **Configuration**:
   - Use the sample provider config (e.g., https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml) to define LLM models, prompts, temperatures, and token limits.
2. **Kubernetes Deployment**:
   - Deploy using a custom `Deployment.yaml` and PersistentVolumeClaim (PVC) for storage persistence.
   - Example `Deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    spec:
      containers:
      - env:
        - name: BAIBOT_PERSISTENCE_DATA_DIR_PATH
          value: /data
        image: ghcr.io/etkecc/baibot:v1.7.4
        volumeMounts:
        - name: ridgway-bot-cm
          mountPath: /app/config.yml
          subPath: config.yml
        - name: ridgway-bot-pv
          mountPath: /data
      volumes:
      - name: ridgway-bot-cm
        configMap:
          name: ridgway-bot
      - name: ridgway-bot-pv
        persistentVolumeClaim:
          claimName: ridgway-bot-storage
```

   - PVC setup (`pvc-ridgway-bot.yaml`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```

3. **Kubernetes Deployment Script**:

```sh
kubectl delete namespace ridgway-bot
kubectl create namespace ridgway-bot
kubectl -n ridgway-bot create cm ridgway-bot --from-file=config.yml=./config.yml
kubectl apply -f pvc-ridgway-bot.yaml
kubectl apply -f Deployment.yaml
sleep 90 && kubectl cp data/* $(kubectl get pods -o custom-columns=":metadata.name" -n ridgway_bot | head -n1):/data
```

4. **Post-Deployment**:
   - Connect the bot to Matrix rooms via Element's admin interface.
   - Fine-tune configurations (e.g., temperature, prompts) for specific rooms.

### Example Configurations

#### Ollama Integration:

```yaml
base_url: http://192.168.178.45:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```

#### Openwebui Integration (RAG):

```yaml
base_url: https://ai.aridgwayweb.com/api/
api_key:
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
  temperature: 0.7
  max_response_tokens: 4096
  max_context_tokens: 128000
```

### Benefits of Local Deployment

- **Full Control**: Data privacy and compliance without third-party dependencies.
- **Scalability**: Kubernetes enables easy scaling as needed.
- **Flexibility**: Combine with services like openwebui for rich contextual responses.

### Future Plans

Next, I aim to integrate baibot with Home Assistant for alarm notifications. However, current hardware limitations (a 10-year-old PC) may necessitate a more powerful setup in the future.

Stay tuned for updates!
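As a teaser for that Home Assistant idea, here is a minimal sketch of what the hookup could look like, assuming Home Assistant's built-in Matrix integration and notify platform. Everything below is hypothetical: the homeserver URL, credentials, room alias, and entity IDs are placeholders, and none of it is deployed yet.

```yaml
# configuration.yaml -- hypothetical sketch, not deployed yet.
# Homeserver, credentials, and room alias are placeholders.
matrix:
  homeserver: https://matrix.example.com
  username: "@homeassistant:example.com"
  password: !secret matrix_bot_password

notify:
  - platform: matrix
    name: matrix_alarms
    default_room: "#alarms:example.com"

# automations.yaml -- push alarm state changes into the Matrix room.
automation:
  - alias: "Alarm triggered to Matrix"
    trigger:
      - platform: state
        entity_id: alarm_control_panel.home_alarm
        to: "triggered"
    action:
      - service: notify.matrix_alarms
        data:
          message: "Home alarm triggered!"
```

The nice part of this shape is that the notifier and baibot can share the same Matrix room, so alarm alerts and the LLM would live in one place.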

### Conclusion

baibot enhances Matrix interactions by enabling direct LLM integration, offering seamless control over room-specific behaviors. Combining local deployment with RAG capabilities via openwebui shows the potential of a DIY tech stack.

Explore further and share your experiences! 🚀🤖
--
2.39.5


From 7e91a5d28fa27e643a2fd7df9f3e0a1db0b111cd Mon Sep 17 00:00:00 2001
From: Blog Creator
Date: Mon, 30 Jun 2025 04:04:35 +0000
Subject: [PATCH 2/4] 'Integrate, Matrix, AI, Baibot, Locally'

---
 .../matrix_ai_integrations_with_baibot.md | 171 ++++++++++--------
 1 file changed, 96 insertions(+), 75 deletions(-)

diff --git a/src/content/matrix_ai_integrations_with_baibot.md b/src/content/matrix_ai_integrations_with_baibot.md
index f3c90bd..c3bd777 100644
--- a/src/content/matrix_ai_integrations_with_baibot.md
+++ b/src/content/matrix_ai_integrations_with_baibot.md
@@ -1,69 +1,24 @@
-# Matrix AI Integrations with baibot
+# Matrix AI Integrations with baibot: A Personal Journey

-I've been experimenting with **baibot** (https://github.com/etkecc/baibot), a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. This setup allows me to interact with LLMs directly within my own Matrix server, enhancing both personal and community communication.
+**Introduction**

-### Key Setup Steps
+Hey there, fellow tech enthusiasts! I’m thrilled to share my latest adventure in integrating Artificial Intelligence into my self-hosted Matrix server using **baibot**, a locally deployable bot for LLMs. This setup not only enhances privacy but also allows precise control over interactions. Let’s dive into the details of how this works and what it means for my daily Matrix experiences.

-1. **Configuration**:
-   - Use the sample provider config (e.g., https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml) to define LLM models, prompts, temperatures, and token limits.
-2. **Kubernetes Deployment**:
-   - Deploy using a custom `Deployment.yaml` and PersistentVolumeClaim (PVC) for storage persistence.
-   - Example `Deployment.yaml`:
+**The Setup: My Matrix Server**

-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  ...
-spec:
-  template:
-    spec:
-      containers:
-      - env:
-        - name: BAIBOT_PERSISTENCE_DATA_DIR_PATH
-          value: /data
-        image: ghcr.io/etkecc/baibot:v1.7.4
-        volumeMounts:
-        - name: ridgway-bot-cm
-          mountPath: /app/config.yml
-          subPath: config.yml
-        - name: ridgway-bot-pv
-          mountPath: /data
-      volumes:
-      - name: ridgway-bot-cm
-        configMap:
-          name: ridgway-bot
-      - name: ridgway-bot-pv
-        persistentVolumeClaim:
-          claimName: ridgway-bot-storage
-```
+I’ve been running a self-hosted Matrix server for years, using it for both personal notifications (like package deliveries) and community chats with family and friends. Baibot integrates seamlessly here, keeping all interactions within my network for better security and control.

-   - PVC setup (`pvc-ridgway-bot.yaml`):
+**Why baibot?**

-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: ridgway-bot-storage
-spec:
-  storageClassName: longhorn
-  accessModes:
-    - ReadWriteMany
-  resources:
-    requests:
-      storage: 500Mi
-```
+* **Local Control**: Full data sovereignty—no third-party servers touch sensitive information.
+* **Flexibility**: Customizable bots per room using Element’s interface.
+* **OpenWebUI Integration**: Connects to my existing AI infrastructure for RAG (Retrieval-Augmented Generation) capabilities.

+**Technical Breakdown**

-3. **Kubernetes Deployment Script**:
-
-```sh
-kubectl delete namespace ridgway-bot
-kubectl create namespace ridgway-bot
-kubectl -n ridgway-bot create cm ridgway-bot --from-file=config.yml=./config.yml
-kubectl apply -f pvc-ridgway-bot.yaml
-kubectl apply -f Deployment.yaml
-sleep 90 && kubectl cp data/* $(kubectl get pods -o custom-columns=":metadata.name" -n ridgway_bot | head -n1):/data
-```

-4. **Post-Deployment**:
-   - Connect the bot to Matrix rooms via Element’s admin interface.
-   - Fine-tune configurations (e.g., temperature, prompts) for specific rooms.

-### Example Configurations
+**1. Configuring the Bot**

-#### Ollama Integration:
+Baibot uses a `config.yml` file to define parameters like the LLM model, prompt, and token limits. Here’s an example configuration:

```yaml
base_url: http://192.168.178.45:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```

-#### Openwebui Integration (RAG):
+This config points to my local Ollama server, ensuring all interactions remain private.
+
+**2. Deploying to Kubernetes**
+
+To run baibot alongside other services in my K8s cluster, I created a deployment YAML and Persistent Volume Claim (PVC) for data persistence:
+
+**Deployment.yaml**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
name: ridgway-bot
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
      - image: ghcr.io/etkecc/baibot:v1.7.4
        env:
        - name: BAIBOT_PERSISTENCE_DATA_DIR_PATH
          value: /data
        volumeMounts:
        - name: ridgway-bot-cm
          mountPath: /app/config.yml
          subPath: config.yml
        - name: ridgway-bot-pv
          mountPath: /data
      volumes:
      - name: ridgway-bot-cm
        configMap:
          name: ridgway-bot
      persistentVolumeClaim:
        claimName: ridgway-bot-storage
```

**PVC-ridgway-bot.yaml**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 500Mi
```

**3. Deployment Script**

A simple script handles the deployment process:

```bash
kubectl delete namespace ridgway-bot
kubectl create namespace ridgway-bot
kubectl -n ridgway-bot create configmap ridgway-bot --from-file=config.yml=./config.yml
kubectl apply -f pvc-ridgway-bot.yaml
kubectl apply -f Deployment.yaml
sleep 90
kubectl cp data/* $(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n ridgway-bot | head -n 1):/data -n ridgway-bot
```

This ensures the bot starts correctly and persists data.

**Integration with OpenWebUI**

For rooms requiring RAG, baibot uses OpenWebUI’s API:

```yaml
base_url: 'https://ai.aridgwayweb.com/api/'
api_key:
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
  temperature: 0.7
  max_response_tokens: 4096
  max_context_tokens: 128000
```

-### Benefits of Local Deployment
+This configuration pulls context from my local knowledge base, ensuring relevant responses.

-- **Full Control**: Data privacy and compliance without third-party dependencies.
-- **Scalability**: Kubernetes enables easy scaling as needed.
-- **Flexibility**: Combine with services like openwebui for rich contextual responses.
+**Challenges and Future Plans**

-### Future Plans

While the setup works smoothly, hardware limitations are a concern. My current server is a 10-year-old machine struggling with AI demands. Upgrades are planned:

-Next, I aim to integrate baibot with Home Assistant for alarm notifications. However, current hardware limitations (a 10-year-old PC) may necessitate a more powerful setup in the future.
+* **Hardware Upgrade**: Transitioning to a more powerful server (e.g., my gaming PC).
+* **Blade Server Exploration**: For scalable performance.
+* **Future Blog Post**: Detailing the hardware architecture needed for modern AI workloads.

-Stay tuned for updates!
+**Conclusion**

-### Conclusion
-
-baibot enhances Matrix interactions by enabling direct LLM integration, offering seamless control over room-specific behaviors. Combining local deployment with RAG capabilities via openwebui shows the potential of a DIY tech stack.
-
-Explore further and share your experiences! 🚀🤖
+Baibot revolutionizes how I interact with AI on Matrix. It’s not just about running bots; it’s about taking control of your data and interactions. Whether you’re a Matrix server admin or interested in local AI integrations, this setup offers flexibility and security. Stay tuned for more updates!
--
2.39.5


From ddf9ccd9aa538d0473e802a0cc4bc7ec83bd5283 Mon Sep 17 00:00:00 2001
From: Blog Creator
Date: Mon, 30 Jun 2025 07:00:05 +0000
Subject: [PATCH 3/4] 'LLMs, Matrix, Baibot, Home Automation, Fun'

---
 .../matrix_ai_integrations_with_baibot.md | 88 ++++++------------
 1 file changed, 29 insertions(+), 59 deletions(-)

diff --git a/src/content/matrix_ai_integrations_with_baibot.md b/src/content/matrix_ai_integrations_with_baibot.md
index c3bd777..3067f4a 100644
--- a/src/content/matrix_ai_integrations_with_baibot.md
+++ b/src/content/matrix_ai_integrations_with_baibot.md
@@ -1,42 +1,28 @@
-# Matrix AI Integrations with baibot: A Personal Journey
+**Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs**

-**Introduction**
+Alright, so I’ve been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you’re anything like me, you run your own Matrix server to keep things private and under control—whether it’s for family communication or interacting with the tech community. But one day, I thought, “Why not have my LLMs right where I’m already managing everything else?” Enter baibot.

-Hey there, fellow tech enthusiasts! I’m thrilled to share my latest adventure in integrating Artificial Intelligence into my self-hosted Matrix server using **baibot**, a locally deployable bot for LLMs. This setup not only enhances privacy but also allows precise control over interactions. Let’s dive into the details of how this works and what it means for my daily Matrix experiences.
+**Setting Up My Own Matrix Server with baibot**

-**The Setup: My Matrix Server**
-
-I’ve been running a self-hosted Matrix server for years, using it for both personal notifications (like package deliveries) and community chats with family and friends. Baibot integrates seamlessly here, keeping all interactions within my network for better security and control.
-
-**Why baibot?**
-
-* **Local Control**: Full data sovereignty—no third-party servers touch sensitive information.
-* **Flexibility**: Customizable bots per room using Element’s interface.
-* **OpenWebUI Integration**: Connects to my existing AI infrastructure for RAG (Retrieval-Augmented Generation) capabilities.
-
-**Technical Breakdown**
-
-**1. Configuring the Bot**
-
-Baibot uses a `config.yml` file to define parameters like the LLM model, prompt, and token limits. Here’s an example configuration:
+First off, I’ve got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot’s GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it’s lighthearted yet responsive:

```yaml
base_url: http://192.168.178.45:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```

-This config points to my local Ollama server, ensuring all interactions remain private.
+This gives me precise control over the bot’s behavior, ensuring each instance in Matrix rooms behaves exactly as intended.

-**2. Deploying to Kubernetes**
+**Deploying to Kubernetes**

-To run baibot alongside other services in my K8s cluster, I created a deployment YAML and Persistent Volume Claim (PVC) for data persistence:
+To ensure reliability, I used Kubernetes. Here's a breakdown of the key files:

-**Deployment.yaml**
+* **Deployment.yaml**: Manages pod replicas and volume mounts for persistence.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
-name: ridgway-bot
+  name: ridgway-bot
spec:
  replicas: 1
+  selector:
+    matchLabels:
+      app: ridgway-bot
  strategy:
    type: Recreate
  template:
+    metadata:
+      labels:
+        app: ridgway-bot
    spec:
      containers:
      - image: ghcr.io/etkecc/baibot:v1.7.4
-        env:
-        - name: BAIBOT_PERSISTENCE_DATA_DIR_PATH
-          value: /data
+        name: baibot
        volumeMounts:
        - name: ridgway-bot-cm
          mountPath: /app/config.yml
          subPath: config.yml
        - name: ridgway-bot-pv
          mountPath: /data
      volumes:
      - name: ridgway-bot-cm
        configMap:
          name: ridgway-bot
-      persistentVolumeClaim:
-        claimName: ridgway-bot-storage
+      - name: ridgway-bot-pv
+        persistentVolumeClaim:
+          claimName: ridgway-bot-storage
```

-**PVC-ridgway-bot.yaml**
+* **Persistent Volume Claim (PVC)** ensures data storage for baibot.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
    - ReadWriteMany
-  storageClassName: longhorn
  resources:
    requests:
      storage: 500Mi
```

-**3. Deployment Script**
+The deployment script handles namespace creation, config maps, PVCs, and waits for the pod to be ready before copying data.

-A simple script handles the deployment process:
+**Integrating with OpenWebUI for RAG**

-```bash
-kubectl delete namespace ridgway-bot
-kubectl create namespace ridgway-bot
-kubectl -n ridgway-bot create configmap ridgway-bot --from-file=config.yml=./config.yml
-kubectl apply -f pvc-ridgway-bot.yaml
-kubectl apply -f Deployment.yaml
-sleep 90
-kubectl cp data/* $(kubectl get pods --no-headers -o custom-columns=":metadata.name" -n ridgway-bot | head -n 1):/data -n ridgway-bot
-```
-
-This ensures the bot starts correctly and persists data.
-
-**Integration with OpenWebUI**
-
-For rooms requiring RAG, baibot uses OpenWebUI’s API:
+Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I’ve created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI’s endpoints:

```yaml
base_url: 'https://ai.aridgwayweb.com/api/'
api_key:
text_generation:
  model_id: andrew-knowledge-base
+  prompt: 'Your name is Rodergast...'
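+  # Hypothetical tuning knobs, shown for illustration only; an earlier
+  # revision of this config set these same values. Uncomment to adjust
+  # response style and length:
+  # temperature: 0.7
+  # max_response_tokens: 4096
+  # max_context_tokens: 128000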
```

-This configuration pulls context from my local knowledge base, ensuring relevant responses.
+This setup lets me access RAG capabilities directly within Matrix chats, all without writing a single line of code. It’s like having my very own AI research assistant right there in the chatroom.

-**Challenges and Future Plans**
+**Future Steps and Challenges**

-While the setup works smoothly, hardware limitations are a concern. My current server is a 10-year-old machine struggling with AI demands. Upgrades are planned:
-
-* **Hardware Upgrade**: Transitioning to a more powerful server (e.g., my gaming PC).
-* **Blade Server Exploration**: For scalable performance.
-* **Future Blog Post**: Detailing the hardware architecture needed for modern AI workloads.
+Now that baibot is up and running, I’m already thinking about expanding its use cases. The next step might be integrating it with **Home Assistant** for alarm notifications or other automation tasks. However, my current setup uses an older gaming PC, which struggles with computational demands. This could lead to a rearchitecting effort—perhaps moving to a dedicated server or optimizing the hardware.

**Conclusion**

-Baibot revolutionizes how I interact with AI on Matrix. It’s not just about running bots; it’s about taking control of your data and interactions. Whether you’re a Matrix server admin or interested in local AI integrations, this setup offers flexibility and security. Stay tuned for more updates!
+Baibot has been a fantastic tool for experimenting with AI integrations in Matrix. By leveraging existing infrastructure and OpenWebUI’s capabilities, I’ve achieved full control over data privacy and customization. The next frontier is expanding these integrations into more practical applications like home automation. Stay tuned for updates!
+
+**Final Thoughts**
+
+It’s incredibly rewarding to see how open-source projects like baibot democratize AI access. Whether you’re a hobbyist or a pro, having tools that let you run LLMs locally without vendor lock-in is game-changing. If you’re interested in diving deeper, check out the [baibot GitHub](https://github.com/etkecc/baibot) and explore its documentation. Happy coding!
\ No newline at end of file
--
2.39.5


From ace2b3e98bea886f4430888019bc442049e7a1de Mon Sep 17 00:00:00 2001
From: armistace
Date: Mon, 30 Jun 2025 17:17:46 +1000
Subject: [PATCH 4/4] Update with Human Interactions

---
 .../matrix_ai_integrations_with_baibot.md | 22 ++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/src/content/matrix_ai_integrations_with_baibot.md b/src/content/matrix_ai_integrations_with_baibot.md
index 3067f4a..63ca4a5 100644
--- a/src/content/matrix_ai_integrations_with_baibot.md
+++ b/src/content/matrix_ai_integrations_with_baibot.md
@@ -1,4 +1,20 @@
-**Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs**
+Title: Integrating Ollama and Matrix with Baibot
+Date: 2025-06-25 20:00
+Modified: 2025-06-30 08:00
+Category: AI, Data, Matrix
+Tags: ai, kubernetes, matrix
+Slug: ollama-matrix-integration
+Authors: Andrew Ridgway
+Summary: Integrating a local LLM with a personal Matrix server: all the fun AND data sovereignty
+
+### _Human Introduction_
+I’ve been experimenting with AI and integrations. I’m particularly excited by the idea of using LLMs to integrate between different systems (stay tuned for a blog on [MCP](https://modelcontextprotocol.io/introduction) at some point in the future!)
+Below I’ve thrown together some notes and had AI build a very quick how-to on a cool little project that took next to no time to put together, and that I thought might be interesting for the group. Enjoy!
+
+
+# Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs

Alright, so I’ve been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you’re anything like me, you run your own Matrix server to keep things private and under control—whether it’s for family communication or interacting with the tech community. But one day, I thought, “Why not have my LLMs right where I’m already managing everything else?” Enter baibot.

**Setting Up My Own Matrix Server with baibot**

@@ -7,7 +23,7 @@
First off, I’ve got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot’s GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it’s lighthearted yet responsive:

```yaml
-base_url: http://192.168.178.45:11434/v1
+base_url: http://:11434/v1
 text_generation:
   model_id: gemma3:latest
   prompt: 'You are a lighthearted bot...'
   temperature: 0.9
   max_response_tokens: 4096
   max_context_tokens: 128000
```

@@ -76,7 +92,7 @@
Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I’ve created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI’s endpoints:

```yaml
-base_url: 'https://ai.aridgwayweb.com/api/'
+base_url: 'https:///api/'
 api_key:
 text_generation:
   model_id: andrew-knowledge-base
--
2.39.5