Update with Human Interactions

This commit is contained in:
armistace 2025-06-30 17:17:46 +10:00
parent ddf9ccd9aa
commit ace2b3e98b


@@ -1,4 +1,20 @@
Title: Integrating Ollama and Matrix with baibot
Date: 2025-06-25 20:00
Modified: 2025-06-30 08:00
Category: AI, Data, Matrix
Tags: ai, kubernetes, matrix
Slug: ollama-matrix-integration
Authors: Andrew Ridgway
Summary: Integrating a local LLM with a personal Matrix server: all the fun AND data sovereignty
### _Human Introduction_
I've been experimenting with AI and integrations, and I'm particularly excited by the idea of using LLMs to integrate between different systems (stay tuned for a blog on [MCP](https://modelcontextprotocol.io/introduction) at some point in the future!)
Below I've thrown together some notes and had AI build a very quick how-to on a cool little project that took next to no time to put together, and that I thought might be interesting for the group. Enjoy!
# Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs
Alright, so I've been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you're anything like me, you run your own Matrix server to keep things private and under control—whether it's for family communication or interacting with the tech community. But one day, I thought, “Why not have my LLMs right where I'm already managing everything else?” Enter baibot.
@@ -7,7 +23,7 @@ Alright, so I've been messing around with this cool project called **baibot**,
First off, I've got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot's GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it's lighthearted yet responsive:
```yaml
base_url: http://<my_ollama_ip>:11434/v1
text_generation:
model_id: gemma3:latest
prompt: 'You are a lighthearted bot...'
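# Note: the fields below are an illustrative sketch, not from the original
# post. They follow the shape of baibot's sample provider configs, so
# double-check the field names against the docs before relying on them.
temperature: 1.0
max_response_tokens: 4096
max_context_tokens: 128000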
@@ -76,7 +92,7 @@ The deployment script handles namespace creation, config maps, PVCs, and waits f
Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I've created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI's endpoints:
```yaml
base_url: 'https://<my-openwebui-endpoint>/api/'
api_key: <my-openwebui-api-key>
text_generation:
model_id: andrew-knowledge-base
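# Note: an illustrative addition, not from the original post. baibot's
# OpenAI-compatible provider configs also take a prompt, so a RAG-backed
# model can still be given a persona; verify the field name in the docs.
prompt: 'You are a helpful assistant. Answer from the knowledge base where relevant.'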