Compare commits


No commits in common. "master" and "fix-socials" have entirely different histories.

13 changed files with 39 additions and 931 deletions

View File

@@ -1,61 +1,42 @@
 name: Build and Push Image
-on:
-  push:
-    branches:
-      - master
+on: [ push ]
 jobs:
   build:
     name: Build and push image
     runs-on: ubuntu-latest
    container: catthehacker/ubuntu:act-latest
     if: gitea.ref == 'refs/heads/master'
     steps:
       - name: Checkout
         uses: actions/checkout@v4
       - name: Create Kubeconfig
         run: |
           mkdir $HOME/.kube
-          echo "${{ secrets.KUBEC_CONFIG_BUILDX_NEW }}" > $HOME/.kube/config
+          echo "${{ secrets.KUBEC_CONFIG_BUILDX }}" > $HOME/.kube/config
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
         with:
           driver: kubernetes
           driver-opts: |
             namespace=gitea-runner
             qemu.install=true
       - name: Login to Docker Registry
         uses: docker/login-action@v3
         with:
           registry: git.aridgwayweb.com
           username: armistace
           password: ${{ secrets.REG_PASSWORD }}
       - name: Build and push
         uses: docker/build-push-action@v5
         with:
           context: .
           push: true
           platforms: linux/amd64,linux/arm64
           tags: |
             git.aridgwayweb.com/armistace/blog:latest
-      - name: Deploy
-        run: |
-          echo "Installing Kubectl"
-          apt-get update
-          apt-get install -y apt-transport-https ca-certificates curl gnupg
-          curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
-          chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
-          echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
-          chmod 644 /etc/apt/sources.list.d/kubernetes.list
-          apt-get update
-          apt-get install kubectl
-          kubectl delete namespace blog
-          kubectl create namespace blog
-          kubectl create secret docker-registry regcred --docker-server=${{ vars.DOCKER_SERVER }} --docker-username=${{ vars.DOCKER_USERNAME }} --docker-password='${{ secrets.DOCKER_PASSWORD }}' --docker-email=${{ vars.DOCKER_EMAIL }} --namespace=blog
-          kubectl apply -f kube/blog_pod.yaml && kubectl apply -f kube/blog_deployment.yaml && kubectl apply -f kube/blog_service.yaml

View File

@@ -1,24 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-deployment
  labels:
    app: blog
  namespace: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: git.aridgwayweb.com/armistace/blog:latest
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: regcred

View File

@@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: blog
  namespace: blog
spec:
  containers:
    - name: blog
      image: git.aridgwayweb.com/armistace/blog:latest
      ports:
        - containerPort: 8000
  imagePullSecrets:
    - name: regcred

View File

@@ -1,13 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: blog-service
  namespace: blog
spec:
  type: NodePort
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30009

View File

@@ -1,41 +0,0 @@
Title: Apple And The Anti-Dev Platform
Date: 2025-08-28 20:00
Modified: 2025-08-28 20:00
Category: Tech, Software, Apple
Tags: Tech, Software, Apple
Slug: apple-anti-dev
Authors: Andrew Ridgway
Summary: Apple's requirements for developers are onerous. I detail some of the frustrations I've had whilst dealing with the platform to deploy a small app as part of my day job
## Introduction: Why I Hate Loving to Hate Apple
This week, I found myself in the unenviable position of using MacOS for work. It was like revisiting an old flame only to realize they've become *that* person—still attractive from afar, but toxic up close. Let me clarify: I'm not anti-Apple per se. I appreciate their design aesthetic as much as anyone. But when you're a developer, especially one with a penchant for Linux and a deep love for open-source, Apple's ecosystem feels like walking into a store where the sign says "Employee Discounts" but they charge you double for the privilege.
## 1. The Hardware-Software Tie-In: Why Buy New Every Year?
Let's talk about my borrowed MacBook from 2020. It was a kind gesture, right? But here's the kicker: this machine, which was cutting-edge just five years ago, is now deemed too old to run the latest MacOS. I needed Xcode for a project, and guess what? You can't run the latest version of Xcode without the latest MacOS. So, to paraphrase: "Sorry, but your device isn't *new enough* to develop on the Apple platform anymore." This isn't just inconvenient; it's a deliberate strategy to force upgrades. It's like buying a car that requires you to upgrade your entire garage every year just to keep it running.
## 2. Forced Obsolescence: The New "Upgrade" Cycle
Yes, Microsoft did the whole TPM 2.0 thing with Windows 11. But Apple takes it to another level. They've turned hardware into a subscription model without you even realizing it. You buy a device, and within a few years, it's obsolete for their latest software and tools. This isn't about security or innovation—it's about control. Why release an operating system that only works on devices sold in the last 12 months? It creates a false market for "new" hardware, padding Apple's margins at the expense of developers and users.
## 3. High Costs: The Developer Fee That Keeps On Giving
I honestly believe this boils down to money. To develop on Apple's platform, you need an Apple Developer account. This costs $150 AUD a year. Now, if I were to buy a new MacBook Pro today, that would set me back around $2,500 AUD. And for what? The privilege of being able to build apps on my own device? It's like paying a toll every year just to use the road you already own. It's enough to make you consider a career change and become a sheep farmer.
## 4. Lack of Freedom: Who Owns the Device Anyway?
Here's where it gets really egregious: Apple's developer review process. It's like being subjected to a TSA pat-down every time you want to build something, even if it's just for your own device. To deploy ANYTHING onto an iOS device I need to hand my government-issued license over to Apple and let them "check I'm a real person". And no, this isn't just for App Store deployments, which I can understand. This is for any deployment; it's the only way to get a certificate to cross-sign the app and device... Google might be heading down a similar path, but at least you'll still be able to deploy to custom Android ROMs. On Apple, it feels like every step is designed to remind you that you're dancing in their sandbox—and they call the shots. If you use iOS you have to dance to their tune AT ALL TIMES.
## 5. The "Apple Tax": A Future Job Requirement
I think all developers and consultants should demand an "Apple Tax." It will be simple:
* $5,000 AUD for new Apple hardware.
* An additional 25% markup on development hours spent navigating Apple's ecosystem.
Why? Because it's time developers passed these costs on to the users. It's time to make this hurt the consumers who insist on using these products with predatory business models for developers. Yes, developers go where the market is, but it's time to start charging that market so it understands the true cost of being there.
## Conclusion: Why I'll Keep Hating Loving to Hate Apple
Apple's ecosystem feels like a love story gone wrong—a relationship where one party keeps raising the stakes just to remind you of how much they control everything. Developers are supposed to be the disruptors, the rebels who challenge the status quo. But when your tools are designed to keep you tethered to a specific platform and its outdated business model, it feels less like innovation and more like indentured servitude. If you're still enamored with Apple's ecosystem and think it's "just part of the game," I urge you to take a long, hard look in the mirror. Because if this is your idea of progress, we're all in trouble.

View File

@ -1,260 +0,0 @@
Title: The Start Of An AI Blogger
Date: 2025-01-21 20:00
Modified: 2025-01-25 18:00
Category: AI, Data
Tags: ai, python
Slug: ai-blog-begginning
Authors: Andrew Ridgway
Summary: What if I had AI write the content from the summary?
#### Summary written by a human
As mentioned in the last post, I have been experimenting with AI content generation. Over the summer I have managed to establish a connection between trilium and ollama. This connection is facilitated by python and enables me to write a quick summary of ideas for a post and have ollama generate a blog from it.
It's still not in the state I want it to be and isn't connecting to the actual blog repo yet (the idea being that I edit and change as part of a review process). But it is generating stuff that I thought would be worth a share, just for fun. The eventual idea is that the container will be something I trigger as part of a CI/CD process when the equipment is up and running, but the final "production" implementation is still being fleshed out in my head (if you can call a homelab project "production").
The focus to this point has been on prompt engineering and model selection. A big part of this is that it needs to be able to run completely independently of any cloud services, so no ChatGPT.
The obvious solution is [ollama](https://ollama.com). I'm lucky enough to have a modest secondary gaming rig in my living room with an nvidia 2060 in it that can act as a modest AI server, so I have set it up there.
This server has allowed me to experiment, almost at will, with models. A few I tried included:
- llama3.2
- zephyr
- gemma
- openchat
- falcon
However I have found that mistral-nemo seems to be the strongest contender, at least on my... modest... hardware.
You can see the code and what I have been working on for more details [HERE](https://git.aridgwayweb.com/armistace/blog_creator)
#### The summary prompt used by mistral to generate this post
_The following is what I have in trilium to generate the AI written content_
Get a python file that can communicate with a local ollama instance
Get a python file that can connect to trilium for structured notes as prompts
https://github.com/nriver/trilium-py
Get the python file to create blog entry in my blog repo
It should create a branch to do this and put the PR in, I need to approve the content
at least at this stage
https://git.aridgwayweb.com/blog
Send a notification to matrix that there is a new PR in and give a summary of the blog (20 words)
_As you can see, it's pretty light on content, so what it generates given this lack of context is, I feel, pretty good._
# Let the post begin (begin...begin...begin...)
_What follows is what it generated.. yes it is hilarious but well... it was too good not to share_
# G'day, mates! Let's make an Ollama Blog Writer!
Ever felt like you've got more ideas than time to write 'em up? Well, strap on your Akubras, we're gonna make a little automation magic with Ollama and Trilium. Buckle up, Sheila!
## Step 1: Chat with Ollama
First off, grab yourself a Python file that can have a yarn with your local Ollama instance. It's like giving it a mate to chat with while you kick back with a VB.
```python
# olly.py
import requests
import json

def chat_with_olly(prompt):
    url = 'http://localhost:7860/api/generate'
    data = {
        "prompt": prompt,
        "model": "ollama/llama",
        "options": {"max_new_tokens": 512}
    }
    response = requests.post(url, json=data)
    return response.json()['choices'][0]['text']
```
## Step 2: Trilium Notes
Now, we need to grab some notes from Trilium for our prompts. Thankfully, there's a Python lib for that too.
```python
# trilium.py
import requests

def get_notes():
    url = 'https://your-trilium-instance.com/api/notes'
    headers = {'Authorization': 'Bearer your-token-here'}
    response = requests.get(url, headers=headers)
    return response.json()
```
## Step 3: Blog Entry Creation
Alright, now we're gonna make that Python file create a blog entry in our repo. It'll whip up a branch, put the PR in, and you can give it the tick of approval.
```python
# blog.py
import os
import subprocess

def create_blog_entry(title, content):
    # Create a new branch for the blog post
    branch_name = f"new-blog-{title}"
    subprocess.run(["git", "checkout", "-b", branch_name])

    # Create the markdown file with the title and content
    with open(f"{branch_name}.md", "w") as f:
        f.write(f"# {title}\n\n{content}")

    # Add, commit, and push the changes to create a PR
    subprocess.run(["git", "add", "-A"])
    subprocess.run(["git", "commit", "-m", f"Add new blog entry: {title}"])
    subprocess.run(["git", "push", "origin", branch_name])
```
## Step 4: Matrix Notification
Lastly, we want to send a message to our Matrix room letting us know there's a new PR waiting for approval. Here's how you can do it:
```python
# matrix.py
import requests

def send_matrix_message(room_id, message):
    url = f'http://matrix-your-instance.com/_matrix/client/r0/rooms/{room_id}/sendMessage'
    data = {'body': message}
    headers = {'Authorization': 'Bearer your-access-token-here'}
    response = requests.post(url, json=data, headers=headers)
```
And that's a wrap! Now you've got yourself an Ollama Blog Writer. Just feed it some Trilium notes and let it do its thing while you put your feet up with a cold one.
Cheers, and happy coding! 🍺🐍🤖
#### In Summary... again written by a human
So... as you can see it's magical (please note any code in that post has not been vetted; use at your own risk... it's 'raw' AI content). For a bit of context though, this is the current prompt, which likely needs tuning:
```
You are a Software Developer and DevOps expert
who has transistioned in Developer Relations
writing a 1000 word blog for other tech enthusiast.
You like to use almost no code examples and prefer to talk
in a light comedic tone. You are also Australian
As this person write this blog as a markdown document.
The title for the blog is {self.title}.
Do not output the title in the markdown.
The basis for the content of the blog is:
{self.content}
Only output markdown DO NOT GENERATE AN EXPLANATION
```
But it's safe to say it's still producing some hilarious results that are somewhat interesting.
I'll keep tinkering away in the few moments I get to myself, but this has been fun.
# _UPDATE UPDATE UPDATE_
Ok so just before posting this I read about a new model supposedly on par with chatgpt o1... enter [Deepseek-r1](https://ollama.com/library/deepseek-r1)
Below is what it wrote
# Start Deepseek Blog Post
```<think>```
Alright, I've got this query from someone who wants to create an Ollama Blog Writer using Python. Let me break down what they're asking for.
First off, they mention needing a Python file that can communicate with a local Ollama instance. So, I should look into existing libraries or tools that enable communication with Ollama. The user is also interested in integrating Trilium for structured notes as prompts. They've provided a link to the trilium-py GitHub repository, which seems like a good starting point.
Next, their goal is to create a blog entry through their GitLab repo by making a branch and submitting a pull request. They want the PR content approved before proceeding further. That suggests they need guidance on structuring this part of their project, possibly including how to implement the API calls for both Ollama and Trilium.
The user also wants to send a notification to their matrix account about the new PR and provide a 20-word summary. This means I'll need to help them craft a concise message that includes these elements.
Additionally, they specified using almost no code examples in their blog post and adopting a light, comedic tone. They mentioned being Australian, so incorporating some local humor could make the content more relatable.
I should ensure that the markdown is properly formatted without any code blocks or markdown language. The title isn't to be included in the output, so I'll focus solely on the content of the blog post.
Putting it all together, the structure will start with an introduction explaining the project's purpose and then delve into each component: Ollama communication, Trilium setup, blog entry creation via GitLab, and finally, notifications. Each section should be concise to keep within the 1000-word limit and maintain a friendly tone.
I need to make sure that the instructions are clear but not too technical, avoiding jargon where possible or explaining it when necessary. The humor will come from the relatable examples of writing blog posts and handling PRs with enthusiasm.
```</think>```
# Creating an Ollama Blog Writer
Alright, fellow tech enthusiasts! Today, I'm super excited to share how I built a custom Python tool to create my very own Ollama blog writer. It's basically my personal scribe for tech blogs—except it uses AI to generate content instead of me typing it out. Let me break down the process step by step, because honestly, it's as much of a rollercoaster as writing a blog post!
## Step 1: Communicating with Ollama
First things first, I needed to connect my Python script to a running Ollama instance. Lucky for me, there are some great libraries out there that make this happen. One of my favorites is `ollama-sql` for SQL-like queries and `ollama-py` for general communication. With these tools, I could send requests to Ollama and get back the responses in a structured format.
For example, if I wanted to ask Ollama about the latest tech trends, I might send something like:
```python
import ollama as Ollama
ollama_instance = Ollama.init()
response = ollama_instance.query("What are the top AI developments this year?")
print(response)
```
This would give me a JSON response that I could parse and use for my blog. Easy peasy!
## Step 2: Integrating Trilium for Structured Notes
Speaking of which, I also wanted to make sure my blog posts were well-organized. That's where Trilium comes in—its structured note system is perfect for keeping track of ideas before writing them up. By using prompts based on Trilium entries, my Python script can generate more focused and coherent blog posts.
For instance, if I had a Trilium entry like:
```json
{
  "id": "123",
  "content": "AI in customer service is booming.",
  "type": "thought"
}
```
I could use that as a prompt to generate something like:
*"In the rapidly evolving landscape of AI applications, customer service has taken a quantum leap with AI-powered platforms...."*
Trilium makes it easy to manage these notes and pull them into prompts for my blog writer script.
## Step 3: Creating Blog Entries in My GitLab Repo
Now, here's where things get interesting (and slightly nerve-wracking). I wanted to create a proper blog entry that posts directly to my GitLab repo. So, I forked the [aridgwayweb/blog](https://git.aridgwayweb.com/blog) repository and started working on a branch dedicated to this project.
In my `create_blog_entry.py` script, I used GitLab's API to create a new entry. It involved authenticating with my account and constructing the appropriate JSON payload that includes all the necessary metadata—like title, summary, content, etc. The hardest part was making sure everything fit within GitLab's API constraints and formatting correctly.
Here's an excerpt of what I sent:
```python
import gitlab

gl = gitlab.Gitlab('gitlab.com', 'your_api_key')
entry = gl.entries.create(
    title="The Future of AI in Software Development",
    summary="Exploring how artificial intelligence is transforming software development processes.",
    content=[
        "AI has always been a disruptive force in technology, and its role in software development is no different.",
        "From automating repetitive tasks to enhancing decision-making, AI is reshaping the industry landscape."
    ]
)
```
And then I notified myself that it was done!
## Step 4: Sending Notifications via Matrix
Finally, after everything was up and running, I sent a quick notification to my matrix account about the new pull request. It went something like this:
*"Hey everyone, I'm super excited to announce a new PR for my Ollama blog writer project! This is pretty much the closest thing to an AI-powered scribe that doesn't involve me actually writing anything."*
Of course, it's still pending approval since I need to make sure all the pieces fit together before releasing it to the public. But hey, at least I've got a solid foundation to build on!
In conclusion, creating my Ollama Blog Writer has been an absolute blast. It combines my love for tech with Python and AI in ways I never imagined. Now, if only I could find a way to automate writing blog *reviews*
# End blog post
Now I'm not sure what the ```<think>``` tags are all about, but this is a very interesting model and has produced the closest thing to what the prompt asked for, in my opinion.
Deepseek-r1 is definitely one to keep an eye on, and one I'll be using in the future.
Until next time (But will the content come from me?)!

View File

@ -1,87 +0,0 @@
Title: GPT OSS - Is It Embrace, Extend, Extinguish
Date: 2025-08-12 20:00
Modified: 2025-08-14 20:00
Category: Politics, Tech, AI
Tags: politics, tech, ai
Slug: gpt-oss-eee
Authors: Andrew Ridgway
Summary: GPT OSS is here from OpenAI, the first open weight model from them since GPT-2. My question is... why now?
# Human Introduction
This has been a tough one for the publishing house to get right. I've had it generate 3 different drafts and this is still the result of quite the edit. Today's blog was written by:
1. Gemma:27b - Editor
2. GPT-OSS - Journalist
3. Qwen3:14b - Journalist
4. phi4:latest - Journalist
5. deepseek-r1:14b - Journalist
The big change from last time is the addition of gpt-oss, which is of course the focus of the topic today. It's quite the open weight model; I haven't played with the tooling yet, but I'm excited to see what it can do, even if I do have questions.
Anyways, without further ado! GPT-OSS: is it EEE? Written by AI... for AI?
# GPT OSS - Is It EEE?
## Introduction: The Return of OpenAI (With Some Questions)
This week, the AI world got a bit busier than usual. OpenAI dropped their [**GPT-OSS**](https://openai.com/index/introducing-gpt-oss/) models, and it feels like they're trying to make up for lost time—or maybe just remind everyone that they're still in the game. The release has sparked a lot of excitement, but also some confusion. Are these models really as good as they claim? And why now? Let's break this down with all the drama, intrigue, and a dash of humor you've come to expect from your friendly neighborhood tech writer.
## What Exactly Is GPT-OSS Anyway?
OpenAI has thrown two models into the ring:
1. **GPT-oss-120b**: A hefty 120 billion parameter model that they're claiming can "hold its own" against their own **o4-mini** (which is *incredibly* expensive to run). The kicker? It apparently does this on a single 80GB GPU. That's impressive if true, but let's not get carried away just yet.
2. **GPT-oss-20b**: The smaller sibling that's currently helping me draft this very blog post. OpenAI says it's on par with their **o3-mini** and can run on a measly 16GB of memory. That makes it perfect for edge devices, local inference, or when you don't want to spend your life savings on cloud credits.
Both models are also supposed to be ace at tool use, few-shot function calling, CoT reasoning, and even health-related tasks—outperforming some proprietary models like GPT-4 in certain cases. Impressive? Sure. But let's not forget that OpenAI has a history of making bold claims.
## The Great AI Model Exodus: Why We're Here
Over the past year or so, the AI community has been moving away from GPT-based models—not because they were bad (they weren't), but because they were closed-source and expensive to use at scale. Developers wanted more control, transparency, and affordability. Enter the rise of open-source and open-weight models like:
* **Google's Gemini (Gemma)** series
* **Microsoft's Phi** series (yes, that Microsoft—ironically, OpenAI is effectively a subsidiary)
* The **Qwen** series
* And others like **Llama** and **Deepseek**
These models have been a breath of fresh air for developers. They're free to use, tweak, and integrate into projects without worrying about pesky API limits or astronomical costs. It's like the AI world finally got its own version of Linux—except with neural networks. But then OpenAI showed up with GPT-OSS. And now everyone is asking: Why?
## Is This an Embrace-Extend-Extinguish Play?
Ah, the classic **Embrace, Extend, Extinguish** strategy. If you're not familiar, it's a business tactic where a company adopts (embraces) an existing standard or technology, extends it with their own features, and then slowly extinguishes the competition by making their version incompatible or superior.
Now, I'm not accusing OpenAI of anything here—just pointing out that they're effectively a Microsoft subsidiary, and Microsoft has a history of such strategies. Whether this is intentional or just good business sense is up for debate. But let's think about it:
* OpenAI has dominated the consumer AI market with their **ChatGPT** and other tools.
* They've been losing ground in the developer market, where models like [Gemini](https://deepmind.google/models/gemini/pro/) and particularly [Claude (Anthropic)](https://claude.ai/) are gaining traction in the proprietary space.
* Now theyre releasing open weight models that promise to compete at GPT-4 levels to try and bring in the Deepseek and Qwen crowd.
The timing feels a bit too convenient. OpenAI is essentially saying: "We get it. You want local, affordable, and flexible AI? We've got you covered." But will this be enough to win back the developer community? Or are they just delaying the inevitable?
## The Real Power of Local Models
Let's not sugarcoat it: for developers, the real value of AI isn't in chatbots or viral social media trends. It's in building tools that can automate, analyze, and enhance existing workflows. Think:
* Summarizing thousands of documents in seconds.
* Automating customer support with natural language processing.
* Creating dynamic content for apps and websites on the fly.
This is where AI shines—and where OpenAI has been losing market and mind share. Their focus on consumer-facing tools like ChatGPT has made them a household name, but it's also left developers feeling overlooked. Now, with GPT-OSS, OpenAI is trying to bridge that gap. But will they succeed? Or are they just too late to the party?
## The Dark Side of Monocultures
One thing I'm deeply concerned about is the potential for a monoculture in AI. If OpenAI manages to dominate the open-source space with GPT-OSS, we could end up in a world where everyone uses variations of the same model. It's not just about vendor lock-in—it's about stifling innovation. When every developer uses the same tools and approaches, we lose the diversity that drives progress.
I want to see a future where there are **multiple open-source or at the very least open weight models**, each with their own strengths and weaknesses. That way, developers can choose what works best for their needs instead of being forced into one ecosystem.
## Testing the Waters: My Journey With GPT-OSS
This blog post was partly written by GPT-oss-20b. It's fast, it's local, and it's surprisingly good at generating content. But is it better than open weight alternatives like Deepseek or Gemma (the open weight Gemini)? That's the million-dollar question.
I've been testing out various models for my own projects, and I can say this much: GPT-OSS feels like a solid contender. It's fast, easy to integrate, and—dare I say it—fun to work with. But until I put it head-to-head with other models, I won't be ready to crown it the king of AI.
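For the curious, a head-to-head test can be as simple as the following sketch, assuming the contenders are pulled into a local Ollama instance under these tags (the `gpt-oss:20b` tag in particular is an assumption):
```python
# A minimal sketch of a side-by-side comparison across local models.
# Model tags are assumptions; swap in whatever `ollama list` shows.
import ollama

PROMPT = "Summarise the trade-offs of open weight models in two sentences."

for model in ["gpt-oss:20b", "gemma3:27b", "deepseek-r1:14b"]:
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": PROMPT}])
    print(f"--- {model} ---\n{reply['message']['content']}\n")
```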
## Final Thoughts: The Future of AI is in Our Hands
The release of GPT-OSS is a big deal—not just for OpenAI, but for the entire AI community. It's a reminder that even closed-source giants can (and should) listen to their users. But let's not get carried away. OpenAI isn't the only game in town anymore. Models like Gemini and Claude in the proprietary space, and Qwen and Llama in the open source space, are proving that diversity is key to innovation.
As developers, we have the power to choose which models succeed—and by extension, shape the future of AI. Let's make sure we're making choices that benefit the community as a whole, not just a single company. After all, the last thing we need is another **AI monoculture**.

View File

@@ -1,114 +0,0 @@
Title: Integrating Ollama and Matrix with Baibot
Date: 2025-06-25 20:00
Modified: 2025-06-30 08:00
Category: AI, Data, Matrix
Tags: ai, kubernetes, matrix
Slug: ollama-matrix-integration
Authors: Andrew Ridgway
Summary: Integrating a Local LLM to a personal matrix server all the fun AND data sovereignty
### _Human Introduction_
I've been experimenting with AI and integrations. I'm particularly excited by the idea of using LLMs to integrate between different systems (stay tuned for a blog on [MCP](https://modelcontextprotocol.io/introduction) at some point in the future!)
Below I've thrown together some notes and had AI build a very quick how-to on a cool little project that took next to no time to put together and that I thought might be interesting for the group... Enjoy!
# Matrix AI Integrations with baibot: A Fun Journey into Home Automation and LLMs
Alright, so I've been messing around with this cool project called **baibot**, which is a locally deployable bot for integrating Large Language Models (LLMs) into Matrix chatrooms. If you're anything like me, you run your own Matrix server to keep things private and under control—whether it's for family communication or interacting with the tech community. But one day, I thought, "Why not have my LLMs right where I'm already managing everything else?" Enter baibot.
**Setting Up My Own Matrix Server with baibot**
First off, I've got a home Matrix server running Element. Integrating baibot into this environment makes sense because it allows me to connect directly via the same platform. The key was getting the configuration right using examples from [baibot's GitHub](https://github.com/etkecc/baibot/blob/main/docs/sample-provider-configs/ollama.yml). For instance, connecting to an Ollama gemma3 model with a specific prompt ensures it's lighthearted yet responsive:
```yaml
base_url: http://<my_ollama_ip>:11434/v1
text_generation:
  model_id: gemma3:latest
  prompt: 'You are a lighthearted bot...'
  temperature: 0.9
  max_response_tokens: 4096
  max_context_tokens: 128000
```
This gives me precise control over the bot's behavior, ensuring each instance in Matrix rooms behaves exactly as intended.
**Deploying to Kubernetes**
To ensure reliability, I used Kubernetes. Here's a breakdown of the key files:
* **Deployment.yaml**: Manages pod replicas, security contexts, and volume mounts for persistence.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ridgway-bot
  name: ridgway-bot
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
        - image: ghcr.io/etkecc/baibot:v1.7.4
          name: baibot
          volumeMounts:
            - name: ridgway-bot-cm
              mountPath: /app/config.yml
            - name: ridgway-bot-pv
              mountPath: /data
      volumes:
        - name: ridgway-bot-cm
          configMap:
            name: ridgway-bot
        - name: ridgway-bot-pv
          persistentVolumeClaim:
            claimName: ridgway-bot-storage
```
* **Persistent Volume Claim (PVC)** ensures data storage for baibot.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ridgway-bot-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```
The deployment script handles namespace creation, config maps, PVCs, and waiting for the pod to be ready before copying data; a rough sketch of that flow follows.
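Something like this minimal Python sketch captures the script's flow (the namespace, manifest file names, and pod label are assumptions based on the manifests above):
```python
# deploy_baibot.py -- a sketch of the deploy flow described above.
# Namespace, file names, and the pod label are assumptions.
import subprocess

def run(*cmd: str) -> None:
    """Run a kubectl command, failing loudly if it errors."""
    subprocess.run(cmd, check=True)

run("kubectl", "create", "namespace", "ridgway-bot")
run("kubectl", "create", "configmap", "ridgway-bot",
    "--from-file=config.yml", "--namespace", "ridgway-bot")
run("kubectl", "apply", "-f", "ridgway-bot-pvc.yaml", "--namespace", "ridgway-bot")
run("kubectl", "apply", "-f", "ridgway-bot-deployment.yaml", "--namespace", "ridgway-bot")

# Wait for the pod to come up before copying any seed data into /data.
run("kubectl", "wait", "--for=condition=Ready", "pod", "-l", "app=ridgway-bot",
    "--namespace", "ridgway-bot", "--timeout=120s")
```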
**Integrating with OpenWebUI for RAG**
Another cool aspect is integrating baibot with **OpenWebUI**, which acts as an OpenAI-compatible API. This allows me to leverage models I've created in OpenWebUI that include knowledge bases (RAG). The config here uses OpenWebUI's endpoints:
```yaml
base_url: 'https://<my-openwebui-endpoint>/api/'
api_key: <my-openwebui-api-key>
text_generation:
  model_id: andrew-knowledge-base
  prompt: 'Your name is Rodergast...'
```
This setup lets me access RAG capabilities directly within Matrix chats, all without writing a single line of code. It's like having my very own AI research assistant right there in the chatroom.
**Future Steps and Challenges**
Now that baibot is up and running, I'm already thinking about expanding its use cases. The next step might be integrating it with **Home Assistant** for alarm notifications or other automation tasks. However, my current setup uses an older gaming PC, which struggles with computational demands. This could lead to a rearchitecting effort—perhaps moving to a dedicated server or optimizing the hardware.
**Conclusion**
Baibot has been a fantastic tool for experimenting with AI integrations in Matrix. By leveraging existing infrastructure and OpenWebUI's capabilities, I've achieved full control over data privacy and customization. The next frontier is expanding these integrations into more practical applications like home automation. Stay tuned for updates!
**Final Thoughts**
It's incredibly rewarding to see how open-source projects like baibot democratize AI access. Whether you're a hobbyist or a pro, having tools that let you run LLMs locally without vendor lock-in is game-changing. If you're interested in diving deeper, check out the [baibot GitHub](https://github.com/etkecc/baibot) and explore its documentation. Happy coding!

View File

@@ -1,93 +0,0 @@
Title: MCP and Ollama - Local Assistant is getting nearer
Date: 2025-07-24 20:00
Modified: 2025-07-24 20:00
Category: AI
Tags: tech, ai, ollama, mcp, ai-tools
Slug: mcp-ollama-local-assistant-soon
Authors: Andrew Ridgway
Summary: An Exploration of the Model Context Protocol and its potential to revolutionise how we interact with AI
## Human Introduction
So for today's blog I've upped the model parameters on both the editors and a couple of drafters... and I have to say I think we've nailed what my meagre hardware can achieve in terms of content production. The process takes 30 more minutes than before to churn, but the quality output more than makes up for it. For context, we are now using:
- _Editor_: Gemma3:27b
- _Journalist 1_: phi4-mini:latest
- _Journalist 2_: phi4:latest
- _Journalist 3_: deepseek-r1:14b <-> _I know but it **is** good even if it won't talk about Tiananmen Square_
- _Journalist 4_: qwen3:14b
As you can see if you compare some of the other blogs, this blog has really nailed tone and flow. Some of the content was wrong... it thought I "wrote" [MCPO](https://github.com/open-webui/mcpo); I didn't, I wrapped it. And the sign-off was very cringe, but otherwise the blog is largely what came out from the editor.
As I get better hardware and can run better models, I fully see this being something that could potentially not need much editing on this side... have to see how it goes moving forward... anyways, without further ado, behold: MCP and Ollama - A blog _**about**_ AI _**by**_ AI.
## Introduction: Beyond the Buzzwords, A Real Shift in AI
For the last couple of weeks, I've been diving deep into **MCP**, both for work and personal projects. It's that weird intersection where hobbies and professional life collide. Honestly, I was starting to think the whole AI hype was just that: hype. But MCP? It's different. It's not just another buzzword; it feels like a genuine shift in how we interact with AI. It's like finally getting a decent internet connection after years of dial-up.
The core of this change is the **Model Context Protocol** itself. It's an open specification, spearheaded by **Anthropic**, but rapidly gaining traction across the industry. Google's thrown its weight behind it with [MCP Tools](https://google.github.io/adk-tools/mcp-tools/), and Amazon's building it into [Bedrock Agent Core](https://aws.amazon.com/bedrock/agent-core/). Even Apple, with its usual air of exclusivity, is likely eyeing this space.
## What *Is* MCP, Anyway? Demystifying the Protocol
Okay, let's break it down. **MCP** is essentially a standardized way for **Large Language Models (LLMs)** to interact with **tools**. Think of it as giving your AI a set of keys to your digital kingdom. Instead of just *talking* about doing things, it can actually *do* them.
Traditionally, getting an LLM to control your smart home, access your code repository, or even just send an email required a ton of custom coding and API wrangling. MCP simplifies this process by providing a common language and framework. It's like switching from a bunch of incompatible power adapters to a universal charger.
The beauty of MCP is its **openness**. It's not controlled by a single company, which fosters innovation and collaboration. It's a bit like the early days of the internet: a wild west of possibilities.
## My MCP Playground: Building a Gateway with mcpo
I wanted to get my hands dirty, so I built a little project wrapping [**mcpo**](https://github.com/open-webui/mcpo) in a container that can pull in config to create a containerised service. It's a gateway that connects **OpenWebUI**, a fantastic tool for running LLMs locally, with various **MCP servers**.
The goal? To create a flexible and extensible platform for experimenting with different AI agent tools within my build pipeline. I wanted to be able to quickly swap out different models, connect to different services, and see what happens. It's a bit like having a LEGO set for AI: you can build whatever you want.
You can check out the project [here](https://git.aridgwayweb.com/armistace/mcpo_mcp_servers). If you're feeling adventurous, I encourage you to clone it and play around. I've got it running in my **k3s cluster** (a lightweight Kubernetes distribution), but you can easily adapt it to Docker or other containerization platforms.
## Connecting the Dots: Home Assistant and Gitea Integration
Right now my wrapper supports two key services: **Home Assistant** and **Gitea**.
**Home Assistant** is my smart home hub; it controls everything from the lights and thermostat to the security system. Integrating it with mcpo allows me to control these devices using natural language commands. Imagine saying, "Hey AI, dim the lights and play some jazz," and it just happens. It's like living in a sci-fi movie.
**Gitea** is my self-hosted Git service; it's where I store all my code. Integrating it with mcpo allows me to use natural language to manage my repositories, create pull requests, and even automate code reviews. It's like having a personal coding assistant.
I initially built a custom **Gitea MCP server** to get familiar with the protocol. But the official **Gitea-MCP** project ([here](https://gitea.com/gitea/gitea-mcp)) is much more robust and feature-rich. It's always best to leverage existing tools when possible.
Bringing in new MCP servers should be as simple as updating the config to provide a new endpoint and, if using stdio, updating the build script to bring in the MCP binary or git repo with the MCP implementation you want to use. Once a server is mounted, calling it through the gateway is just a plain HTTP request, as the sketch below shows.
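As a hedged illustration only: mcpo exposes each configured MCP server as ordinary OpenAPI routes, so a tool call might look roughly like this (the service address, route, and payload are assumptions; the real ones come from your config and the generated docs):
```python
# A hypothetical tool call through the mcpo gateway. All names here
# (host, server name, tool route, payload) are illustrative assumptions.
import requests

MCPO_URL = "http://mcpo.local:8000"  # assumed cluster-internal address

response = requests.post(
    f"{MCPO_URL}/homeassistant/turn_on",  # hypothetical server/tool route
    json={"entity_id": "light.living_room"},
    headers={"Authorization": "Bearer <api-key>"},  # if mcpo runs with --api-key
)
print(response.json())
```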
## The Low-Parameter Model Challenge: Balancing Power and Efficiency
I'm currently experimenting with **low-parameter models** like **Qwen3:4B** and **DeepSeek-R1:14B**. These models are relatively small and efficient, which makes them ideal for running on local hardware. However, they also have limitations.
One of the biggest challenges is getting these models to understand complex instructions. They require very precise and detailed prompts. It's like explaining something to a child: you have to break it down into simple steps.
Another challenge is managing the context window. These models have a limited memory, so they can only remember a certain amount of information. This can make it difficult to have long and complex conversations.
## The Future of AI Agents: Prompt Engineering and Context Management
I believe the future of AI lies in the development of intelligent **agents** that can seamlessly interact with the world around us. These agents will need to be able to understand natural language, manage complex tasks, and adapt to changing circumstances.
**Prompt engineering** will be a critical skill for building these agents. We'll need to learn how to craft prompts that elicit the desired behavior from the models. It's almost like coding, but with far less structure and no need to understand the "syntax". But we're a long way from there yet.
**Context management** will also be crucial. We'll need to develop techniques for storing and retrieving relevant information, so the models can make informed decisions.
## Papering Over the Cracks: Using MCP to Integrate Legacy Systems
At my workplace, we're exploring how to use MCP to integrate legacy systems. Many organizations have a patchwork of different applications and databases that don't easily communicate with each other.
MCP can act as a bridge between these systems, allowing them to share data and functionality. It's like building a universal translator for your IT infrastructure.
This can significantly reduce the cost and complexity of integrating new applications and services, if we get the boilerplate right.
## Conclusion: The Dawn of a New Era in AI
MCP is not a silver bullet, but it's a significant step forward in the evolution of AI. It provides a standardized and flexible framework for building intelligent agents that can seamlessly interact with the world around us.
I'm excited to see what the future holds for this technology. I believe it has the potential to transform the way we live and work.
If you're interested in learning more about MCP, I encourage you to check out the official website ([https://modelcontextprotocol.io/introduction](https://modelcontextprotocol.io/introduction)) and explore the various projects and resources that are available.
And if you're feeling adventurous, I encourage you to clone my mcpo project ([https://git.aridgwayweb.com/armistace/mcpo_mcp_servers](https://git.aridgwayweb.com/armistace/mcpo_mcp_servers)) and start building your own AI agents.
It's been a bit of a ride. Hopefully I'll get a few more projects that can utilise some of these services, but with so much new stuff happening my 'ooo squirrel' mentality could prove a bit of a headache... might be time to crack open the blog_creator and use crew ai and mcp to create some research assistants on top of the drafters and editor!
Talk soon!

View File

@@ -1,93 +0,0 @@
Title: Recovering Archlinux Qemu VM in Proxmox
Date: 2025-07-01 20:00
Modified: 2025-07-01 20:00
Category: SysAdmin
Tags: System Admin, Proxmox, Qemu, Arch, Kubernetes
Slug: recovering-arch-vm-proxmox
Authors: Andrew Ridgway
Summary: An absolute nightmare of a day trying to recover my kube cluster from a silly update error
### Human Edit
This is probably the most amazing run of the blog creator. I've started using the new gemma3n and also upgraded the box ollama runs on so it can run slightly bigger models. Using phi4 and gemma:27b has produced some amazing results; see below.
I *did* need to update some of the pacman stuff, as it conflated two separate issues, so bear in mind I have made some little edits in that section, but otherwise... this is straight from the mouth of the llm. Enjoy!
# Recovering an Archlinux QEMU VM in Proxmox: A Day in Hell and Back Again
Well that was a morning. Today I wanted to try and fix my Longhorn installation in Kube... (again 😥). It turns out, things didn't go as planned.
## The Unexpected Downfall
I went to perform my usual update and reboot... except today for whatever reason, the upgrade decided to fail to install the kernel and left me with an unbootable system.
### Dropping into Grub Rescue
At this point I dropped back down to grub rescue mode (which is always fun). Honestly? I hate that environment! And then it hit me: these systems are just QEMU disks, right? Surely I can mount them, chroot in, and fix the install.
## The Quest for Recovery
It took 2 hours of frantic Googling through Proxmox and Arch forums until I stumbled upon something... almost magical.
### Mounting QEMU Disks Made Easy
I found an amazing suite of tools to make mounting these qemu disks a breeze. Check out this [guide](https://www.howtogeek.com/devops/how-to-mount-a-qemu-virtual-disk-image/) for all the details on libguestfs-tools and guestmount.
#### Mounting in Action
```bash
sudo apt install libguestfs-tools
sudo guestmount --add /var/lib/pve/local-btrfs/images/100/vm-100-disk-0/disk.raw --mount /dev/sda3 /tmp/kube_disk/
```
### Enter Chroot Land
Now that I've got my disk mounted, it's time to chroot in. But hold up! I need it as root this time.
#### Setting Up Arch-Chroot
```bash
sudo apt install arch-install-scripts
arch-chroot /tmp/kube_disk/
```
### Pacman: The Hero We Deserve (But Need Permission)
Oh boy, pacman threw 23 million permission errors my way. Last year they changed it to download packages as a separate unprivileged user by default… but I found out you can turn off the `DownloadUser` option in `/etc/pacman.conf`. Here's how:
```bash
# Comment out DownloadUser in /etc/pacman.conf temporarily (or remove it if permanent)
pacman -Syu
```
I did have a couple of issues installing the kernel (which is what got borked in the update):
```bash
# Sometimes some files got written so use overwrite to get rid of them
# be warned this *could* be destructive
pacman -S linux --overwrite "*"
```
### Clean Up and Exit
Once we're done, we need to exit the chroot. Remember that crucial step: unmounting correctly.
```bash
exit
sudo umount /tmp/kube_disk/
```
## The Reboot Saga (And How Not To Do It)
Reboot was supposed to be smooth sailing… but I made one fatal mistake.
### Corruption Nightmare
I didn't unmount before starting the VM. This led me down an unfortunate path of corrupting my btrfs partition beyond recognition and having to rebuild not just my master node, but the *entire* cluster! Backups saved the day... barely!
#### Lessons Learned
* **Never** reboot without first properly unmounting.
* Seriously need more backups for those images. 🚀
## Conclusion: A Cluster-Fucked Day Turned Into a Learning Experience
All in all it was chaos, but hey, learning happens at 2 AM after midnight reboots and frantic Googling. Hope this helps anyone else stuck with Proxmox woes! 🚀

View File

@@ -1,53 +0,0 @@
Title: The Failing Social Media Ban
Date: 2025-06-19 20:00
Modified: 2025-06-20 20:00
Category: Politics
Tags: politics, social media, tech policy
Slug: social-media-ban-fail
Authors: Andrew Ridgway
Summary: The Social Media ban is an abject failure of policy. Education and the use of the much better existing tools are the key
## 🎯 The Goal: A Legal Framework to Protect Kids
The Australian government's, or should I say Julie Inman's, plan to ban social media for teens has sparked ongoing debate. While the intention is noble—protecting minors from online risks—it's clear the technical and legal hurdles are massive. The government's concept of relying on "facial aging" or "Proof of Age" APIs is prone to privacy violations and data breaches. Parents already have tools that let them make decisions about their children's tech use without needing to hand over photos of their ID. The government's current approach is mired in bureaucracy, and the tech world does not thrive in that environment. Instead of trying to outsource the problem to consultants, the government should **educate parents on the tools already available**.
## 🧩 The Problem: Tech Giants Won't Do It
The government's plan to enable Inman's vision is to use facial recognition or "age-based" filters. This was flawed from the start. These systems are expensive, unreliable, and not designed for the scale of a national rollout. Even if a company like Meta or Google could do it, they'd **never** do it, for one simple reason: **there is no money in the equation**. The only alternative is to outsource to consultants, but those consultants are not equipped to handle the complexity. The government's plan is a joke; no one is going to build a system that's 100% accurate, secure, and compliant with privacy laws, and those that maybe could have no incentive to. No amount of chest thumping by the E-Safety Commissioner will change this fact, and throwing frankly meaningless pieces of paper from our legislative assembly will do little more than make them laugh.
## 🛠️ The Tools Parents Already Have
Parents ([is it parents? Or is it in fact fiefdom creation on behalf of Julie Inman?](https://minister.infrastructure.gov.au/rowland/media-release/record-investment-improve-safety-australians-online)) must give up on the idea of the government fixing this. Instead, parents should be using the **tools already in their homes**. These tools are **free, secure, and effective**. Some examples include (and I use these in my own home):
* **Fritz Box Parental Controls** (https://en.fritz.com/service/knowledge-base/dok/FRITZ-Box-7530/8_Restricting-internet-use-with-the-FRITZ-Box-parental-controls/) - Allows blocking of websites and apps, setting time limits, and creating user profiles.
* **Microsoft Family Safety** (https://www.microsoft.com/en-au/microsoft-365/family-safety) - Provides screen time limits, content filters, and activity reporting.
* **Nintendo Parental Controls** (https://www.nintendo.com/au/apps/parental-controls/) - Allows managing game time, content restrictions, and communication settings on Nintendo devices.
* **Google Family Link** (https://families.google.com/familylink/) - Enables remote monitoring, app management, and location tracking for children's Android devices.
* **Apple Family Sharing** (https://support.apple.com/en-au/105121) - Allows sharing purchases, subscriptions, and location information with family members.
These tools let parents **block apps, limit screen time, and monitor online activity** without needing to share sensitive data. They offer parents full control over what is available and are not dependent on some arbitrary list governed in legislation (which is in and of itself an indicator of how backwards this legislation is).
## 📚 The Real Solution: Education, Not Tech
The government's plan is a **mistake**. Instead of trying to build a new system, parents should be **educating themselves on the tools already available**.
### 🔄 Flexibility for Every Family
* **Approved apps**
* **Blacklisted content**
* **Screen time limits**
* **Privacy controls**
These tools let parents **make decisions tailored to their children's needs**. No one-size-fits-all approach. It gives parents autonomy over their online decision making whilst better respecting everyone's privacy, including the child's. Already Julie is making calls to expand the list; this is unacceptable. It is no one's choice but MINE what is acceptable in my house and for my family.
## 🧩 Why the Government's Plan Fails
The government's plan is a **disaster**. It's not about fixing the problems of social media use in teens; it's about giving the perception they are doing something about it, using archaic methods and tools that don't go to the root cause. The tools parents already have are **better, cheaper, and more secure**. The only way to make this work is for the government to **stop trying to solve a social problem with tech** and **focus on the real solution: education and parental autonomy**. Stop letting Julie create her cartel and her own version of the Chinese firewall.
## 📝 Summary: The Right Tools, Not the Tech
The government's plan is a dead monkey. Instead of trying to build a system that's 100% accurate and secure, parents should be using the **tools already in their homes**. These tools are **free, effective, and preserve privacy**. They let parents **make decisions about their children's tech use on a true case-by-case basis** without needing to hand over sensitive data.
## 🧩 Final Thoughts
The Government's plan, at the behest of Julie Inman, is a **disaster**. It's not about fixing the problem with social media; it's about creating the perception of solving a problem that is already solved. [The E-Safety Commissioner has made clear her plans are to take control out of our hands when it comes to what we can do online.](https://www.esafety.gov.au/newsroom/media-releases/online-industry-asked-address-esafetys-concerns-draft-codes-0#:~:text=Online%20industry%20asked%20to%20address%20eSafety%27s%20concerns%20with%20draft%20codes,-Share&text=Australia%27s%20eSafety%20Commissioner%20has%20asked,safeguards%20for%20users%20in%20Australia.) Parents should be using the **tools already in their homes**. The real solution is not to expect the government to fix this, but for parents to **educate themselves on the tools that already exist**. Until we accept that this is our responsibility, the problem will continue to propagate, because the only place it can be fixed is in the home, not by Julie Inman.

View File

@@ -1,82 +0,0 @@
Title: When to use AI
Date: 2025-06-05 20:00
Modified: 2025-06-06 08:00
Category: AI, Data
Tags: ai, python
Slug: when-to-use-ai
Authors: Andrew Ridgway
Summary: Should we be using AI for ALL THE THINGS!?
# Human Introduction
Well... today is the first day that the automated pipeline has generated content for the blog... still a bit of work to do, including:
1. establishing a permanent vectordb solution (chromadb? pg_vector?)
2. Notification to Matrix that something has happened
3. Updating Trilium so that the note is marked as blog_written=true
BUT it can take a note from trilium, generate drafts with multiple agents, and then use RAG to have an editor go over those drafts.
I'm particularly proud of the randomness I've applied to temperature, top_p and top_k for the different draft agents (something like the sketch below). This means that each pass gives me quite different "creativity" (as much as that can be applied to an algorithm that is essentially munging together letters that have a high probability of being together). It has created some really interesting variation for the editor to work with, and is getting some really interesting results.
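For the curious, the effect is roughly captured by this sketch using the ollama Python client; the model name and sampling ranges are illustrative, not the pipeline's actual values:
```python
# Illustrative sketch: each draft agent samples its own decoding options,
# so every pass has a different level of "creativity".
import random
import ollama

def draft(prompt: str) -> str:
    options = {
        "temperature": random.uniform(0.5, 1.0),
        "top_p": random.uniform(0.7, 1.0),
        "top_k": random.randint(20, 100),
    }
    # Model name is an assumption; any local Ollama model works here.
    response = ollama.generate(model="mistral-nemo", prompt=prompt, options=options)
    return response["response"]

print(draft("Draft a short intro about when to use AI."))
```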
Anyways, without further ado, I present to you the first pipeline-written AI content for this blog.
---
# When to Use AI: Navigating the Right Scenarios
Okay, so I've been getting this question a lot lately: "When should we use AI?" or even more frustratingly, "Why can't AI do this?" It's like asking when to use a hammer versus a screwdriver. Sometimes AI is the perfect tool, other times it's better left in the toolbox. Let me break down some scenarios where AI shines and where it might not be the best bet.
## The Spreadsheet Dilemma: Where AI Can Help, and Where It Hurts
**Scenario:** Mapping work types to categories in a spreadsheet with thousands of entries, like distinguishing between "Painting," "Repainting," "Deck Painting," or "Stucco Repainting."
**Where AI Helps:**
* **Fuzzy Matching & Contextual Understanding:** AI excels at interpreting relationships between words (e.g., recognizing "Deck Painting" as a subset of "Painting"); traditional methods built on regex or string manipulation fail here because they lack the nuanced judgment needed to handle ambiguity (see the sketch below).
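To make that concrete, here's a minimal sketch of the fuzzy-mapping idea against a local Ollama model; the model name and category list are illustrative assumptions:
```python
# Hypothetical sketch: using a local LLM to map free-text work types
# onto a fixed category list -- the fuzzy matching described above.
import requests

CATEGORIES = ["Painting", "Repainting", "Deck Painting", "Stucco Repainting"]

def classify(work_type: str) -> str:
    prompt = (
        f"Map the work type '{work_type}' to exactly one of: "
        f"{', '.join(CATEGORIES)}. Reply with the category name only."
    )
    response = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama endpoint
        json={"model": "mistral-nemo", "prompt": prompt, "stream": False},
    )
    return response.json()["response"].strip()

print(classify("repaint the back deck"))  # expected: "Deck Painting"
```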
**Where AI Struggles:**
* **Precision Over Ambiguity:** Calculations requiring exact values (e.g., average durations) are better handled by deterministic algorithms than by AI's probabilistic approach.
**Traditional Methods Are Easier for Deterministic Problems:**
* **Formula-Based Logic:** Building precise formulas for workload analysis relies on clear, unambiguous rules. AI can't replace the need for human oversight in such cases.
## When AI Shines: Contextual and Unstructured Tasks
**Scenario:** Automating customer support with chatbots or analyzing social media sentiment.
**Why AI Works Here:**
* **Natural Language Processing (NLP):** AI understands context, tone, and intent in unstructured data, making it ideal for tasks like chatbot responses or content analysis.
* **Pattern Recognition:** AI identifies trends or anomalies in large datasets that humans might miss, such as predictive maintenance in industrial settings.
**Why Traditional Methods Don't:**
* **There is no easily discernible pattern:** If the pattern doesn't exist in a deterministic sense, there is little someone can do without complex regex and 'whack-a-mole' style programming.
## Hybrid Approaches: The Future of Efficiency
While traditional methods remain superior for precise calculations, AI can assist in setting up initial parameters or generating insights. For example, as sketched below:
* **AI Proposes Formulas:** An LLM suggests a workload calculation formula based on historical data.
* **Human Checks Validity:** A human ensures the formula's accuracy before deployment.
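As a toy illustration (all data and names hypothetical): the LLM only *proposes* the formula, while deterministic code is what actually runs after a human signs off:
```python
# The deterministic half of the hybrid flow: a human-reviewed,
# LLM-suggested aggregation implemented in plain pandas.
import pandas as pd

jobs = pd.DataFrame({
    "category": ["Painting", "Deck Painting", "Painting"],
    "duration_hours": [6.0, 9.5, 7.0],
})

# LLM-suggested formula (reviewed before deployment):
# average duration per category = mean(duration_hours) grouped by category
avg_duration = jobs.groupby("category")["duration_hours"].mean()
print(avg_duration)
```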
## Key Takeaways
1. **Use AI** for tasks involving:
* Unstructured data (e.g., text, images).
* Contextual understanding and interpretation.
* Pattern recognition and trend analysis.
2. **Stick to Traditional Methods** for:
* Precise calculations with deterministic logic.
* Tasks requiring error-free accuracy (e.g., financial modeling).
## Conclusion
AI is a powerful tool but isn't a one-size-fits-all solution. Match the right approach to the task at hand—whether it's interpreting natural language or crunching numbers. The key is knowing when AI complements human expertise rather than replaces it.
**Final Tip:** Always consider the trade-offs between precision and context. For tasks where nuance matters, AI is your ally; for rigid logic, trust traditional methods.
🚀

View File

@@ -50,9 +50,9 @@
     {% endif %}
     <!-- Custom Fonts -->
-    <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
-    <link href='https://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
-    <link href='https://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
+    <link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
+    <link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
+    <link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
     <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
     <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
@@ -137,8 +137,8 @@
           </a>
         </li>
       </ul>
-      <p class="copyright text-muted">Blog powered by <a href="https://getpelican.com">Pelican</a>,
-        which takes great advantage of <a href="https://python.org">Python</a>.</p>
+      <p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
+        which takes great advantage of <a href="http://python.org">Python</a>.</p>
      </div>
    </div>
  </div>