Compare commits


No commits in common. "master" and "recovering_an_archlinux_qemu_vm_in_proxmox" have entirely different histories.

4 changed files with 1 addition and 222 deletions


@@ -1,41 +0,0 @@
Title: Apple And The Anti-Dev Platform
Date: 2025-08-28 20:00
Modified: 2025-08-28 20:00
Category: Tech, Software, Apple
Tags: Tech, Software, Apple
Slug: apple-anti-dev
Authors: Andrew Ridgway
Summary: Apple's requirements for developers are onerous; I detail some of the frustrations I've had whilst dealing with the platform to deploy a small app as part of my day job
## Introduction: Why I Hate Loving to Hate Apple
This week, I found myself in the unenviable position of using MacOS for work. It was like revisiting an old flame only to realize they've become *that* person—still attractive from afar, but toxic up close. Let me clarify: I'm not anti-Apple per se. I appreciate their design aesthetic as much as anyone. But when you're a developer, especially one with a penchant for Linux and a deep love for open-source, Apple's ecosystem feels like walking into a store where the sign says "Employee Discounts" but they charge you double for the privilege.
## 1. The Hardware-Software Tie-In: Why Buy New Every Year?
Let's talk about my borrowed MacBook from 2020. It was a kind gesture, right? But here's the kicker: this machine, which was cutting-edge just five years ago, is now deemed too old to run the latest MacOS. I needed Xcode for a project, and guess what? You can't run the latest version of Xcode without the latest MacOS. So, to paraphrase: "Sorry, but your device isn't *new enough* to develop on the Apple platform anymore." This isn't just inconvenient; it's a deliberate strategy to force upgrades. It's like buying a car that requires you to upgrade your entire garage every year just to keep it running.
## 2. Forced Obsolescence: The New "Upgrade" Cycle
Yes, Microsoft did the whole TPM 2.0 thing with Windows 11. But Apple takes it to another level. They've turned hardware into a subscription model without you even realizing it. You buy a device, and within a few years, it's obsolete for their latest software and tools. This isn't about security or innovation—it's about control. Why release an operating system that only works on devices sold in the last 12 months? It creates a false market for "new" hardware, padding Apple's margins at the expense of developers and users.
## 3. High Costs: The Developer Fee That Keeps On Giving
I honestly believe this all boils down to money. To develop on Apple's platform, you need an Apple Developer account. This costs $150 AUD a year. Now, if I were to buy a new MacBook Pro today, that would set me back around $2,500 AUD. And for what? The privilege of being able to build apps on my own device? It's like paying a toll every year just to use the road you already own. It's enough to make you consider a career change and become a sheep farmer.
## 4. Lack of Freedom: Who Owns the Device Anyway?
Here's where it gets really egregious: Apple's developer review process. It's like being subjected to a TSA pat-down every time you want to build something, even if it's just for your own device. To deploy ANYTHING onto an iOS device I need to hand my government-issued license over to Apple and let them "check I'm a real person". And no, this isn't just for App Store deployments, which I can understand. This is for any deployment; it's the only way to get a certificate to cross-sign the app and device... Google might be heading down a similar path, but at least you'll still be able to deploy on custom Android ROMs. On Apple, it feels like every step is designed to remind you that you're dancing in their sandbox—and they call the shots. If you use iOS you have to dance to their tune AT ALL TIMES.
## 5. The "Apple Tax": A Future Job Requirement
I think all developers and consultants should demand an "Apple Tax." It will be simple:
* $5,000 AUD for new Apple hardware.
* An additional 25% markup on development hours spent navigating Apple's ecosystem.
Why? Because it's time developers passed these costs on to the users. It's time to make this hurt the consumers who insist on using products whose business models are predatory towards developers. Yes, developers go where the market is, but it's time to start charging that market so it understands the true cost of being there.
## Conclusion: Why I'll Keep Hating Loving to Hate Apple
Apple's ecosystem feels like a love story gone wrong—a relationship where one party keeps raising the stakes just to remind you of how much they control everything. Developers are supposed to be the disruptors, the rebels who challenge the status quo. But when your tools are designed to keep you tethered to a specific platform and its outdated business model, it feels less like innovation and more like indentured servitude. If you're still enamored with Apple's ecosystem and think it's “just part of the game,” I urge you to take a long, hard look in the mirror. Because if this is your idea of progress, we're all in trouble.


@@ -1,87 +0,0 @@
Title: GPT OSS - Is It Embrace, Extend, Extinguish
Date: 2025-08-12 20:00
Modified: 2025-08-14 20:00
Category: Politics, Tech, AI
Tags: politics, tech, Ai
Slug: gpt-oss-eee
Authors: Andrew Ridgway
Summary: GPT OSS is here from OpenAI, the first open weight model from them since GPT-2. My question is... why now?
# Human Introduction
This has been a tough one for the publishing house to get right. I've had it generate 3 different drafts and this is still the result of quite the edit. Today's blog was written by:
1. Gemma:27b - Editor
2. GPT-OSS - Journalist
3. Qwen3:14b - Journalist
4. phi4:latest - Journalist
5. deepseek-r1:14b - Journalist
The big change from last time is the addition of gpt-oss, which is of course the focus of the topic today. It's quite the open weight model; I haven't played with the tooling yet but I'm excited to see what it can do, even if I do have questions.
Anyways, without further ado! GPT-OSS: is it EEE? Written by AI... for AI?
# GPT OSS - Is It EEE?
## Introduction: The Return of OpenAI (With Some Questions)
This week, the AI world got a bit busier than usual. OpenAI dropped their [**GPT-OSS**](https://openai.com/index/introducing-gpt-oss/) models, and it feels like they're trying to make up for lost time—or maybe just remind everyone that they're still in the game. The release has sparked a lot of excitement, but also some confusion. Are these models really as good as they claim? And why now? Let's break this down with all the drama, intrigue, and a dash of humor you've come to expect from your friendly neighborhood tech writer.
## What Exactly Is GPT-OSS Anyway?
OpenAI has thrown two models into the ring:
1. **GPT-oss-120b**: A hefty 120 billion parameter model that they're claiming can “hold its own” against their own **o4-mini** (which is *incredibly* expensive to run). The kicker? It apparently does this on a single 80GB GPU. That's impressive if true, but let's not get carried away just yet.
2. **GPT-oss-20b**: The smaller sibling that's currently helping me draft this very blog post. OpenAI says it's on par with their **o3-mini** and can run on a measly 16GB of memory. That makes it perfect for edge devices, local inference, or when you don't want to spend your life savings on cloud credits.
Both models are also supposed to be ace at tool use, few-shot function calling, CoT reasoning, and even health-related tasks—outperforming some proprietary models like GPT-4 in certain cases. Impressive? Sure. But let's not forget that OpenAI has a history of making bold claims.
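If you want to kick the tyres on the smaller model yourself, a minimal sketch looks something like this, assuming Ollama is already running locally and a gpt-oss 20B tag has been pulled (the tag, prompt, and client call here are illustrative, not lifted from OpenAI's release notes):

```python
# Minimal sketch: querying a locally hosted gpt-oss-20b via the Ollama Python
# client. Assumes Ollama is running and the model tag below has been pulled;
# the tag and prompt are illustrative only.
import ollama

response = ollama.chat(
    model="gpt-oss:20b",
    messages=[
        {
            "role": "user",
            "content": "Summarise the Embrace, Extend, Extinguish strategy in two sentences.",
        }
    ],
)
print(response["message"]["content"])
```

If something like that runs comfortably on a 16GB machine, the local-inference claim starts to look plausible.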
## The Great AI Model Exodus: Why We're Here
Over the past year or so, the AI community has been moving away from GPT-based models—not because they were bad (they weren't), but because they were closed-source and expensive to use at scale. Developers wanted more control, transparency, and affordability. Enter the rise of open-source and open-weight models like:
* **Google's Gemini (Gemma)** series
* **Microsoft's Phi** series (yes, that Microsoft—ironically, OpenAI's biggest backer)
* The **Qwen** series
* And others like **Llama** and **Deepseek**
These models have been a breath of fresh air for developers. They're free to use, tweak, and integrate into projects without worrying about pesky API limits or astronomical costs. It's like the AI world finally got its own version of Linux—except with neural networks. But then OpenAI showed up with GPT-OSS. And now everyone is asking: Why?
## Is This an Embrace-Extend-Extinguish Play?
Ah, the classic **Embrace, Extend, Extinguish** strategy. If you're not familiar, it's a business tactic where a company adopts (embrace) an existing standard or technology, extends it with their own features, and then slowly extinguishes the competition by making their version incompatible or superior.
Now, I'm not accusing OpenAI of anything here—just pointing out that they're joined at the hip with Microsoft, and Microsoft has a history of such strategies. Whether this is intentional or just good business sense is up for debate. But let's think about it:
* OpenAI has dominated the consumer AI market with their **ChatGPT** and other tools.
* They've been losing ground in the developer market, where models like [Gemini](https://deepmind.google/models/gemini/pro/) and particularly [Claude (Anthropic)](https://claude.ai/) are gaining traction in the proprietary space.
* Now they're releasing open weight models that promise to compete at GPT-4 levels to try and bring in the Deepseek and Qwen crowd.
The timing feels a bit too convenient. OpenAI is essentially saying: “We get it. You want local, affordable, and flexible AI? We've got you covered.” But will this be enough to win back the developer community? Or are they just delaying the inevitable?
## The Real Power of Local Models
Let's not sugarcoat it: For developers, the real value of AI isn't in chatbots or viral social media trends. It's in building tools that can automate, analyze, and enhance existing workflows. Think:
* Summarizing thousands of documents in seconds.
* Automating customer support with natural language processing.
* Creating dynamic content for apps and websites on the fly.
This is where AI shines—and where OpenAI has been losing market and mind share. Their focus on consumer-facing tools like ChatGPT has made them a household name, but it's also left developers feeling overlooked. Now, with GPT-OSS, OpenAI is trying to bridge that gap. But will they succeed? Or are they just too late to the party?
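To make the document-summarisation point concrete, here's a rough sketch of batch-summarising a folder of text files with a local open-weight model through the Ollama Python client; the folder name, model tag, and prompt are assumptions for illustration, not a claim about any particular pipeline:

```python
# Hedged sketch: batch-summarising local documents with an open-weight model
# served by Ollama. The folder, model tag, and prompt wording are illustrative.
from pathlib import Path

import ollama

for doc in sorted(Path("docs").glob("*.txt")):
    text = doc.read_text(encoding="utf-8")
    result = ollama.generate(
        model="gpt-oss:20b",
        prompt=f"Summarise the following document in three bullet points:\n\n{text}",
    )
    print(f"--- {doc.name} ---\n{result['response']}\n")
```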
## The Dark Side of Monocultures
One thing I'm deeply concerned about is the potential for a monoculture in AI. If OpenAI manages to dominate the open-source space with GPT-OSS, we could end up in a world where everyone uses variations of the same model. It's not just about vendor lock-in—it's about stifling innovation. When every developer uses the same tools and approaches, we lose the diversity that drives progress.
I want to see a future where there are **multiple open-source or at the very least open weight models**, each with their own strengths and weaknesses. That way, developers can choose what works best for their needs instead of being forced into one ecosystem.
## Testing the Waters: My Journey With GPT-OSS
This blog post was partly written by GPT-oss-20b. It's fast, it's local, and it's surprisingly good at generating content. But is it better than open weight alternatives like Deepseek or Gemma (the open-weight Gemini)? That's the million-dollar question.
I've been testing out various models for my own projects, and I can say this much: GPT-OSS feels like a solid contender. It's fast, easy to integrate, and—dare I say it—fun to work with. But until I put it head-to-head with other models, I won't be ready to crown it the king of AI.
## Final Thoughts: The Future of AI is in Our Hands
The release of GPT-OSS is a big deal—not just for OpenAI, but for the entire AI community. It's a reminder that even closed-source giants can (and should) listen to their users. But let's not get carried away. OpenAI isn't the only game in town anymore. Models like Gemini and Claude in the proprietary space, and Qwen and Llama in the open-weight space, are proving that diversity is key to innovation.
As developers, we have the power to choose which models succeed—and by extension, shape the future of AI. Let's make sure we're making choices that benefit the community as a whole, not just a single company. After all, the last thing we need is another **AI monoculture**.


@@ -1,4 +1,4 @@
Title: Integrating Ollama and Matrix with Baibot
Title: Intergrating Ollama and Matrix with Baibot
Date: 2025-06-25 20:00
Modified: 2025-06-30 08:00
Category: AI, Data, Matrix


@@ -1,93 +0,0 @@
Title: MCP and Ollama - Local Assistant is getting nearer
Date: 2025-07-24 20:00
Modified: 2025-07-24 20:00
Category: AI
Tags: tech, ai, ollama, mcp, ai-tools
Slug: mcp-ollama-local-assistant-soon
Authors: Andrew Ridgway
Summary: An Exploration of the Model Context Protocol and its potential to revolutionise how we interact with AI
## Human Introduction
So for today's blog I've upped the model parameters on both the editors and a couple of drafters... and I have to say I think we've nailed what my meagre hardware can achieve in terms of content production. The process takes 30 more minutes than before to churn now, but the quality of the output more than makes up for it. For context we are now using:
- _Editor_: Gemma3:27b
- _Journalist 1_: phi4-mini:latest
- _Journalist 2_: phi4:latest
- _Journalist 3_: deepseek-r1:14b <-> _I know but it **is** good even if it won't talk about Tiananmen Square_
- _Journalist 4_: qwen3:14b
As you can see if you compare it with some of the other blogs, this one has really nailed tone and flow. Some of the content was wrong... it thought I "wrote" [MCPO](https://github.com/open-webui/mcpo); I didn't, I wrapped it. And the sign-off was very cringe, but otherwise the blog is largely what came out of the editor.
As I get better hardware and can run better models, I fully see this being something that could potentially not need much editing on this side... have to see how it goes moving forward... anyways, without further ado, behold: MCP and Ollama - A blog _**about**_ AI _**by**_ AI
## Introduction: Beyond the Buzzwords, a Real Shift in AI
For the last couple of weeks, I've been diving deep into **MCP** both for work and personal projects. It's that weird intersection where hobbies and professional life collide. Honestly, I was starting to think the whole AI hype was just that: hype. But MCP? It's different. It's not just another buzzword; it feels like a genuine shift in how we interact with AI. It's like finally getting a decent internet connection after years of dial-up.
The core of this change is the **Model Context Protocol** itself. It's an open specification, spearheaded by **Anthropic**, but rapidly gaining traction across the industry. Google's thrown its weight behind it with [MCP Tools](https://google.github.io/adk-tools/mcp-tools/), and Amazon's building it into [Bedrock Agent Core](https://aws.amazon.com/bedrock/agent-core/). Even Apple, with its usual air of exclusivity, is likely eyeing this space.
## What *Is* MCP, Anyway? Demystifying the Protocol
Okay, let's break it down. **MCP** is essentially a standardized way for **Large Language Models (LLMs)** to interact with **tools**. Think of it as giving your AI a set of keys to your digital kingdom. Instead of just *talking* about doing things, it can actually *do* them.
Traditionally, getting an LLM to control your smart home, access your code repository, or even just send an email required a ton of custom coding and API wrangling. MCP simplifies this process by providing a common language and framework. It's like switching from a bunch of incompatible power adapters to a universal charger.
The beauty of MCP is its **openness**. It's not controlled by a single company, which fosters innovation and collaboration. It's a bit like the early days of the internet: a wild west of possibilities.
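To make that a little less abstract, here's a minimal sketch of an MCP server that exposes a single tool, using the FastMCP helper from the official Python SDK; the tool name and its logic are purely illustrative and not part of my gateway:

```python
# Minimal sketch of an MCP server exposing one tool via the official Python
# SDK's FastMCP helper (pip install mcp). Tool name and logic are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lights")

@mcp.tool()
def dim_lights(room: str, level: int) -> str:
    """Dim the lights in a room to a percentage level (0-100)."""
    # A real integration would call something like Home Assistant's API;
    # here we just echo the request so the sketch stays self-contained.
    return f"Dimmed the {room} lights to {level}%"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

An MCP-aware client (or a gateway like mcpo) can then discover and call `dim_lights` without any bespoke API glue.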
## My MCP Playground: Building a Gateway with mcpo
I wanted to get my hands dirty, so I built a little project wrapping [**mcpo**](https://github.com/open-webui/mcpo) in a container that can pull in config to create a containerised service. It's a gateway that connects **OpenWebUI**, a fantastic tool for running LLMs locally, with various **MCP servers**.
The goal? To create a flexible and extensible platform for experimenting with different AI agent tools within my build pipeline. I wanted to be able to quickly swap out different models, connect to different services, and see what happens. It's a bit like having a LEGO set for AI: you can build whatever you want.
You can check out the project [here](https://git.aridgwayweb.com/armistace/mcpo_mcp_servers). If you're feeling adventurous, I encourage you to clone it and play around. I've got it running in my **k3s cluster** (a lightweight Kubernetes distribution), but you can easily adapt it to Docker or other containerization platforms.
## Connecting the Dots: Home Assistant and Gitea Integration
Right now my wrapper supports two key services: **Home Assistant** and **Gitea**.
**Home Assistant** is my smart home hub: it controls everything from the lights and thermostat to the security system. Integrating it with mcpo allows me to control these devices using natural language commands. Imagine saying, “Hey AI, dim the lights and play some jazz,” and it just happens. It's like living in a sci-fi movie.
**Gitea** is my self-hosted Git service: it's where I store all my code. Integrating it with mcpo allows me to use natural language to manage my repositories, create pull requests, and even automate code reviews. It's like having a personal coding assistant.
I initially built a custom **Gitea MCP server** to get familiar with the protocol. But the official **Gitea-MCP** project ([here](https://gitea.com/gitea/gitea-mcp)) is much more robust and feature-rich. It's always best to leverage existing tools when possible.
Bringing in new MCP servers should be as simple as updating the config to provide a new endpoint and, if using stdio, updating the build script to bring in the mcp binary or git repo with the mcp implementation you want to use.
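For a sense of what consuming the gateway looks like from the other side, here's a hedged sketch of calling one of the tools mcpo exposes over plain HTTP. The host, port, route, and payload are assumptions for illustration; the OpenAPI docs the gateway generates are the source of truth for the real paths:

```python
# Hedged sketch: calling a tool that mcpo has exposed as a REST endpoint.
# Host, port, route, and payload are illustrative assumptions; check the
# gateway's generated OpenAPI docs for the actual tool paths and schemas.
import requests

resp = requests.post(
    "http://localhost:8000/gitea/list_my_repos",  # hypothetical tool route
    json={},  # tool arguments go in the JSON body
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```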
## The Low-Parameter Model Challenge: Balancing Power and Efficiency
I'm currently experimenting with **low-parameter models** like **Qwen3:4B** and **DeepSeek-R1:14B**. These models are relatively small and efficient, which makes them ideal for running on local hardware. However, they also have limitations.
One of the biggest challenges is getting these models to understand complex instructions. They require very precise and detailed prompts. It's like explaining something to a child: you have to break it down into simple steps.
Another challenge is managing the context window. These models have a limited memory, so they can only remember a certain amount of information. This can make it difficult to have long and complex conversations.
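One knob that helps with the context-window problem, at least for Ollama-hosted models, is the context-length option. A minimal sketch, with an illustrative model tag and value:

```python
# Hedged sketch: widening the context window for a small local model through
# the Ollama Python client. num_ctx is Ollama's context-length option; the
# model tag and value here are illustrative, not a recommendation.
import ollama

response = ollama.chat(
    model="qwen3:4b",
    messages=[{"role": "user", "content": "Summarise our conversation so far."}],
    options={"num_ctx": 8192},  # larger window, at the cost of more memory
)
print(response["message"]["content"])
```

The trade-off is real: the bigger the window, the more RAM and VRAM the model chews through, which matters a lot on the kind of meagre hardware this blog runs on.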
## The Future of AI Agents: Prompt Engineering and Context Management
I believe the future of AI lies in the development of intelligent **agents** that can seamlessly interact with the world around us. These agents will need to be able to understand natural language, manage complex tasks, and adapt to changing circumstances.
**Prompt engineering** will be a critical skill for building these agents. We'll need to learn how to craft prompts that elicit the desired behavior from the models. It's almost like coding, but with far less structure and no need to understand the "syntax". But we're a long way from that yet.
**Context management** will also be crucial. We'll need to develop techniques for storing and retrieving relevant information, so the models can make informed decisions.
## Papering Over the Cracks: Using MCP to Integrate Legacy Systems
At my workplace, were exploring how to use MCP to integrate legacy systems. Many organizations have a patchwork of different applications and databases that dont easily communicate with each other.
MCP can act as a bridge between these systems, allowing them to share data and functionality. It's like building a universal translator for your IT infrastructure.
This can significantly reduce the cost and complexity of integrating new applications and services, if we get the boilerplate right.
## Conclusion: The Dawn of a New Era in AI
MCP is not a silver bullet, but its a significant step forward in the evolution of AI. It provides a standardized and flexible framework for building intelligent agents that can seamlessly interact with the world around us.
I'm excited to see what the future holds for this technology. I believe it has the potential to transform the way we live and work.
If you're interested in learning more about MCP, I encourage you to check out the official website ([https://modelcontextprotocol.io/introduction](https://modelcontextprotocol.io/introduction)) and explore the various projects and resources that are available.
And if you're feeling adventurous, I encourage you to clone my mcpo project ([https://git.aridgwayweb.com/armistace/mcpo_mcp_servers](https://git.aridgwayweb.com/armistace/mcpo_mcp_servers)) and start building your own AI agents.
It's been a bit of a ride. Hopefully I'll get a few more projects that can utilise some of these services, but with so much new stuff happening my 'ooo squirrel' mentality could prove a bit of a headache... might be time to crack open the blog_creator and use CrewAI and MCP to create some research assistants on top of the drafters and editor!
Talk soon!