Compare commits


No commits in common. "master" and "gpt_oss__is_it_eee" have entirely different histories.

10 changed files with 53 additions and 399 deletions


@@ -1,72 +1,61 @@
 name: Build and Push Image
 on:
   push:
     branches:
       - master
 jobs:
   build:
     name: Build and push image
     runs-on: ubuntu-latest
     container: catthehacker/ubuntu:act-latest
     if: gitea.ref == 'refs/heads/master'
     steps:
       - name: Checkout
         uses: actions/checkout@v4
       - name: Create Kubeconfig
         run: |
           mkdir $HOME/.kube
           echo "${{ secrets.KUBEC_CONFIG_BUILDX_NEW }}" > $HOME/.kube/config
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
         with:
           driver: kubernetes
           driver-opts: |
             namespace=gitea-runner
             qemu.install=true
       - name: Login to Docker Registry
         uses: docker/login-action@v3
         with:
           registry: git.aridgwayweb.com
           username: armistace
           password: ${{ secrets.REG_PASSWORD }}
       - name: Build and push
         uses: docker/build-push-action@v5
         with:
           context: .
           push: true
           platforms: linux/amd64,linux/arm64
           tags: |
             git.aridgwayweb.com/armistace/blog:latest
-      - name: Trivy Scan
-        run: |
-          echo "Installing Trivy"
-          sudo apt-get update
-          sudo apt-get install -y wget apt-transport-https gnupg lsb-release
-          wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
-          echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
-          sudo apt-get update
-          sudo apt-get install -y trivy
-          trivy image --format table --exit-code 1 --ignore-unfixed --vuln-type os,library --severity HIGH,CRITICAL git.aridgwayweb.com/armistace/blog:latest
       - name: Deploy
         run: |
           echo "Installing Kubectl"
           apt-get update
           apt-get install -y apt-transport-https ca-certificates curl gnupg
           curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
           chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
           echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
           chmod 644 /etc/apt/sources.list.d/kubernetes.list
           apt-get update
           apt-get install kubectl
           kubectl delete namespace blog
           kubectl create namespace blog
           kubectl create secret docker-registry regcred --docker-server=${{ vars.DOCKER_SERVER }} --docker-username=${{ vars.DOCKER_USERNAME }} --docker-password='${{ secrets.DOCKER_PASSWORD }}' --docker-email=${{ vars.DOCKER_EMAIL }} --namespace=blog
           kubectl apply -f kube/blog_pod.yaml && kubectl apply -f kube/blog_deployment.yaml && kubectl apply -f kube/blog_service.yaml


@@ -1,22 +0,0 @@
-[core]
-    repositoryformatversion = 0
-    filemode = true
-    bare = false
-    logallrefupdates = true
-[remote "origin"]
-    url = gitea@192.168.178.155:armistace/blog.git
-    fetch = +refs/heads/*:refs/remotes/origin/*
-[branch "master"]
-    remote = origin
-    merge = refs/heads/master
-[branch "kube_deployment"]
-    remote = origin
-    merge = refs/heads/kube_deployment
-[branch "when_to_use_ai"]
-    remote = origin
-    merge = refs/heads/when_to_use_ai
-[pull]
-    rebase = false
-[branch "an_actual_solution_to_the_social_media_ban"]
-    remote = origin
-    merge = refs/heads/an_actual_solution_to_the_social_media_ban


@@ -1,52 +0,0 @@
Title: An Actual Solution to the Social Media Ban
Date: 2025-09-16 20:00
Modified: 2025-09-17 20:00
Category: Politics
Tags: politics, social media, tech policy
Slug: actual-social-media-solution
Authors: Andrew Ridgway
Summary: The Social Media ban is an abject failure of policy. I propose an actual technical solution that addresses the issues raised by the legislation and also ensures user privacy and data security through an opt-in solution.
## The Toothless Legislation
The Australian Government recently announced it would be “watering down” the requirements of the upcoming legislation regarding online safety. The irony isn't lost on anyone observing the situation. Specifically, the planned mandatory minimum “flag rate” for underage detection technology has been dropped, a clear indication that initial testing proved unachievable. Furthermore, the legislation now only requires tech companies to demonstrate “reasonable steps” to remove children from their platforms.
Let's be frank: this legislation, as it stands, achieves very little. Experts in the field consistently warned that the proposed age-verification approach was flawed and ignored industry input. The result? Parents are arguably in a worse position than before. The focus on punitive measures, rather than practical solutions, has been a misstep, and the relentless pursuit of this agenda by the eSafety Commissioner feels increasingly disconnected from reality.
It's important to state that criticism of this legislation isn't an endorsement of big tech; in fact, I'm actively working to reduce my own reliance on these platforms. It is about the Australian Government overreaching in an area where it lacks the necessary expertise and, frankly, the authority. The driving force behind this appears to be a personal vendetta, fuelled by someone unfamiliar with the fundamental principles of how the internet operates.
So, with the current legislation effectively neutered, what *can* the government do to genuinely help parents navigate the challenges of online safety? I believe there's a technically feasible solution that doesn't involve trampling on privacy or creating massive security vulnerabilities.
The answer lies in a system we've been using for decades: the Domain Name System (DNS). Simply put, DNS translates human-readable URLs like [https://blog.aridgwayweb.com](https://blog.aridgwayweb.com) into the corresponding IP address (e.g., x.x.x.x). It's a foundational component of the internet, and while seemingly simple, it's incredibly powerful.
## What is DNS?
Most people rely on the DNS provided by their Internet Service Provider (ISP) or the manufacturer of their router. However, it's possible to change this setting. Popular alternatives include Cloudflare's 1.1.1.1, Google's 8.8.8.8, and paid family-friendly options like OpenDNS. For those with more technical expertise, it's even possible to run your own DNS server; I personally use Pi-hole to block ads at the network level.
This existing infrastructure offers a unique opportunity. The Chinese government has long leveraged DNS as part of its “Great Firewall,” demonstrating its capability for large-scale internet censorship and control. While that application raises obvious concerns, the underlying technology itself isn't inherently malicious and is a good fit for the purposes of *opt-in* age verification.
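That name-to-address translation is a single call in most languages. A quick way to see it for yourself (a minimal sketch; it resolves `localhost` so no network is needed, but any public domain name works the same way):

```python
import socket

# DNS in one call: turn a name into an address. Resolving a public
# name like blog.aridgwayweb.com works identically but needs network
# access, so this example sticks to a name every machine can resolve.
ip = socket.gethostbyname("localhost")
print(ip)
```

Whichever resolver your router points at is the service answering that call, which is exactly why the choice of DNS server is such a powerful lever.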
<img alt="Current DNS" height="auto" width="100%" src="{attach}/images/dns_currently.png">
## How can we leverage DNS for age verification?
My proposal is straightforward: the Australian Government could establish a large-scale DNS server within the Communications Department. This server could be configured to redirect requests for specific websites, like Facebook or TikTok, to an internal service that requires some form of authentication or identity verification. Once verified, the request would then be forwarded to the correct IP address.
<img alt="Optional Government DNS" height="auto" width="100%" src="{attach}/images/optional_gov_dns.png">
This DNS server could be *optionally* configured on any router, with ISPs assisting less technically inclined customers. The result? Access to certain websites from that router would require passing through the government's age-verification process.
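The redirect described above is, at its core, a small piece of decision logic inside the resolver. A minimal sketch, with an entirely hypothetical domain list, portal address, and upstream table (addresses are drawn from the reserved documentation ranges, not real records; a real deployment would forward unlisted queries to a normal upstream resolver):

```python
# Hypothetical illustration of the proposed opt-in resolver; the domain
# list, portal address, and upstream table are made-up examples.
RESTRICTED = {"facebook.com", "tiktok.com"}
VERIFICATION_PORTAL_IP = "203.0.113.10"  # the government-run verification service

# Stand-in for a real upstream DNS lookup (documentation-range addresses).
UPSTREAM = {"facebook.com": "198.51.100.7", "example.org": "192.0.2.44"}

def resolve(domain: str, session_verified: bool) -> str:
    """Answer a DNS query: unverified requests for restricted domains are
    pointed at the verification portal; everything else resolves normally."""
    if domain in RESTRICTED and not session_verified:
        return VERIFICATION_PORTAL_IP
    return UPSTREAM[domain]

print(resolve("facebook.com", session_verified=False))  # portal address
print(resolve("facebook.com", session_verified=True))   # real address
```

Households that opt in simply point their router at this resolver; everyone else keeps their existing DNS and never touches it.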
The authentication could be managed by an adult in the household, providing a valid identity document to receive some form of auth mechanism (password? passkey? authenticator?) to allow the user to continue to their 'restricted' website.
Mobile phones could also have the internal DNS updated by manufacturers to incorporate this DNS setting.
This would allow for the creation of “Government-certified” or “Family-Friendly” devices, routers or phones pre-configured with this DNS server, ensuring a consistent level of online safety as defined by the Australian Government. These devices could be subsidised by the government to ensure accessibility for all families.
Crucially, this system is optional. Individuals who prefer to manage their own online security, as I do, would remain unaffected. However, for parents who lack the technical skills or desire to implement their own solutions, this offers a practical and effective alternative for managing their child's online safety.
This approach also avoids the need to collect and store sensitive identity data offshore. No tech company needs to be involved in the verification process, and the skills to build and maintain this system already exist within the Australian public service.
Furthermore, the eSafety Commissioner could easily update the list of websites subject to verification, providing a flexible and responsive system. It wouldn't cover the entire internet, of course, but it would provide a valuable safety net for those who need it.
## Where to from here?
Now that the government has acknowledged the shortcomings of its initial approach, it's time to explore real solutions. A government-run, family-friendly DNS system that routes certain domain names to a verification process is a solid starting point for a genuinely effective technical solution to help families navigate the online world.


@@ -1,41 +0,0 @@
Title: Apple And The Anti-Dev Platform
Date: 2025-08-28 20:00
Modified: 2025-08-28 20:00
Category: Tech, Software, Apple
Tags: Tech, Software, Apple
Slug: apple-anti-dev
Authors: Andrew Ridgway
Summary: Apple's requirements for developers are onerous; I detail some of the frustrations I've had whilst dealing with the platform to deploy a small app as part of my day job
## Introduction: Why I Hate Loving to Hate Apple
This week, I found myself in the unenviable position of using MacOS for work. It was like revisiting an old flame only to realize they've become *that* person—still attractive from afar, but toxic up close. Let me clarify: I'm not anti-Apple per se. I appreciate their design aesthetic as much as anyone. But when you're a developer, especially one with a penchant for Linux and a deep love for open-source, Apple's ecosystem feels like walking into a store where the sign says "Employee Discounts" but they charge you double for the privilege.
## 1. The Hardware-Software Tie-In: Why Buy New Every Year?
Let's talk about my borrowed MacBook from 2020. It was a kind gesture, right? But here's the kicker: this machine, which was cutting-edge just five years ago, is now deemed too old to run the latest MacOS. I needed Xcode for a project, and guess what? You can't run the latest version of Xcode without the latest MacOS. So, to paraphrase: "Sorry, but your device isn't *new enough* to develop on the Apple platform anymore." This isn't just inconvenient; it's a deliberate strategy to force upgrades. It's like buying a car that requires you to upgrade your entire garage every year just to keep it running.
## 2. Forced Obsolescence: The New "Upgrade" Cycle
Yes, Microsoft did the whole TPM 2.0 thing with Windows 11. But Apple takes it to another level. They've turned hardware into a subscription model without you even realizing it. You buy a device, and within a few years, it's obsolete for their latest software and tools. This isn't about security or innovation—it's about control. Why release an operating system that only works on devices sold in the last 12 months? It creates a false market for "new" hardware, padding Apple's margins at the expense of developers and users.
## 3. High Costs: The Developer Fee That Keeps On Giving
I honestly believe this boils down to money. To develop on Apple's platform, you need an Apple Developer account, which costs $150 AUD a year. Now, if I were to buy a new MacBook Pro today, that would set me back around $2,500 AUD. And for what? The privilege of being able to build apps on my own device? It's like paying a toll every year just to use the road you already own. It's enough to make you consider a career change and become a sheep farmer.
## 4. Lack of Freedom: Who Owns the Device Anyway?
Here's where it gets really egregious: Apple's developer review process. It's like being subjected to a TSA pat-down every time you want to build something, even if it's just for your own device. To deploy ANYTHING onto an iOS device I need to hand my government-issued licence over to Apple and let them "check I'm a real person". And no, this isn't just for App Store deployments, which I can understand. This is for any deployment; it's the only way to get a certificate to cross-sign the app and device... Google might be heading down a similar path, but at least you'll still be able to build for custom Android ROMs. On Apple, it feels like every step is designed to remind you that you're dancing in their sandbox—and they call the shots. If you use iOS you have to dance to their tune AT ALL TIMES.
## 5. The "Apple Tax": A Future Job Requirement
I think all developers and consultants should demand an "Apple Tax." It would be simple:
* $5,000 AUD for new Apple hardware.
* An additional 25% markup on development hours spent navigating Apples ecosystem.
Why? Because it's time developers passed on these costs to the users. It's time to make this hurt the consumers who insist on using these products with predatory business models for developers. Yes, developers go where the market is, but it's time to start charging that market so it understands the true cost to be there.
## Conclusion: Why I'll Keep Hating Loving to Hate Apple
Apple's ecosystem feels like a love story gone wrong—a relationship where one party keeps raising the stakes just to remind you of how much they control everything. Developers are supposed to be the disruptors, the rebels who challenge the status quo. But when your tools are designed to keep you tethered to a specific platform and its outdated business model, it feels less like innovation and more like indentured servitude. If you're still enamored with Apple's ecosystem and think it's "just part of the game," I urge you to take a long, hard look in the mirror. Because if this is your idea of progress, we're all in trouble.


@@ -1,188 +0,0 @@
Title: Designing and Building an AI Enhanced CCTV System
Date: 2026-02-02 20:00
Modified: 2026-02-03 20:00
Category: Homelab
Tags: proxmox, hardware, self host, homelab
Slug: ai-enhanced-cctv
Authors: Andrew Ridgway
Summary: Home CCTV security has become a bastion of cloud-subscription awfulness. This blog describes the work involved in creating your own home-grown AI-enhanced CCTV system. Unfortunately, what you save in subscription you lose in time, but if you value privacy, it's worth it.
### Why Build Your Own AI-Enhanced CCTV?
When you buy a consumer-grade security camera, you're not just paying for the lens and the plastic housing. You're also paying for a subscription that ships every frame of your backyard to a cloud service you'll never meet. That data can be used to train models, sold to advertisers, or handed over to authorities on a whim. For many, the convenience outweighs the privacy cost, but for anyone who values control over their own footage, the trade-off feels unacceptable.
The goal of this project was simple: **keep every byte of video on-premises, add a layer of artificial intelligence that makes the footage searchable and actionable, and do it all on a budget that wouldn't break the bank**. Over the past six months I've iterated on a design that satisfies those constraints, and the result is a fully local, AI-enhanced CCTV system that can tell you when a “red SUV” pulls into the driveway, or when a “dog wearing a bandana” wanders across the garden, without ever leaving the house.
---
### The Core Software: Frigate
At the heart of the system sits **Frigate**, an open-source network video recorder (NVR) that runs in containers and is configured entirely via a single YAML file. The simplicity of the configuration is a breath of fresh air compared with the sprawling JSON or proprietary GUIs of many commercial solutions. A few key reasons Frigate became the obvious choice:
| Feature | Why It Matters |
|---------|----------------|
| **Container-native** | Deploys cleanly on Docker, Kubernetes, or a lightweight LXC. No host-level dependencies to wrestle with. |
| **YAML-driven** | Human-readable, version-controlled, and easy to replicate across test environments. |
| **Built-in object detection** | Supports car, person, animal, and motorbike detection out of the box, with the ability to plug in custom models. |
| **Extensible APIs** | Exposes detection events, snapshots, and stream metadata for downstream automation tools. |
| **GenAI integration** | Recent addition that lets you forward snapshots to a local LLM (via Ollama) for semantic enrichment. |
The documentation is thorough, and the community is active enough that most stumbling blocks are resolved within a few forum posts. Because the entire system is defined in a single YAML file, I can spin up a fresh test instance in minutes, tweak a camera's FFmpeg options, and see the impact without rebuilding the whole stack.
---
### Choosing the Cameras: TP-Link Vigi C540
A surveillance system is only as good as the lenses feeding it. I needed cameras that could:
1. Deliver a reliable RTSP stream (the lingua franca of NVRs).
2. Offer pan-and-tilt so a single unit can cover a larger field of view.
3. Provide onboard human detection to reduce unnecessary bandwidth.
4. Remain affordable enough to allow for future expansion.
The **TP-Link Vigi C540** checked all those boxes. Purchased during a Black Friday sale for roughly AUD $50 each, the three units I started with have proven surprisingly capable:
- **Pan/Tilt**: Allows a single camera to sweep a driveway or front porch, reducing the number of physical devices needed.
- **Onboard human detection**: The camera can flag a person locally, which helps keep the upstream bandwidth low when the NVR is busy processing other streams.
- **RTSP output**: Perfectly compatible with Frigate's ingest pipeline.
- **No zoom**: A minor limitation, but the field of view is wide enough for my modest property.
The cameras are wired via Ethernet, a decision driven by reliability concerns. Wireless links are prone to interference, especially when the cameras are placed near metal roofs or dense foliage. Running Ethernet required a bit of roof work (more on that later), but the resulting stable connection has paid dividends in stream consistency.
---
### The Host Machine: A Budget Dell Workstation
All the AI magic lives on a modest **Dell OptiPlex 7050 SFF** that I rescued for $150. Its specifications are:
- **CPU:** Intel i5-7500 (4 cores, 3.4 GHz)
- **RAM:** 16 GB DDR4
- **Storage:** 256 GB SSD for the OS and containers, 2 TB HDD for video archives
- **GPU:** Integrated Intel HD Graphics 630 (no dedicated accelerator)
Despite lacking a powerful discrete GPU, the workstation runs Frigate's **OpenVINO**-based SSDLite MobileNetV2 detector comfortably. The model is small enough to execute on the integrated graphics, keeping inference latency low enough for real-time alerts. CPU utilization hovers around 70-80% under typical load, which is high but acceptable for a home lab. The system does run warm, so I've added a couple of case fans to keep temperatures in the safe zone.
The storage layout is intentional: the SSD hosts the OS, Docker engine, and Frigate container, ensuring fast boot and container start times. The 2 TB HDD stores raw video, detection clips, and alert snapshots. With the current retention policy (7 days of full footage, 14 days of detection clips, 30 days of alerts) the drive is comfortably sized, though I plan to monitor usage as I add more cameras.
---
### Wiring It All Together: Proxmox and Docker LXC
To keep the environment tidy and reproducible, I run the entire stack inside a **Proxmox VE** cluster. A dedicated node hosts a **Docker-enabled LXC container** that isolates the NVR from the rest of the homelab. This approach offers several benefits:
- **Resource isolation**: CPU and memory limits can be applied per container, preventing a runaway process from starving other services.
- **Snapshot-ready**: Proxmox can snapshot the whole container, giving me a quick rollback point if a configuration change breaks something.
- **Portability**: The LXC definition can be exported and re-imported on any other Proxmox host, making disaster recovery straightforward.
Inside the container, Docker orchestrates the Frigate service, an Ollama server (hosting the LLM models), and a lightweight reverse proxy for HTTPS termination. All traffic stays within the local network; the only external connections are occasional model downloads from Hugging Face and the occasional software update.
---
### From Detection to Context: The Ollama Integration
Frigate's native object detection tells you *what* it sees (e.g., “person”, “car”, “dog”). To turn that into *meaningful* information, I added a **GenAI** layer using **Ollama**, a self-hosted LLM runtime that can serve vision-capable models locally.
The workflow is as follows:
1. **Frigate detects an object** and captures a snapshot of the frame.
2. The snapshot is sent to **Ollama** running the `qwen3-vl-4b` model, which performs **semantic analysis**. The model returns a textual description such as “a white ute with a surfboard on the roof”.
3. Frigate stores this enriched metadata alongside the detection event.
4. When a user searches the Frigate UI for “white ute”, the system can match the description generated by the LLM, dramatically narrowing the result set.
5. For real-time alerts, a smaller model (`qwen3-vl-2b`) is invoked to generate a concise, human-readable sentence that is then forwarded to Home Assistant.
Because the LLM runs locally, there is no latency penalty associated with round-trip internet calls, and privacy is preserved. The only external dependency is the occasional model pull from Hugging Face during the initial setup or when a newer version is released.
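Step 2 boils down to an HTTP POST to the local Ollama instance. A sketch of the request body follows; the model name and prompt are illustrative, and the actual POST to Ollama's `/api/generate` endpoint is left out so the example stands alone:

```python
import base64
import json

def build_describe_request(snapshot: bytes, model: str = "qwen3-vl-4b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint; vision
    models accept base64-encoded images in the `images` field."""
    return {
        "model": model,
        "prompt": ("Describe this CCTV frame in one short sentence, "
                   "listing objects, their colours, and any actions."),
        "images": [base64.b64encode(snapshot).decode("ascii")],
        "stream": False,  # ask for one complete response, not a token stream
    }

body = build_describe_request(b"<jpeg bytes from the Frigate snapshot>")
print(json.dumps(body)[:80])
```

Frigate's GenAI integration makes this call itself once configured; the sketch just shows roughly what crosses the wire to `http://localhost:11434`.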
---
### Home Assistant: The Glue That Binds
While Frigate handles video ingestion and object detection, **Home Assistant** provides the automation backbone. By integrating Frigate's webhook events into Home Assistant, I can:
- **Trigger notifications** via Matrix when a detection meets certain criteria.
- **Run conditional logic** to decide whether an alert is worth sending (e.g., ignore cars on the street but flag a delivery van stopping at the gate).
- **Log events** into a time-series database for later analysis.
- **Expose the enriched metadata** to any other smart-home component that might benefit from it (e.g., turning on porch lights when a person is detected after dark).
The Home Assistant configuration lives in its own YAML file, mirroring the philosophy of “infrastructure as code”. This makes it easy to version-control the automation logic alongside the NVR configuration.
---
### Semantic Search: Finding a Needle in a Haystack
One of the most satisfying features of the system is the ability to **search footage using natural language**. Traditional NVRs only let you filter by timestamps or simple motion events. With the GenAI-enhanced metadata, the search bar becomes a powerful query engine:
- Typing “red SUV” returns all clips where the LLM described a vehicle as red and an SUV.
- Searching “dog with a bandana” surfaces the few moments a neighbour's pet decided to wear a fashion accessory.
- Combining terms (“white ute with surfboard”) narrows the results to a single delivery that happened last weekend.
Under the hood, the search is a straightforward text match against the stored descriptions, but the quality of those descriptions hinges on the LLM prompts. Fine-tuning the prompts has been an ongoing task, as the initial attempts produced generic phrases like “a vehicle” that were not useful for filtering.
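Since the match really is plain text against the stored descriptions, the core of the search can be sketched in a few lines (the event records here are invented for illustration):

```python
def search(events, query: str):
    """Return events whose LLM-generated description contains every
    query term (case-insensitive) -- the straightforward text match
    described above."""
    terms = query.lower().split()
    return [e for e in events
            if all(t in e["description"].lower() for t in terms)]

events = [
    {"clip": "cam1-0907.mp4", "description": "a red SUV parked in the driveway"},
    {"clip": "cam2-1130.mp4", "description": "a dog wearing a blue bandana"},
    {"clip": "cam1-1502.mp4", "description": "a white ute with a surfboard on the roof"},
]

print([e["clip"] for e in search(events, "red suv")])    # ['cam1-0907.mp4']
print([e["clip"] for e in search(events, "white ute")])  # ['cam1-1502.mp4']
```

Everything hinges on the descriptions being specific, which is exactly why the prompt iteration mentioned above matters so much.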
---
### Managing Storage and Retention
Video data is notoriously storage-hungry. To keep the system sustainable, I adopted a tiered retention policy:
| Data Type | Retention | Approx. Size (4 cameras) |
|------------|-----------|--------------------------|
| Full video (raw RTSP) | 7 days | ~1.2 TB |
| Detection clips (30 s each) | 14 days | ~300 GB |
| Alert snapshots (high-res) | 30 days | ~150 GB |
The SSD holds the operating system and container images, while the HDD stores the bulk of the video. When the HDD approaches capacity, a simple cron job rotates out the oldest files, ensuring the system never runs out of space. In practice, the 2 TB drive has been more than sufficient for the current camera count, but I have a spare 4 TB drive on standby for future expansion.
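That cron job is conceptually simple: sort recordings oldest-first and delete until the archive fits a byte budget. A sketch of that shape (Frigate's own retention settings are the better tool for the tiers above; this is just the safety-net script):

```python
from pathlib import Path

def rotate(archive: Path, budget_bytes: int) -> list[str]:
    """Delete the oldest .mp4 files until the archive fits the budget.
    Returns the names of the files removed, oldest first."""
    files = sorted(archive.glob("*.mp4"), key=lambda f: f.stat().st_mtime)
    total = sum(f.stat().st_size for f in files)
    removed = []
    for f in files:                     # oldest first
        if total <= budget_bytes:
            break
        total -= f.stat().st_size
        f.unlink()
        removed.append(f.name)
    return removed
```

Run from cron against the HDD mount point with a budget a little under the drive's capacity, it guarantees the newest footage always survives.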
---
### Lessons Learned: The Good, the Bad, and the Ugly
#### 1. **Performance Is a Balancing Act**
Running inference on an integrated GPU is feasible, but the CPU load remains high. Adding a modest NVIDIA GTX 1650 would drop CPU usage dramatically and free headroom for additional cameras or more complex models.
#### 2. **Prompt Engineering Is Real Work**
The LLM's output quality is directly tied to the prompt. Early attempts used a single sentence like “Describe the scene,” which resulted in vague answers. Iterating on a multi-step prompt that asks the model to list objects, colors, and actions has produced far richer metadata.
#### 3. **Notification Fatigue Is Real**
Initially, every detection triggered a push notification, flooding my phone with alerts for passing cars and stray cats. By adding a simple confidence threshold and a “time-of-day” filter in Home Assistant, I reduced noise by 80%.
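The filter that cut the noise is little more than a couple of guards. A sketch with made-up thresholds (my real rules live in Home Assistant YAML, not Python):

```python
def should_notify(label: str, confidence: float, hour: int) -> bool:
    """Decide whether a detection deserves a push notification.
    The threshold and quiet-hour values are illustrative."""
    if confidence < 0.75:       # drop low-confidence detections
        return False
    if label == "cat":          # never page for stray cats
        return False
    if label == "car":          # passing cars only matter overnight
        return hour >= 22 or hour < 6
    return True                 # people, dogs, delivery vans, ...

print(should_notify("person", 0.92, 14))  # True
print(should_notify("car", 0.92, 14))     # False: daytime traffic
print(should_notify("car", 0.92, 23))     # True: a car at 11 pm is worth a look
```

Two cheap conditions, and the phone only buzzes when it should.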
#### 4. **Network Stability Matters**
Wired Ethernet eliminated the jitter that plagued my early Wi-Fi experiments. The only hiccup was a miswired patch panel that caused occasional packet loss; a quick audit resolved the issue.
#### 5. **Documentation Pays Off**
Because Frigate's configuration is YAML-based, I could version-control the entire stack in a Git repository. When a change broke the FFmpeg pipeline, a `git revert` restored the previous working state in minutes.
---
### Future Enhancements: Where to Go From Here
- **GPU Upgrade**: Adding a dedicated inference accelerator (e.g., an Intel Arc or NVIDIA RTX) to improve detection speed and lower CPU load.
- **Dynamic Prompt Generation**: Using a small LLM to craft context-aware prompts based on the time of day, weather, or known events (e.g., “delivery” vs. “visitor”).
- **Smart Notification Decision Engine**: Training a lightweight classifier that decides whether an alert is worth sending, based on historical user feedback.
- **Edge-Only Model Updates**: Caching Hugging Face models locally and scheduling updates during off-peak hours to eliminate any internet dependency after the initial download.
- **Multi-Camera Correlation**: Linking detections across cameras to track a moving object through the property, enabling a “follow-the-intruder” view.
---
### A Personal Note: The Roof, the Cables, and My Dad
All the technical wizardry would have been for naught if I hadn't managed to get Ethernet cables from the house's main distribution board up to the roof where the cameras sit. I'm decent with Docker, YAML, and LLM prompts, but I'm hopeless when it comes to climbing ladders and threading cables through roof joists.
Enter my dad. He spent an entire Saturday hauling a coil of Cat6, pulling the cables into the roof space while I fumbled with the tools. He didn't care that I'd rather be writing code than wielding a hammer; there were apparently four days of pain afterwards, so please know the help was truly appreciated. The result is a rock-solid wired backbone that keeps the cameras streaming without hiccups.
Thank you, Dad. Your patience, muscle, and willingness to get your hands dirty made this whole system possible.
---
### Bringing It All Together: The Architecture
<img alt="CCTV Architecture" height="auto" width="100%" src="{attach}/images/CCTV_ARCH.png">
---
### Closing Thoughts
Building an AI-enhanced CCTV system from the ground up has been a rewarding blend of hardware tinkering, software orchestration, and a dash of machine-learning experimentation. The result is a **privacy-first, locally owned surveillance platform** that does more than just record—it understands. It can answer natural-language queries, send context-rich alerts, and integrate seamlessly with a broader home-automation ecosystem.
If you're a hobbyist, a small-business owner, or anyone who values data sovereignty, the stack described here offers a solid foundation. Start with a single camera, get comfortable with Frigate's YAML configuration, and gradually layer on the AI components. Remember that the most valuable part of the journey is the learning curve: each tweak teaches you something new about video streaming, inference workloads, and the quirks of your own network.
So, roll up your sleeves, grab a ladder (or enlist a dad), and give your home the eyes it deserves—without handing the footage over to a faceless cloud. The future of home surveillance is local, intelligent, and, most importantly, under your control. Cheers!


@@ -1,31 +0,0 @@
Title: Google AI is Rising
Date: 2025-12-21 20:00
Modified: 2025-12-23 10:00
Category: AI
Tags: AI, Google, Tech
Slug: google-ai-is-rising
Authors: Andrew Ridgway
Summary: After a period of seeming hesitation, one tech giant is now a serious contender in the AI race. Leveraging its massive and uniquely personal datasets, gleaned from widely used services like search, email, and calendars, it's releasing models that are quickly challenging existing benchmarks. This arrival is significant, creating a more competitive landscape and potentially pushing innovation forward. However, it also highlights crucial privacy concerns given the depth of data access. The company's recent open-source contributions suggest a multifaceted approach, but users should be mindful of data control and consider diversifying their digital footprint.
# Google AI is Rising
The landscape of Artificial Intelligence is shifting, and a familiar name is finally asserting its dominance. For a while there, it felt like Google was… well, lagging. Given the sheer volume of data at its disposal, it was a surprise to many that they weren't leading the charge in Large Language Models (LLMs). But the moment appears to have arrived. Google seems to have navigated its internal complexities and is now delivering models that are genuinely competitive, and in some cases, surpassing the current benchmarks.
The key to understanding Google's potential lies in the data they've accumulated. Consider the services we willingly integrate into our daily lives: email through Gmail, scheduling with Google Calendar, advertising interactions, and of course, the ubiquitous Google Search. Crucially, we provide this data willingly, often tied to a single Google account. This isn't just a large dataset; it's a *targeted* dataset, offering an unprecedented level of insight into individual behaviours and preferences.
This data advantage is now manifesting in the performance of Gemini, Googles latest LLM. Recent discussions within the tech community on platforms like [Hacker News](https://news.ycombinator.com/item?id=46301851) and [Reddit](https://www.reddit.com/r/singularity/comments/1p8sd2g/experiences_with_chatgpt51_vs_gemini_3_pro/) and [Reddit](https://www.reddit.com/r/GeminiAI/comments/1p953al/gemini_seems_to_officially_be_better_than_chatgpt/) suggest Gemini is rapidly gaining ground, and in some instances, exceeding the capabilities of established models.
Googles history is one of immense scale and profitability, exceeding the GDP of many nations. This success, however, has inevitably led to the creation of large, protective bureaucracies. While necessary for safeguarding revenue streams, these structures can stifle innovation and slow down decision-making. Ideas often have to navigate multiple layers of management, sometimes overseen by individuals whose expertise lies in business administration rather than the intricacies of neural networks and algorithmic functions.
The arrival of a truly competitive Google model is a significant development. OpenAI, previously considered the frontrunner, now faces a formidable challenge. Furthermore, Anthropic is gaining traction amongst developers, with many preferring their models for coding assistance. This shift suggests a growing demand for tools tailored to specific professional needs.
It's important to acknowledge that neither Google nor OpenAI is an inherently benevolent entity. However, with Google now fully engaged in the LLM race, the potential implications are considerable. Gemini's access to deeply personal data (email content, calendar events, even metadata) raises legitimate privacy concerns. It's a sobering thought to consider the extent of data visibility Google possesses, particularly when we don't directly own the services we use. This reality strengthens the argument for greater data control and the exploration of self-hosted alternatives.
Google's commitment to open-source initiatives, demonstrated through the release of the Gemma models (which, incidentally, powered the creation of this very blog), signals a broader strategy. The technology is here, it's evolving rapidly, and its influence will only continue to grow.
While complete resistance may be unrealistic, individuals can take steps to mitigate potential risks. Fragmenting your data across different services, diversifying email providers, and avoiding single sign-on (SSO) with Google are all proactive measures that can help reclaim a sense of control. (Though, let's be honest, anyone still using Chrome is already operating within a highly monitored ecosystem.)
The future of AI is unfolding quickly, and Google is now a major player. It's a development that warrants careful consideration, and a renewed focus on data privacy and digital autonomy.


@ -3,13 +3,12 @@ Date: 2025-08-12 20:00
Modified: 2025-08-14 20:00
Category: Politics, Tech, AI
Tags: politics, tech, Ai
-Slug: gpt-oss-eee
+Slug: social-media-ban-fail
Authors: Andrew Ridgway
Summary: GPT OSS is here from OpenAI, the first open-weight model from them since GPT-2. My question is... why now?
# Human Introduction
This has been a tough one for the publishing house to get right. I've had it generate 3 different drafts and this is still the result of quite the edit. Today's blog was written by:
1. Gemma:27b - Editor
2. GPT-OSS - Journalist
3. Qwen3:14b - Journalist
@ -54,7 +53,7 @@ Now, I'm not accusing OpenAI of anything here—just pointing out that they
* OpenAI has dominated the consumer AI market with their **ChatGPT** and other tools.
* They've been losing ground in the developer market, where models like [Gemini](https://deepmind.google/models/gemini/pro/) and particularly [Claude (Anthropic)](https://claude.ai/) are gaining traction in the proprietary space.
-* Now they're releasing open weight models that promise to compete at GPT-4 levels to try and bring in the Deepseek and Qwen crowd.
+* Now they're releasing open-source models that promise to compete at GPT-4 levels to try and bring in the Deepseek and Qwen crowd.
The timing feels a bit too convenient. OpenAI is essentially saying: “We get it. You want local, affordable, and flexible AI? We've got you covered.” But will this be enough to win back the developer community? Or are they just delaying the inevitable?

3 binary image files removed, previews not shown (previously 201 KiB, 212 KiB, and 292 KiB).