Compare commits

...

56 Commits

Author SHA1 Message Date
2905c2917f Update src/content/when_to_use_ai.md
All checks were successful
Build and Push Image / Build and push image (push) Successful in 18m18s
2025-06-06 13:33:36 +10:00
ec0798dff2 Update for clarity of the AI response
Some checks failed
Build and Push Image / Build and push image (push) Has been cancelled
2025-06-06 13:32:19 +10:00
f41dbc802e Merge pull request 'when_to_use_ai' (#12) from when_to_use_ai into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 11m27s
Reviewed-on: #12
2025-06-06 12:31:58 +10:00
e7f996306f Human edit and update 2025-06-06 12:30:42 +10:00
Blog Creator
22c521293f 'Clarify AI usage scenarios effectively.' 2025-06-06 02:28:07 +00:00
Blog Creator
74f8c8607b 'Add clarity on AI limitations.' 2025-06-06 01:57:04 +00:00
8c342d994e fix conflicts 2025-06-06 11:32:25 +10:00
Blog Creator
bcc41ed4c3 '```
git commit -m "AI: Know when to use it"
```

**Explanation of the commit message:**

*   **Concise:** It's short and to the point, adhering to the common 5-word limit.
*   **Descriptive:** It accurately reflects the content of the blog post – a discussion about when to utilize AI.
*   **Action-oriented:**  Implies a need for thoughtful consideration.
'
2025-06-06 11:30:38 +10:00
Blog Creator
fc48d9a4fd '```
feat: Add blog post on AI usage scenarios
```'
2025-06-06 11:30:03 +10:00
Blog Creator
191805461d '```
feat: Add blog post on AI usage scenarios
```'
2025-06-06 11:29:09 +10:00
Blog Creator
c2142f7f63 '```
git commit -m "AI: Know when to use it"
```

**Explanation of the commit message:**

*   **Concise:** It's short and to the point, adhering to the common 5-word limit.
*   **Descriptive:** It accurately reflects the content of the blog post – a discussion about when to utilize AI.
*   **Action-oriented:**  Implies a need for thoughtful consideration.
'
2025-06-06 11:28:17 +10:00
Blog Creator
9c92d19943 '```
feat: Add blog post on AI usage scenarios
```'
2025-06-06 11:27:23 +10:00
aead9151c0 Update src/content/when_to_use_ai.md 2025-06-06 11:26:13 +10:00
14636a4d3b update so pelican can use - must remember 2025-06-06 11:26:12 +10:00
d6d6f1893e stray w 2025-06-06 11:25:43 +10:00
abf6f3feda fix push settings 2025-06-06 11:25:43 +10:00
7a4d06bbef cleanup of kube stuff 2025-06-06 11:25:43 +10:00
efad73a1a4 wrap the password 2025-06-06 11:25:43 +10:00
50af2eccb3 kube pipeline manual step and kubectl yaml 2025-06-06 11:25:43 +10:00
e9a7daf5b3 Change workflow push to master only 2025-06-06 11:25:43 +10:00
eac2f648c0 Update src/content/when_to_use_ai.md
All checks were successful
Build and Push Image / Build and push image (push) Successful in 7m30s
2025-06-06 10:29:51 +10:00
79d53d3462 update so pelican can use - must remember
All checks were successful
Build and Push Image / Build and push image (push) Successful in 7m23s
2025-06-06 10:17:13 +10:00
6dd6e92d93 Merge pull request 'stray -' (#11) from kube_deployment into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 7m41s
Reviewed-on: #11
2025-06-06 10:00:37 +10:00
3db9f63246 stray w 2025-06-06 09:59:47 +10:00
6552ba503b Merge pull request 'fix push settings' (#10) from kube_deployment into master
Some checks failed
Build and Push Image / Build and push image (push) Failing after 6m1s
Reviewed-on: #10
2025-06-06 09:53:14 +10:00
202c787f19 fix push settings 2025-06-06 09:52:38 +10:00
61324ff499 Merge pull request 'kube_deployment' (#9) from kube_deployment into master
Some checks failed
Build and Push Image / Build and push image (push) Failing after 9m24s
Reviewed-on: #9
2025-06-06 09:32:18 +10:00
859c40c55c cleanup of kube stuff 2025-06-06 09:31:43 +10:00
92dd043b35 wrap the password 2025-06-06 09:29:24 +10:00
761265a467 kube pipeline manual step and kubectl yaml 2025-06-06 09:25:31 +10:00
da1bfd4779 Change workflow push to master only
All checks were successful
Build and Push Image / Build and push image (push) Successful in 10m7s
2025-06-04 20:10:41 +10:00
Blog Creator
1f4b7100bc '```
git commit -m "AI: Know when to use it"
```

**Explanation of the commit message:**

*   **Concise:** It's short and to the point, adhering to the common 5-word limit.
*   **Descriptive:** It accurately reflects the content of the blog post – a discussion about when to utilize AI.
*   **Action-oriented:**  Implies a need for thoughtful consideration.
'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 07:43:58 +00:00
Blog Creator
3ed8f91303 '```
feat: Add blog post on AI usage scenarios
```'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 07:18:43 +00:00
49167ee308 Merge pull request 'when_to_use_ai' (#8) from when_to_use_ai into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 5m59s
Reviewed-on: #8
2025-05-30 16:28:20 +10:00
b9210910f5 Had to make a new one and this looked better
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 16:27:36 +10:00
Blog Creator
040d90ce73 '```git
git commit -m "AI: Know when to apply it"
```

**Explanation of the commit message:**

*   **Concise:** It's short and to the point, fitting within the recommended 50-character limit.
*   **Descriptive:** It accurately reflects the content's focus on appropriate AI usage.
*   **Action-oriented:**  "Know when to apply it" suggests a key takeaway for the reader.
'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 06:20:58 +00:00
70c1dfdbb2 Merge pull request 'when_to_use_ai' (#7) from when_to_use_ai into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 9m32s
Reviewed-on: #7
2025-05-30 15:17:31 +10:00
f5b370e048 update for formatting error
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 15:16:26 +10:00
678d7f4308 Added human intro to "when to use ai"
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 15:15:35 +10:00
Blog Creator
f3582e5881 '```git
git commit -m "Analyze AI use cases and limitations"
```
'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 05:08:43 +00:00
Blog Creator
74fb66d81e '```
When to use #AI carefully
```'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 04:55:07 +00:00
Blog Creator
874df3c8c3 '```
Add blog post on AI usage scenarios
```'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 04:46:10 +00:00
Blog Creator
2280630149 'Sure, here's your requested 5-word commit message for the blog post:
"AI vs Traditional: When & Why?"'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 04:30:27 +00:00
Blog Creator
9b440b775a '# Commit Message
When to use AI: Structured Tasks vs Complex Decisions 🤖🔍📊

<|end_of_solution|>'
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-05-30 01:03:18 +00:00
Blog Creator
1de70f3e48 '**Commit Message:**
When to use AI: Exploring Scenarios Where Human Expertise Still Shines Over LLMs 🚀

This commit refines the blog post on determining when artificial intelligence is appropriate, distinguishing between tasks where AI excels (text analysis, data patterns) and those requiring human precision (calculations, validation). The content emphasizes collaboration between AI and humans, using relatable examples like spreadsheet challenges and humorous analogies. Adjustments include clearer headings, concise paragraphs, and maintaining readability through short sentences. Humor is preserved to engage the audience effectively. 🚀

**Changes Made:**
- Updated "shudders" project explanation for clarity.
- Enhanced precision in AI limitations (mathematical accuracy vs. LLMs).
- Streamlined text interpretation triumph section with examples.
- Adjusted data cleaning conundrum to focus on preprocessing nuances.
- Finalized bottom line with actionable guidelines and closing humor.

**Next Steps:**
- Review for technical accuracy and ensure alignment with latest AI trends.
- Incorporate reader feedback from initial drafts into the final post.
- Optimize SEO keywords related to AI use cases for broader reach.

**Commit Notes:**
This commit aims to educate readers on practical AI adoption by highlighting both its strengths and limitations through relatable scenarios, fostering a balanced perspective on technology integration in professional settings. 🚀

<|end_of_solution|>'
Some checks failed
Build and Push Image / Build and push image (push) Has been cancelled
2025-05-30 00:34:53 +00:00
8f50570084 Human edit to AI written draft
Some checks are pending
Build and Push Image / Build and push image (push) Waiting to run
game some context.. also was a bit of a mistake I think
2025-05-30 10:22:08 +10:00
Blog Creator
57502673de '# Commit Message: When to use AI - Fuzzy Logic & Context vs Precision Tasks
This commit adds a detailed blog post on determining when to employ AI, focusing on scenarios where AI excels (fuzzy matching, NLP) versus situations requiring human oversight (precision tasks). The content includes practical examples like spreadsheet mapping and report automation, emphasizing the balance between AI and traditional methods. Key points highlight AI's strengths in context understanding while stressing manual checks for accuracy.

**Changes Made:**
- Expanded explanations with relatable examples.
- Integrated humor to engage readers.
- Structured scenarios clearly (work orders, reports) for clarity.

<|end_of_solution|>'
Some checks failed
Build and Push Image / Build and push image (push) Has been cancelled
2025-05-29 23:52:25 +00:00
aeb05e6df4 Merge pull request 'production typos' (#6) from first_ai_post into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 6m57s
Reviewed-on: #6
2025-01-21 21:32:12 +10:00
28595ead23 production typos
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-01-21 22:31:27 +11:00
515fd10869 Merge pull request 'updates with deepseek' (#5) from first_ai_post into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 6m29s
Reviewed-on: #5
2025-01-21 21:11:37 +10:00
72e1e7f12a updates with deepseek
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-01-21 22:10:41 +11:00
b7097527e5 Merge pull request 'first ai blog post...sort of' (#4) from first_ai_post into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 6m36s
Reviewed-on: #4
2025-01-21 17:41:09 +10:00
c27ef8e638 first ai blog post...sort of
All checks were successful
Build and Push Image / Build and push image (push) Has been skipped
2025-01-21 18:40:01 +11:00
d6789a84f1 update base.html to use https
All checks were successful
Build and Push Image / Build and push image (push) Successful in 6m8s
2025-01-21 15:45:46 +10:00
0cbc30050c Merge pull request 'fix-socials' (#3) from fix-socials into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 10m5s
Reviewed-on: #3
2025-01-21 14:14:24 +10:00
5e343f38bd Merge pull request 'changes to template and devconf for socials fix' (#2) from fix-socials into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 8m49s
Reviewed-on: #2
2025-01-21 11:51:04 +10:00
7 changed files with 450 additions and 39 deletions


@@ -1,42 +1,61 @@
 name: Build and Push Image
-on: [ push ]
+on:
+  push:
+    branches:
+      - master
 jobs:
   build:
     name: Build and push image
     runs-on: ubuntu-latest
     container: catthehacker/ubuntu:act-latest
     if: gitea.ref == 'refs/heads/master'
     steps:
       - name: Checkout
         uses: actions/checkout@v4
       - name: Create Kubeconfig
         run: |
           mkdir $HOME/.kube
           echo "${{ secrets.KUBEC_CONFIG_BUILDX }}" > $HOME/.kube/config
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
         with:
           driver: kubernetes
           driver-opts: |
             namespace=gitea-runner
             qemu.install=true
       - name: Login to Docker Registry
         uses: docker/login-action@v3
         with:
           registry: git.aridgwayweb.com
           username: armistace
           password: ${{ secrets.REG_PASSWORD }}
       - name: Build and push
         uses: docker/build-push-action@v5
         with:
           context: .
           push: true
           platforms: linux/amd64,linux/arm64
           tags: |
             git.aridgwayweb.com/armistace/blog:latest
+      - name: Deploy
+        run: |
+          echo "Installing Kubectl"
+          apt-get update
+          apt-get install -y apt-transport-https ca-certificates curl gnupg
+          curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+          chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+          echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
+          chmod 644 /etc/apt/sources.list.d/kubernetes.list
+          apt-get update
+          apt-get install kubectl
+          kubectl delete namespace blog
+          kubectl create namespace blog
+          kubectl create secret docker-registry regcred --docker-server=${{ vars.DOCKER_SERVER }} --docker-username=${{ vars.DOCKER_USERNAME }} --docker-password='${{ secrets.DOCKER_PASSWORD }}' --docker-email=${{ vars.DOCKER_EMAIL }} --namespace=blog
+          kubectl apply -f kube/blog_pod.yaml && kubectl apply -f kube/blog_deployment.yaml && kubectl apply -f kube/blog_service.yaml

kube/blog_deployment.yaml (new file, 24 lines)

@@ -0,0 +1,24 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-deployment
  labels:
    app: blog
  namespace: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: git.aridgwayweb.com/armistace/blog:latest
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: regcred

kube/blog_pod.yaml (new file, 13 lines)

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: blog
  namespace: blog
spec:
  containers:
  - name: blog
    image: git.aridgwayweb.com/armistace/blog:latest
    ports:
    - containerPort: 8000
  imagePullSecrets:
  - name: regcred

kube/blog_service.yaml (new file, 13 lines)

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: blog-service
  namespace: blog
spec:
  type: NodePort
  selector:
    app: blog
  ports:
  - port: 80
    targetPort: 8000
    nodePort: 30009


@@ -0,0 +1,260 @@
Title: The Start Of An AI Blogger
Date: 2025-01-21 20:00
Modified: 2024-01-25 18:00
Category: AI, Data
Tags: ai, python
Slug: ai-blog-begginning
Authors: Andrew Ridgway
Summary: What if I had AI write the content from the summary?
#### Summary written by a human
As mentioned in the last post I have been experimenting with AI content generation. Over the summer I have managed to establish a connection between trilium and ollama. This connection is facilitated by python and enables me to write a quick summary of ideas for a post and have ollama generate a blog post off it.
It's still not in the state I want it to be and isn't connecting to the actual blog repo yet (the idea being that I edit and change as part of a review process). But it is generating stuff that I thought would be worth a share just for fun. The eventual idea is that the container will be something I trigger as part of a CI/CD process once the equipment is up and running, but the final "production" implementation is still being fleshed out in my head (if you can call a homelab project "production").
The focus to this point has been on prompt engineering and model selection. A big part of this is that it needs to be able to run completely independently of any cloud services, so no ChatGPT.
The obvious solution is [ollama](https://ollama.com). I'm lucky enough to have a secondary gaming rig in my living room with an Nvidia 2060 in it that can act as a modest AI server, so I have set it up there.
This server has allowed me to experiment, almost at will, with models. A few I tried included:
- llama3.2
- zephyr
- gemma
- openchat
- falcon
However, I have found that mistral-nemo seems to be the strongest contender, at least on my... modest... hardware.
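If you want to poke around the same way, something like this rough sketch will run one prompt across every model a local Ollama instance has pulled. To be clear, this isn't the blog_creator code, just an illustration built on Ollama's documented `/api/tags` and `/api/generate` endpoints:

```python
# Rough illustration only (not the blog_creator code): list whatever models a
# local Ollama instance has pulled, then run the same prompt through each one.
import requests

OLLAMA = "http://localhost:11434"  # Ollama's default address

def installed_models() -> list[str]:
    tags = requests.get(f"{OLLAMA}/api/tags", timeout=30).json()
    return [m["name"] for m in tags.get("models", [])]

def try_prompt(prompt: str) -> None:
    for model in installed_models():
        reply = requests.post(
            f"{OLLAMA}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=600,
        ).json()
        print(f"=== {model} ===\n{reply['response'][:300]}\n")

if __name__ == "__main__":
    try_prompt("Write a 100 word blog intro about home lab AI servers.")
```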
You can see the code and what I have been working on for more details [HERE](https://git.aridgwayweb.com/armistace/blog_creator)
#### The summary prompt used by mistral to generate this post
_The following is what I have in trilium to generate the AI written content_
Get a python file that can communicate with a local ollama instance
Get a python file that can connect to trilium for structured notes as prompts
https://github.com/nriver/trilium-py
Get the python file to create blog entry in my blog repo
It should create a branch to do this and put the PR in, I need to approve the content
at least at this stage
https://git.aridgwayweb.com/blog
Send a notification to matrix that there is a new PR in and give a summary of the blog (20 words)
_As you can see it's pretty light on content, so what it generates given this lack of context is, I feel, pretty good_
# Let the post begin (begin...begin...begin...)
_What follows is what it generated... yes it is hilarious, but well... it was too good not to share_
# G'day, mates! Let's make an Ollama Blog Writer!
Ever felt like you've got more ideas than time to write 'em up? Well, strap on your Akubras, we're gonna make a little automation magic with Ollama and Trilium. Buckle up, Sheila!
## Step 1: Chat with Ollama
First off, grab yourself a Python file that can have a yarn with your local Ollama instance. It's like giving it a mate to chat with while you kick back with a VB.
```python
# olly.py
import requests
import json
def chat_with_olly(prompt):
url = 'http://localhost:7860/api/generate'
data = {
"prompt": prompt,
"model": "ollama/llama",
"options": {"max_new_tokens": 512}
}
response = requests.post(url, json=data)
return response.json()['choices'][0]['text']
```
## Step 2: Trilium Notes
Now, we need to grab some notes from Trilium for our prompts. Thankfully, there's a Python lib for that too.
```python
# trilium.py
import requests
def get_notes():
url = 'https://your-trilium-instance.com/api/notes'
headers = {'Authorization': 'Bearer your-token-here'}
response = requests.get(url, headers=headers)
return response.json()
```
## Step 3: Blog Entry Creation
Alright, now we're gonna make that Python file create a blog entry in our repo. It'll whip up a branch, put the PR in, and you can give it the tick of approval.
```python
# blog.py
import os
import subprocess
def create_blog_entry(title, content):
# Create a new branch for the blog post
branch_name = f"new-blog-{title}"
subprocess.run(["git", "checkout", "-b", branch_name])
# Create the markdown file with the title and content
with open(f"{branch_name}.md", "w") as f:
f.write(f"# {title}\n\n{content}")
# Add, commit, and push the changes to create a PR
subprocess.run(["git", "add", "-A"])
subprocess.run(["git", "commit", "-m", f"Add new blog entry: {title}"])
subprocess.run(["git", "push", "origin", branch_name])
```
## Step 4: Matrix Notification
Lastly, we want to send a message to our Matrix room letting us know there's a new PR waiting for approval. Here's how you can do it:
```python
# matrix.py
import requests
def send_matrix_message(room_id, message):
url = f'http://matrix-your-instance.com/_matrix/client/r0/rooms/{room_id}/sendMessage'
data = {'body': message}
headers = {'Authorization': 'Bearer your-access-token-here'}
response = requests.post(url, json=data, headers=headers)
```
And that's a wrap! Now you've got yourself an Ollama Blog Writer. Just feed it some Trilium notes and let it do its thing while you put your feet up with a cold one.
Cheers, and happy coding! 🍺🐍🤖
#### In Summary... again written by a human
So... as you can see it's magical (please note any code in that post has not been vetted, use at your own risk... it's 'raw' AI content). For a bit of context though, this is the current prompt, which likely needs tuning:
```
You are a Software Developer and DevOps expert
who has transistioned in Developer Relations
writing a 1000 word blog for other tech enthusiast.
You like to use almost no code examples and prefer to talk
in a light comedic tone. You are also Australian
As this person write this blog as a markdown document.
The title for the blog is {self.title}.
Do not output the title in the markdown.
The basis for the content of the blog is:
{self.content}
Only output markdown DO NOT GENERATE AN EXPLANATION
```
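To give a rough idea of how a prompt like that gets used, here's a minimal sketch. It is not the actual blog_creator code; the template below is a trimmed stand-in for the prompt above, and only the Ollama `/api/generate` call and its `stream` flag reflect the real API:

```python
# Hypothetical sketch: fill the prompt template and send it to a local Ollama
# instance. The template here is a trimmed stand-in for the prompt quoted
# above; {title} and {content} play the role of self.title and self.content.
import requests

PROMPT_TEMPLATE = (
    "You are a Software Developer and DevOps expert writing a 1000 word blog "
    "as a markdown document in a light comedic Australian tone. "
    "The title for the blog is {title}. Do not output the title in the markdown. "
    "The basis for the content of the blog is: {content} "
    "Only output markdown DO NOT GENERATE AN EXPLANATION"
)

def generate_draft(title: str, content: str, model: str = "mistral-nemo") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default port
        json={
            "model": model,
            "prompt": PROMPT_TEMPLATE.format(title=title, content=content),
            "stream": False,  # one JSON object back instead of a token stream
        },
        timeout=600,
    )
    response.raise_for_status()
    return response.json()["response"]
```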
But it's safe to say it's still producing some hilarious results that are still somewhat interesting.
I'll keep tinkering away in the few moments I get to myself, but this has been fun.
# _UPDATE UPDATE UPDATE_
Ok so just before posting this I read about a new model supposedly on par with ChatGPT o1... enter [Deepseek-r1](https://ollama.com/library/deepseek-r1)
Below is what it wrote.
# Start Deepseek Blog Post
```<think>```
Alright, I've got this query from someone who wants to create an Ollama Blog Writer using Python. Let me break down what they're asking for.
First off, they mention needing a Python file that can communicate with a local Ollama instance. So, I should look into existing libraries or tools that enable communication with Ollama. The user is also interested in integrating Trilium for structured notes as prompts. They've provided a link to the trilium-py GitHub repository, which seems like a good starting point.
Next, their goal is to create a blog entry through their GitLab repo by making a branch and submitting a pull request. They want the PR content approved before proceeding further. That suggests they need guidance on structuring this part of their project, possibly including how to implement the API calls for both Ollama and Trilium.
The user also wants to send a notification to their matrix account about the new PR and provide a 20-word summary. This means I'll need to help them craft a concise message that includes these elements.
Additionally, they specified using almost no code examples in their blog post and adopting a light, comedic tone. They mentioned being Australian, so incorporating some local humor could make the content more relatable.
I should ensure that the markdown is properly formatted without any code blocks or markdown language. The title isn't to be included in the output, so I'll focus solely on the content of the blog post.
Putting it all together, the structure will start with an introduction explaining the project's purpose and then delve into each component: Ollama communication, Trilium setup, blog entry creation via GitLab, and finally, notifications. Each section should be concise to keep within the 1000-word limit and maintain a friendly tone.
I need to make sure that the instructions are clear but not too technical, avoiding jargon where possible or explaining it when necessary. The humor will come from the relatable examples of writing blog posts and handling PRs with enthusiasm.
```</think>```
# Creating an Ollama Blog Writer
Alright, fellow tech enthusiasts! Today, I'm super excited to share how I built a custom Python tool to create my very own Ollama blog writer. It's basically my personal scribe for tech blogs—except it uses AI to generate content instead of me typing it out. Let me break down the process step by step, because honestly, it's as much of a rollercoaster as writing a blog post!
## Step 1: Communicating with Ollama
First things first, I needed to connect my Python script to a running Ollama instance. Lucky for me, there are some great libraries out there that make this happen. One of my favorites is `ollama-sql` for SQL-like queries and `ollama-py` for general communication. With these tools, I could send requests to Ollama and get back the responses in a structured format.
For example, if I wanted to ask Ollama about the latest tech trends, I might send something like:
```python
import ollama as Ollama
ollama_instance = Ollama.init()
response = ollama_instance.query("What are the top AI developments this year?")
print(response)
```
This would give me a JSON response that I could parse and use for my blog. Easy peasy!
## Step 2: Integrating Trilium for Structured Notes
Speaking of which, I also wanted to make sure my blog posts were well-organized. That's where Trilium comes in—its structured note system is perfect for keeping track of ideas before writing them up. By using prompts based on Trilium entries, my Python script can generate more focused and coherent blog posts.
For instance, if I had a Trilium entry like:
```json
{
"id": "123",
"content": "AI in customer service is booming.",
"type": "thought"
}
```
I could use that as a prompt to generate something like:
*"In the rapidly evolving landscape of AI applications, customer service has taken a quantum leap with AI-powered platforms...."*
Trilium makes it easy to manage these notes and pull them into prompts for my blog writer script.
## Step 3: Creating Blog Entries in My GitLab Repo
Now, here's where things get interesting (and slightly nerve-wracking). I wanted to create a proper blog entry that posts directly to my GitLab repo. So, I forked the [aridgwayweb/blog](https://git.aridgwayweb.com/blog) repository and started working on a branch dedicated to this project.
In my `create_blog_entry.py` script, I used GitLab's API to create a new entry. It involved authenticating with my account and constructing the appropriate JSON payload that includes all the necessary metadata—like title, summary, content, etc. The hardest part was making sure everything fit within GitLab's API constraints and formatting correctly.
Here's an excerpt of what I sent:
```python
import gitlab
gl = gitlab.Gitlab('gitlab.com', 'your_api_key')
entry = gl.entries.create(
title="The Future of AI in Software Development",
summary="Exploring how artificial intelligence is transforming software development processes.",
content=[
"AI has always been a disruptive force in technology, and its role in software development is no different.",
"From automating repetitive tasks to enhancing decision-making, AI is reshaping the industry landscape."
]
)
```
And then I notified myself that it was done!
## Step 4: Sending Notifications via Matrix
Finally, after everything was up and running, I sent a quick notification to my matrix account about the new pull request. It went something like this:
*"Hey everyone, I'm super excited to announce a new PR for my Ollama blog writer project! This is pretty much the closest thing to an AI-powered scribe that doesn't involve me actually writing anything."*
Of course, it's still pending approval since I need to make sure all the pieces fit together before releasing it to the public. But hey, at least I've got a solid foundation to build on!
In conclusion, creating my Ollama Blog Writer has been an absolute blast. It combines my love for tech with Python and AI in ways I never imagined. Now, if only I could find a way to automate writing blog *reviews*
# End blog post
Now I'm not sure what the ```<think>``` tags are all about, but this is a very interesting model and has produced the closest thing to what the prompt asked for, in my opinion.
Deepseek-r1 is definitely one to keep an eye on and one I'll be using in the future.
Until next time (But will the content come from me?)!


@@ -0,0 +1,82 @@
Title: When to use AI
Date: 2025-06-05 20:00
Modified: 2025-06-06 08:00
Category: AI, Data
Tags: ai, python
Slug: when-to-use-ai
Authors: Andrew Ridgway
Summary: Should we be using AI for ALL THE THINGS!?
# Human Introduction
Well... today is the first day that the automated pipeline has generated content for the blog... still a bit of work to do, including:
1. establishing a permanent vectordb solution (chromadb? pg_vector?)
2. Notification to Matrix that something has happened
3. Updating Trilium so that the note is marked as blog_written=true
BUT it can take a note from trilium, generate drafts with multiple agents, and then use RAG to have an editor go over those drafts.
I'm particularly proud of the randomness I've applied to temperature, top_p and top_k for the different draft agents. This means that each pass is giving me quite different "creativity" (as much as that can be applied to an algorithm that is essentially munging letters together that have a high probability of being together). It has created some really interesting variation for the editor to work with and is producing some genuinely interesting results.
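For the curious, the idea is roughly the sketch below. It is an illustration rather than the pipeline's actual code; only Ollama's documented `options` field on `/api/generate` is the real API, everything else is made up for the example:

```python
# Illustrative sketch only (not the pipeline's real code): give each draft
# agent its own randomly chosen sampling options so the drafts vary in
# "creativity" before the editor pass.
import random
import requests

def draft_options() -> dict:
    return {
        "temperature": round(random.uniform(0.5, 1.2), 2),  # higher = wilder drafts
        "top_p": round(random.uniform(0.7, 0.95), 2),
        "top_k": random.randint(20, 80),
    }

def write_draft(prompt: str, model: str = "mistral-nemo") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "options": draft_options(), "stream": False},
        timeout=600,
    )
    response.raise_for_status()
    return response.json()["response"]

# e.g. three differently-tempered drafts for the editor agent to pick over
drafts = [write_draft("Draft a blog post on when to use AI.") for _ in range(3)]
```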
Anyways, without further ado, I present to you the first pipeline-written AI content for this blog.
---
# When to Use AI: Navigating the Right Scenarios
Okay, so I've been getting this question a lot lately: "When should we use AI?" or even more frustratingly, "Why can't AI do this?" It's like asking when to use a hammer versus a screwdriver. Sometimes AI is the perfect tool, other times it's better left in the toolbox. Let me break down some scenarios where AI shines and where it might not be the best bet.
## The Spreadsheet Dilemma: Where AI Can help, and where it hurts
**Scenario:** Mapping work types to categories in a spreadsheet with thousands of entries, like distinguishing between "Painting," "Repainting," "Deck Painting," or "Stucco Repainting."
**Where AI Helps:**
* **Fuzzy Matching & Contextual Understanding:** AI excels at interpreting relationships between words (e.g., recognizing "Deck Painting" as a subset of "Painting"). However, traditional methods with regex or string manipulation fail here because they lack the nuanced judgment needed to handle ambiguity.
**Where AI Struggles:**
* **Precision Over Ambiguity:** Calculations requiring exact values (e.g., average durations) are better handled by deterministic algorithms rather than AI's probabilistic approach.
**Traditional Methods Are Easier for Deterministic Problems:**
* **Formula-Based Logic:** Building precise formulas for workload analysis relies on clear, unambiguous rules. AI can't replace the need for human oversight in such cases.
## When AI Shines: Contextual and Unstructured Tasks
**Scenario:** Automating customer support with chatbots or analyzing social media sentiment.
**Why AI Works Here:**
* **Natural Language Processing (NLP):** AI understands context, tone, and intent in unstructured data, making it ideal for tasks like chatbot responses or content analysis.
* **Pattern Recognition:** AI identifies trends or anomalies in large datasets that humans might miss, such as predictive maintenance in industrial settings.
**Why Traditional Methods Don't:**
* **There is no easily discernable pattern:** If the pattern doesn't exist in a deterministic sense there will be little someone can do without complex regex and 'whack a mole' style programming.
## Hybrid Approaches: The Future of Efficiency
While traditional methods remain superior for precise calculations, AI can assist in setting up initial parameters or generating insights. For example:
* **AI Proposes Formulas:** An LLM suggests a workload calculation formula based on historical data.
* **Human Checks Validity:** A human ensures the formula's accuracy before deployment.
## Key Takeaways
1. **Use AI** for tasks involving:
* Unstructured data (e.g., text, images).
* Contextual understanding and interpretation.
* Pattern recognition and trend analysis.
2. **Stick to Traditional Methods** for:
* Precise calculations with deterministic logic.
* Tasks requiring error-free accuracy (e.g., financial modeling).
## Conclusion
AI is a powerful tool but isn't a one-size-fits-all solution. Match the right approach to the task at hand—whether it's interpreting natural language or crunching numbers. The key is knowing when AI complements human expertise rather than replaces it.
**Final Tip:** Always consider the trade-offs between precision and context. For tasks where nuance matters, AI is your ally; for rigid logic, trust traditional methods.
🚀


@@ -50,9 +50,9 @@
 {% endif %}
 <!-- Custom Fonts -->
-<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
+<link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
-<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
+<link href='https://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
-<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
+<link href='https://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
 <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
 <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
@@ -137,8 +137,8 @@
 </a>
 </li>
 </ul>
-<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
+<p class="copyright text-muted">Blog powered by <a href="https://getpelican.com">Pelican</a>,
-which takes great advantage of <a href="http://python.org">Python</a>.</p>
+which takes great advantage of <a href="https://python.org">Python</a>.</p>
 </div>
 </div>
 </div>