
Title: The Start Of An AI Blogger
Date: 2025-01-21 20:00
Modified: 2025-01-25 18:00
Category: AI, Data
Tags: ai, python
Slug: ai-blog-begginning
Authors: Andrew Ridgway
Summary: What if I had AI write the content from the summary?
## Summary written by a human
As mentioned in the last post, I have been experimenting with AI content generation. Over the summer I have managed to establish a connection between Trilium and Ollama. This connection is facilitated by Python and enables me to write a quick summary of ideas for a post and have Ollama generate a blog post from it.
It's still not in the state I want it to be in and isn't connected to the actual blog repo yet (the idea being that I edit and change the output as part of a review process), but it is generating stuff that I thought would be worth sharing just for fun. The eventual idea is that the container will be something I trigger as part of a CI/CD process when the equipment is up and running, but the final "production" implementation is still being fleshed out in my head (if you can call a homelab project "production").
The focus to this point has been on prompt engineering and model selection. A big part of this is that it needs to be able to run completely independently of any cloud services, so no ChatGPT.
The obvious solution is Ollama. I'm lucky enough to have a modest secondary gaming rig in my living room with an Nvidia 2060 in it that can act as a modest AI server, so I have set it up there.
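For reference, talking to a local Ollama instance from Python only takes a few lines. The sketch below uses just the standard library; the endpoint and response shape follow Ollama's documented `/api/generate` API, but the helper names are mine and the model is simply whatever you have pulled locally:

```python
import json
import urllib.request

# Ollama's default listen address
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON object
    # instead of a stream of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "mistral-nemo") -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the non-streaming response carries the text in "response"
        return json.loads(resp.read())["response"]
```

Nothing fancy, but it keeps everything on the LAN, which is the whole point.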
This server has allowed me to experiment almost at will with models. A few I tried included:
- llama3.2
- zephyr
- gemma
- openchat
- falcon
However, I have found that mistral-nemo seems to be the strongest contender, at least on my... modest... hardware.
You can see the code and what I have been working on for more details HERE.
## The summary prompt used by mistral to generate this post
The following is what I have in Trilium to generate the AI-written content:
```
Get a python file that can communicate with a local ollama instance
Get a python file that can connect to trilium for structured notes as prompts
https://github.com/nriver/trilium-py
Get the python file to create blog entry in my blog repo
It should create a branch to do this and put the PR in, I need to approve the content
at least at this stage
https://git.aridgwayweb.com/blog
Send a notification to matrix that there is a new PR in and give a summary of the blog (20 words)
```
As you can see, it's pretty light on content, so what it generates given this lack of context is, I feel, pretty good.
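As an aside, the trilium-py library linked in that note exposes an ETAPI client for pulling note content out of Trilium. The sketch below is my rough guess at the wiring, not the actual repo code; the `note_to_prompt` helper, server URL, and token are all placeholders of mine:

```python
def note_to_prompt(note_title: str, note_body: str) -> str:
    # Flatten a Trilium note into the summary text handed to the model
    return f"{note_title}\n\n{note_body}"

def fetch_note_content(server_url: str, token: str, note_id: str) -> str:
    # trilium-py's ETAPI client wraps Trilium's external API;
    # imported here so the sketch stays importable without the dependency
    from trilium_py.client import ETAPI
    ea = ETAPI(server_url, token)
    return ea.get_note_content(note_id)
```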
Let the post begin (begin...begin...begin...)
What follows is what it generated... yes, it is hilarious, but well... it was too good not to share.
## G'day, mates! Let's make an Ollama Blog Writer!
Ever felt like you've got more ideas than time to write 'em up? Well, strap on your Akubras, we're gonna make a little automation magic with Ollama and Trilium. Buckle up, Sheila!
### Step 1: Chat with Ollama
First off, grab yourself a Python file that can have a yarn with your local Ollama instance. It's like giving it a mate to chat with while you kick back with a VB.
```python
# olly.py
import requests
import json

def chat_with_olly(prompt):
    url = 'http://localhost:7860/api/generate'
    data = {
        "prompt": prompt,
        "model": "ollama/llama",
        "options": {"max_new_tokens": 512}
    }
    response = requests.post(url, json=data)
    return response.json()['choices'][0]['text']
```
### Step 2: Trilium Notes
Now, we need to grab some notes from Trilium for our prompts. Thankfully, there's a Python lib for that too.
```python
# trilium.py
import requests

def get_notes():
    url = 'https://your-trilium-instance.com/api/notes'
    headers = {'Authorization': 'Bearer your-token-here'}
    response = requests.get(url, headers=headers)
    return response.json()
```
### Step 3: Blog Entry Creation
Alright, now we're gonna make that Python file create a blog entry in our repo. It'll whip up a branch, put the PR in, and you can give it the tick of approval.
```python
# blog.py
import os
import subprocess

def create_blog_entry(title, content):
    # Create a new branch for the blog post
    branch_name = f"new-blog-{title}"
    subprocess.run(["git", "checkout", "-b", branch_name])
    # Create the markdown file with the title and content
    with open(f"{branch_name}.md", "w") as f:
        f.write(f"# {title}\n\n{content}")
    # Add, commit, and push the changes to create a PR
    subprocess.run(["git", "add", "-A"])
    subprocess.run(["git", "commit", "-m", f"Add new blog entry: {title}"])
    subprocess.run(["git", "push", "origin", branch_name])
```
### Step 4: Matrix Notification
Lastly, we want to send a message to our Matrix room letting us know there's a new PR waiting for approval. Here's how you can do it:
```python
# matrix.py
import requests

def send_matrix_message(room_id, message):
    url = f'http://matrix-your-instance.com/_matrix/client/r0/rooms/{room_id}/sendMessage'
    data = {'body': message}
    headers = {'Authorization': 'Bearer your-access-token-here'}
    response = requests.post(url, json=data, headers=headers)
```
And that's a wrap! Now you've got yourself an Ollama Blog Writer. Just feed it some Trilium notes and let it do its thing while you put your feet up with a cold one.
Cheers, and happy coding! 🍺🐍🤖
## In Summary... again written by a human
So... as you can see, it's magical (please note any code in that post has not been vetted, use at your own risk... it's 'raw' AI content). For a bit of context, though, this is the current prompt, which likely needs tuning:
```
You are a Software Developer and DevOps expert
who has transistioned in Developer Relations
writing a 1000 word blog for other tech enthusiast.
You like to use almost no code examples and prefer to talk
in a light comedic tone. You are also Australian
As this person write this blog as a markdown document.
The title for the blog is {self.title}.
Do not output the title in the markdown.
The basis for the content of the blog is:
{self.content}
Only output markdown DO NOT GENERATE AN EXPLANATION
```
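For what it's worth, the `{self.title}` and `{self.content}` placeholders suggest the prompt is rendered from attributes on a Python class. A hypothetical sketch of that plumbing, with the template trimmed for brevity (the class and attribute names are my guesses, not the actual repo code):

```python
# A trimmed stand-in for the full system prompt, keeping the
# two placeholders that get filled in per post
PROMPT_TEMPLATE = (
    "You are a Software Developer and DevOps expert...\n"
    "The title for the blog is {title}.\n"
    "The basis for the content of the blog is:\n"
    "{content}\n"
    "Only output markdown DO NOT GENERATE AN EXPLANATION"
)

class BlogPrompt:
    def __init__(self, title: str, content: str):
        self.title = title
        self.content = content

    def render(self) -> str:
        # substitute the note's title and body into the template
        return PROMPT_TEMPLATE.format(title=self.title, content=self.content)
```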
But it's safe to say it's still producing some hilarious results that are still somewhat interesting.
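As a concrete example of why the human review step matters: the Matrix snippet the model produced invents a `sendMessage` endpoint. The real client-server API sends an `m.room.message` event with a PUT to a transaction-scoped URL. A hedged sketch, where the homeserver URL, token, and room ID are all placeholders:

```python
import json
import urllib.request
import uuid

def build_message_event(message: str) -> dict:
    # an m.room.message event body per the Matrix client-server spec
    return {"msgtype": "m.text", "body": message}

def send_matrix_message(homeserver: str, token: str,
                        room_id: str, message: str) -> None:
    # a unique transaction ID makes the PUT idempotent on retries
    txn_id = uuid.uuid4().hex
    url = (f"{homeserver}/_matrix/client/v3/rooms/{room_id}"
           f"/send/m.room.message/{txn_id}")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_message_event(message)).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```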
I'll keep tinkering away in the few moments I get to myself, but this has been fun.
Until next time (But will the content come from me?)!