Compare commits

...

2 Commits

SHA1 Message Date
c606f72d90 env vars and starting work on repo_manager 2025-05-23 15:47:25 +10:00
8a64d9c959 fix pyrefly typuing errors 2025-05-19 11:38:15 +10:00
9 changed files with 198 additions and 160 deletions

.gitignore
View File

@@ -3,3 +3,5 @@ __pycache__
 .venv
 .aider*
 .vscode
+.zed
+pyproject.toml

View File

@@ -3,10 +3,19 @@
 This creator requires you to use a working Trilium Instance and create a .env file with the following
 ```
-TRILIUM_HOST
-TRILIUM_PORT
-TRILIUM_PROTOCOL
-TRILIUM_PASS
+TRILIUM_HOST=
+TRILIUM_PORT=
+TRILIUM_PROTOCOL=
+TRILIUM_PASS=
+TRILIUM_TOKEN=
+OLLAMA_PROTOCOL=
+OLLAMA_HOST=
+OLLAMA_PORT=11434
+EMBEDDING_MODEL=
+EDITOR_MODEL=
+# This is expected in python list format example `[phi4-mini:latest, qwen3:1.7b, gemma3:latest]`
+CONTENT_CREATOR_MODELS=
+CHROMA_SERVER=<IP_ADDRESS>
 ```
 This container is going to be what I use to trigger a blog creation event
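A minimal sketch of how these variables get consumed on the Python side (mirroring the `json.loads` call in `ollama_generator.py` further down in this diff). The `python-dotenv` loader is an assumption, and note that `json.loads` will only parse `CONTENT_CREATOR_MODELS` if each model name is quoted, unlike the unquoted example in the comment above:

```python
# Assumed consumption of the .env file above; python-dotenv is not part of
# this diff, any .env loader works.
import os, json
from dotenv import load_dotenv

load_dotenv()  # reads the .env file described above

ollama_url = f"{os.environ['OLLAMA_PROTOCOL']}://{os.environ['OLLAMA_HOST']}:{os.environ['OLLAMA_PORT']}"
# json.loads needs quoted entries, e.g.:
# CONTENT_CREATOR_MODELS=["phi4-mini:latest", "qwen3:1.7b", "gemma3:latest"]
creator_models = json.loads(os.environ["CONTENT_CREATOR_MODELS"])
print(ollama_url, creator_models)
```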

View File

@@ -23,25 +23,16 @@ services:
       # Default configuration for persist_directory in chromadb/config.py
       # Read more about deployments: https://docs.trychroma.com/deployment
       - chroma-data:/chroma/chroma
-    command: "--workers 1 --host 0.0.0.0 --port 8000 --proxy-headers --log-config chromadb/log_config.yml --timeout-keep-alive 30"
+    #command: "--host 0.0.0.0 --port 8000 --proxy-headers --log-config chromadb/log_config.yml --timeout-keep-alive 30"
     environment:
       - IS_PERSISTENT=TRUE
-      - CHROMA_SERVER_AUTHN_PROVIDER=${CHROMA_SERVER_AUTHN_PROVIDER}
-      - CHROMA_SERVER_AUTHN_CREDENTIALS_FILE=${CHROMA_SERVER_AUTHN_CREDENTIALS_FILE}
-      - CHROMA_SERVER_AUTHN_CREDENTIALS=${CHROMA_SERVER_AUTHN_CREDENTIALS}
-      - CHROMA_AUTH_TOKEN_TRANSPORT_HEADER=${CHROMA_AUTH_TOKEN_TRANSPORT_HEADER}
-      - PERSIST_DIRECTORY=${PERSIST_DIRECTORY:-/chroma/chroma}
-      - CHROMA_OTEL_EXPORTER_ENDPOINT=${CHROMA_OTEL_EXPORTER_ENDPOINT}
-      - CHROMA_OTEL_EXPORTER_HEADERS=${CHROMA_OTEL_EXPORTER_HEADERS}
-      - CHROMA_OTEL_SERVICE_NAME=${CHROMA_OTEL_SERVICE_NAME}
-      - CHROMA_OTEL_GRANULARITY=${CHROMA_OTEL_GRANULARITY}
-      - CHROMA_SERVER_NOFILE=${CHROMA_SERVER_NOFILE}
     restart: unless-stopped # possible values are: "no", "always", "on-failure", "unless-stopped"
     ports:
       - "8000:8000"
     healthcheck:
       # Adjust below to match your container port
-      test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v2/heartbeat" ]
+      test:
+        ["CMD", "curl", "-f", "http://localhost:8000/api/v2/heartbeat"]
       interval: 30s
       timeout: 10s
       retries: 3
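Once the service is up, the same v2 heartbeat the healthcheck curls can be hit from the Python client. A quick sketch, assuming the chromadb client package is installed and port 8000 is mapped as in the compose file:

```python
# Connectivity check against the compose service above; localhost:8000 assumes
# the default port mapping from the compose file.
import chromadb

client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # returns a nanosecond timestamp when the server is reachable
```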

View File

@@ -0,0 +1,53 @@
# When Should You Use AI?

Right off the bat? Well, let's talk about when *not* using an LLM is actually pretty much like trying to build that perfect pavlova with a robot: sure, it might have all the instructions and ingredients laid out for it (or so it seems), but can you really trust this machine to understand those subtle nuances of temperature or timing? No. And let's be real here: if we're talking about tasks requiring precise logic, like financial calculations or scientific modeling, that sounds more suited to the human brain.

But where does AI actually shine and come in handy?

* **Pattern Recognition:** Spotting trends within data is one of those areas LLMs are pretty darn good at. Whether it's identifying patterns across a dataset for insights (or even generating creative ideas based on existing information), they can do that with speed and efficiency, not to mention accuracy.

**And when shouldn't you use AI?**

* **Tasks Requiring Precise Logic:** If your job needs absolute precision, like crunching numbers or modeling scientific data where a miscalculation could mean millions in losses for the company, maybe hold off on letting an LLM take over.
* **Situations Demanding Critical Thinking:** Let's be honest: if you need to make judgment calls based on complex factors that even humans can struggle with, it might not just do a mediocre job; it may fall short entirely. LLMs are great at mimicking intelligence, but they don't actually understand things the way we human beings comprehend them.
* **Processes Where Errors Have Serious Consequences:** If your work involves tasks where errors can have serious consequences, then you probably want to keep it in human hands.

**The Bottom Line**

AI is a powerful tool. But as any good chef knows, even the best kitchen appliances can't replace their own skills and experience when making that perfect pavlova (or, for us humans: delivering results). It's about finding the balance between leveraging AI capabilities and relying on our critical thinking and human intuition.

Don't get me wrong here; I'm not anti-AI. Let's just be sensible: use it where it's truly helpful, but don't forget to keep the rest in the hands of your fellow humans.

---

**Note for Editors:** This draft is designed with ease of editing and clarity as a priority, so feel free to adjust any sections that might need further refinement or expansion. I aimed this piece at an audience that appreciates humor-infused insights into the world of AI while also acknowledging its limitations in certain scenarios.

View File

@@ -1,21 +1,22 @@
-import os, re
+import os, re, json, random, time
 from ollama import Client
-import chromadb, time
+import chromadb
 from langchain_ollama import ChatOllama

 class OllamaGenerator:

-    def __init__(self, title: str, content: str, model: str, inner_title: str):
+    def __init__(self, title: str, content: str, inner_title: str):
         self.title = title
         self.inner_title = inner_title
         self.content = content
+        self.response = None
         self.chroma = chromadb.HttpClient(host="172.18.0.2", port=8000)
         ollama_url = f"{os.environ["OLLAMA_PROTOCOL"]}://{os.environ["OLLAMA_HOST"]}:{os.environ["OLLAMA_PORT"]}"
         self.ollama_client = Client(host=ollama_url)
-        self.ollama_model = model
-        self.embed_model = "snowflake-arctic-embed2:latest"
-        self.agent_models = ["openthinker:7b", "deepseek-r1:7b", "qwen2.5:7b", "gemma3:latest"]
-        self.llm = ChatOllama(model=self.ollama_model, temperature=0.7)
+        self.ollama_model = os.environ["EDITOR_MODEL"]
+        self.embed_model = os.environ["EMBEDDING_MODEL"]
+        self.agent_models = json.loads(os.environ["CONTENT_CREATOR_MODELS"])
+        self.llm = ChatOllama(model=self.ollama_model, temperature=0.6, top_p=0.5) #This is the level head in the room
         self.prompt_inject = f"""
         You are a journalist, Software Developer and DevOps expert
         writing a 1000 word draft blog for other tech enthusiasts.
@@ -53,12 +54,20 @@ class OllamaGenerator:
     def generate_draft(self, model) -> str:
         '''Generate a draft blog post using the specified model'''
         try:
-            agent_llm = ChatOllama(model=model, temperature=0.8)
+            # The idea behind this is to make the "creativity" random amongst the content creators.
+            # Controlling temperature will cause the output to allow more "random" connections in sentences.
+            # Controlling top_p will tighten or loosen the embedding connections made.
+            # The result should be varied levels of "creativity" in the writing of the drafts.
+            # For more see https://python.langchain.com/v0.2/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html
+            temp = random.uniform(0.5, 1.0)
+            top_p = random.uniform(0.4, 0.8)
+            top_k = int(random.uniform(30, 80))
+            agent_llm = ChatOllama(model=model, temperature=temp, top_p=top_p, top_k=top_k)
             messages = [
                 ("system", self.prompt_inject),
                 ("human", "make the blog post in a format to be edited easily")
             ]
-            self.response = agent_llm.invoke(messages)
+            response = agent_llm.invoke(messages)
             # self.response = self.ollama_client.chat(model=model,
             #                                         messages=[
             #                                           {
@@ -66,7 +75,9 @@ class OllamaGenerator:
             #                                             'content': f'{self.prompt_inject}',
             #                                           },
             #                                         ])
-            return self.response.text()#['message']['content']
+            #print ("draft")
+            #print (response)
+            return response.text()#['message']['content']
         except Exception as e:
             raise Exception(f"Failed to generate blog draft: {e}")
@@ -117,6 +128,7 @@ class OllamaGenerator:
             collection_query = collection.query(query_embeddings=query_embed, n_results=100)
             print("Showing pertinent info from drafts used in final edited edition")
             pertinent_draft_info = '\n\n'.join(collection.query(query_embeddings=query_embed, n_results=100)['documents'][0])
+            #print(pertinent_draft_info)
             prompt_human = f"Generate the final document using this information from the drafts: {pertinent_draft_info} - ONLY OUTPUT THE MARKDOWN"
             print("Generating final document")
             messages = [("system", prompt_system), ("human", prompt_human),]
@@ -128,6 +140,8 @@ class OllamaGenerator:
             #                                             'content': f'{prompt_enhanced}',
             #                                           },
             #                                         ])
+            #print ("Markdown Generated")
+            #print (self.response)
             return self.response#['message']['content']
         except Exception as e:
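The net effect in this file: the editor and embedding models now come from the environment, and each content-creator model drafts with its own randomized sampling parameters. A rough sketch of that fan-out, assuming the env vars from the README are set; this restates the pattern, not the author's exact code:

```python
# One draft per creator model, each with randomized "creativity" settings.
import json, os, random
from langchain_ollama import ChatOllama

for model in json.loads(os.environ["CONTENT_CREATOR_MODELS"]):
    agent = ChatOllama(
        model=model,
        temperature=random.uniform(0.5, 1.0),  # higher = looser sentence connections
        top_p=random.uniform(0.4, 0.8),        # lower = tighter candidate-token pool
        top_k=int(random.uniform(30, 80)),     # caps how many tokens are considered
    )
    draft = agent.invoke([("system", "You are a tech blogger."),
                          ("human", "Write a 1000 word draft.")])
    print(draft.content)
```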

View File

@@ -22,6 +22,5 @@ for note in tril_notes:
     os_friendly_title = convert_to_lowercase_with_underscores(tril_notes[note]['title'])
     ai_gen = omg.OllamaGenerator(os_friendly_title,
                                  tril_notes[note]['content'],
-                                 "gemma3:latest",
                                  tril_notes[note]['title'])
     ai_gen.save_to_file(f"/blog_creator/generated_files/{os_friendly_title}.md")

View File

@@ -1,48 +0,0 @@
-import os
-import sys
-
-from git import Repo
-
-# Set these variables accordingly
-REPO_OWNER = "your_repo_owner"
-REPO_NAME = "your_repo_name"
-
-def clone_repo(repo_url, branch="main"):
-    Repo.clone_from(repo_url, ".", branch=branch)
-
-def create_markdown_file(file_name, content):
-    with open(f"{file_name}.md", "w") as f:
-        f.write(content)
-
-def commit_and_push(file_name, message):
-    repo = Repo(".")
-    repo.index.add([f"{file_name}.md"])
-    repo.index.commit(message)
-    repo.remote().push()
-
-def create_new_branch(branch_name):
-    repo = Repo(".")
-    repo.create_head(branch_name).checkout()
-    repo.head.reference.set_tracking_url(f"https://your_git_server/{REPO_OWNER}/{REPO_NAME}.git/{branch_name}")
-    repo.remote().push()
-
-if __name__ == "__main__":
-    if len(sys.argv) < 3:
-        print("Usage: python push_markdown.py <repo_url> <markdown_file_name>")
-        sys.exit(1)
-
-    repo_url = sys.argv[1]
-    file_name = sys.argv[2]
-
-    # Clone the repository
-    clone_repo(repo_url)
-
-    # Create a new Markdown file with content
-    create_markdown_file(file_name, "Hello, World!\n")
-
-    # Commit and push changes to the main branch
-    commit_and_push(file_name, f"Add {file_name}.md")
-
-    # Create a new branch named after the Markdown file
-    create_new_branch(file_name)
-
-    print(f"Successfully created '{file_name}' branch with '{file_name}.md'.")

View File

@@ -1,39 +1,52 @@
-import os
-from git import Git
-from git.repo import BaseRepository
-from git.exc import InvalidGitRepositoryError
-from git.remote import RemoteAction
-
-def try_something(test):
-
-    # Set the path to your blog repo here
-    blog_repo = "/path/to/your/blog/repo"
-
-    # Checkout a new branch and create a new file for our blog post
-    branch_name = "new-post"
-    try:
-        repo = Git(blog_repo)
-        repo.checkout("-b", branch_name, "origin/main")
-        with open("my-blog-post.md", "w") as f:
-            f.write(content)
-    except InvalidGitRepositoryError:
-        # Handle repository errors gracefully
-        pass
-
-    # Add and commit the changes to Git
-    repo.add("my-blog-post.md")
-    repo.commit("-m", "Added new blog post about DevOps best practices.")
-
-    # Push the changes to Git and create a PR
-    repo.remote().push("refs/heads/{0}:refs/for/main".format(branch_name), "--set-upstream")
-    base_branch = "origin/main"
-    target_branch = "main"
-    pr_title = "DevOps best practices"
-    try:
-        repo.create_head("{0}-{1}", base=base_branch, message="{}".format(pr_title))
-    except RemoteAction.GitExitStatus as e:
-        # Handle Git exit status errors gracefully
-        pass
+import os, shutil
+from git import Repo
+from git.exc import GitCommandError
+
+class GitRepository:
+    # This is designed to be transitory: it will destructively create the repo at repo_path.
+    # If you have uncommitted changes you can kiss them goodbye!
+    # Don't use the repo created by this function for dev -> it's a tool!
+    # It is expected that when used you will add, commit, push, delete.
+    def __init__(self, repo_path, username=None, password=None):
+        git_protocol = os.environ["GIT_PROTOCOL"]
+        git_remote = os.environ["GIT_REMOTE"]
+        remote = f"{git_protocol}://{username}:{password}@{git_remote}"
+        if os.path.exists(repo_path):
+            shutil.rmtree(repo_path)
+        Repo.clone_from(remote, repo_path)
+        self.repo = Repo(repo_path)
+        self.username = username
+        self.password = password
+
+    def clone(self, remote_url, destination_path):
+        """Clone a Git repository with authentication"""
+        try:
+            self.repo.clone(remote_url, destination_path)
+            return True
+        except GitCommandError as e:
+            print(f"Cloning failed: {e}")
+            return False
+
+    def fetch(self, remote_name='origin', ref_name='main'):
+        """Fetch updates from a remote repository with authentication"""
+        try:
+            self.repo.remotes[remote_name].fetch(ref_name=ref_name)
+            return True
+        except GitCommandError as e:
+            print(f"Fetching failed: {e}")
+            return False
+
+    def pull(self, remote_name='origin', ref_name='main'):
+        """Pull updates from a remote repository with authentication"""
+        try:
+            self.repo.remotes[remote_name].pull(ref_name=ref_name)
+            return True
+        except GitCommandError as e:
+            print(f"Pulling failed: {e}")
+            return False
+
+    def get_branches(self):
+        """List all branches in the repository"""
+        return [branch.name for branch in self.repo.branches]

View File

@@ -18,9 +18,13 @@ class TrilumNotes:
             print("Please run get_token and set your token")
         else:
             self.ea = ETAPI(self.server_url, self.token)
+        self.new_notes = None
+        self.note_content = None

     def get_token(self):
         ea = ETAPI(self.server_url)
+        if self.tril_pass == None:
+            raise ValueError("Trillium password can not be none")
         token = ea.login(self.tril_pass)
         print(token)
         print("I would recomend you update the env file with this tootsweet!")
@@ -40,10 +44,11 @@ class TrilumNotes:
     def get_notes_content(self):
         content_dict = {}
+        if self.new_notes is None:
+            raise ValueError("How did you do this? new_notes is None!")
         for note in self.new_notes['results']:
             content_dict[note['noteId']] = {"title" : f"{note['title']}",
                                             "content" : f"{self._get_content(note['noteId'])}"
                                             }
         self.note_content = content_dict
         return content_dict
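The added guards turn silent None failures into loud ones. A hedged sketch of what a caller now sees; only `get_token` and `get_notes_content` appear in this diff, so anything beyond them is assumed:

```python
# Calling get_notes_content before new_notes is populated now raises a clear
# ValueError instead of crashing on a None subscript.
notes = TrilumNotes()  # assumes the TRILIUM_* variables from the README are set

try:
    content = notes.get_notes_content()
except ValueError as e:
    print(e)  # "How did you do this? new_notes is None!"
```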