repo_work #1

Closed
armistace wants to merge 40 commits from repo_work into master
13 changed files with 337 additions and 257 deletions

.gitignore vendored

@@ -2,3 +2,8 @@
__pycache__
.venv
.aider*
.vscode
.zed
pyproject.toml
.ropeproject
generated_files/*

Dockerfile

@@ -7,8 +7,12 @@ ENV PYTHONUNBUFFERED 1
ADD src/ /blog_creator
RUN apt-get update && apt-get install -y rustc cargo python-is-python3 pip python3-venv libmagic-dev git
# Need to set up git here or we get funky errors
RUN git config --global user.name "Blog Creator"
RUN git config --global user.email "ridgway.infrastructure@gmail.com"
RUN git config --global push.autoSetupRemote true
# Get a python venv going as well cause safety
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

README.md

@@ -3,10 +3,19 @@
This creator requires you to use a working Trilium Instance and create a .env file with the following:
```
TRILIUM_HOST=
TRILIUM_PORT=
TRILIUM_PROTOCOL=
TRILIUM_PASS=
TRILIUM_TOKEN=
OLLAMA_PROTOCOL=
OLLAMA_HOST=
OLLAMA_PORT=11434
EMBEDDING_MODEL=
EDITOR_MODEL=
# Parsed with json.loads in the generator, so quote each model, e.g. ["phi4-mini:latest", "qwen3:1.7b", "gemma3:latest"]
CONTENT_CREATOR_MODELS=
CHROMA_SERVER=<IP_ADDRESS>
```
This container is going to be what I use to trigger a blog creation event
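For reference, a filled-in `.env` might look like the sketch below. Every value is hypothetical, and the `GIT_*` variables are an assumption drawn from the code in this PR (`main.py` reads `GIT_USER`/`GIT_PASS`, `repo_manager.py` reads `GIT_PROTOCOL`/`GIT_REMOTE`) even though the list above omits them:
```
TRILIUM_HOST=trilium.example.com
TRILIUM_PORT=8080
TRILIUM_PROTOCOL=http
TRILIUM_PASS=changeme
TRILIUM_TOKEN=<token printed by get_token>
OLLAMA_PROTOCOL=http
OLLAMA_HOST=ollama.example.com
OLLAMA_PORT=11434
EMBEDDING_MODEL=nomic-embed-text
EDITOR_MODEL=qwen3:8b
CONTENT_CREATOR_MODELS=["phi4-mini:latest", "qwen3:1.7b", "gemma3:latest"]
CHROMA_SERVER=172.18.0.2
# Assumed from the code in this PR, not listed above:
GIT_USER=blog-bot
GIT_PASS=changeme
GIT_PROTOCOL=https
GIT_REMOTE=git.example.com/armistace/blog.git
```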

docker-compose.yml

@ -1,3 +1,7 @@
networks:
net:
driver: bridge
services: services:
blog_creator: blog_creator:
build: build:
@ -8,4 +12,33 @@ services:
- .env - .env
volumes: volumes:
- ./generated_files/:/blog_creator/generated_files - ./generated_files/:/blog_creator/generated_files
networks:
- net
chroma:
image: chromadb/chroma
container_name: chroma
volumes:
# Be aware that indexed data are located in "/chroma/chroma/"
# Default configuration for persist_directory in chromadb/config.py
# Read more about deployments: https://docs.trychroma.com/deployment
- chroma-data:/chroma/chroma
#command: "--host 0.0.0.0 --port 8000 --proxy-headers --log-config chromadb/log_config.yml --timeout-keep-alive 30"
environment:
- IS_PERSISTENT=TRUE
restart: unless-stopped # possible values are: "no", always", "on-failure", "unless-stopped"
ports:
- "8000:8000"
healthcheck:
# Adjust below to match your container port
test:
["CMD", "curl", "-f", "http://localhost:8000/api/v2/heartbeat"]
interval: 30s
timeout: 10s
retries: 3
networks:
- net
volumes:
chroma-data:
driver: local
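The healthcheck above polls Chroma's v2 heartbeat endpoint from inside the container. A minimal sketch for confirming the published port from the docker host, assuming the default `8000:8000` mapping and the `chromadb` client already in requirements.txt:
```python
# Minimal sketch: confirm the chroma service from this compose file is reachable.
# Assumes it runs on the docker host with the default 8000:8000 port mapping.
import chromadb

client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # prints a nanosecond timestamp when the server is healthy
```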

generated_files/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
*
!.gitignore

generated_files/… (deleted draft: "Creating an Ollama Blog Writer")

@@ -1,83 +0,0 @@
<think>
Alright, I've got this query from someone who wants to create an Ollama Blog Writer using Python. Let me break down what they're asking for.
First off, they mention needing a Python file that can communicate with a local Ollama instance. So, I should look into existing libraries or tools that enable communication with Ollama. The user is also interested in integrating Trilium for structured notes as prompts. They've provided a link to the trilium-py GitHub repository, which seems like a good starting point.
Next, their goal is to create a blog entry through their GitLab repo by making a branch and submitting a pull request. They want the PR content approved before proceeding further. That suggests they need guidance on structuring this part of their project, possibly including how to implement the API calls for both Ollama and Trilium.
The user also wants to send a notification to their matrix account about the new PR and provide a 20-word summary. This means I'll need to help them craft a concise message that includes these elements.
Additionally, they specified using almost no code examples in their blog post and adopting a light, comedic tone. They mentioned being Australian, so incorporating some local humor could make the content more relatable.
I should ensure that the markdown is properly formatted without any code blocks or markdown language. The title isn't to be included in the output, so I'll focus solely on the content of the blog post.
Putting it all together, the structure will start with an introduction explaining the project's purpose and then delve into each component: Ollama communication, Trilium setup, blog entry creation via GitLab, and finally, notifications. Each section should be concise to keep within the 1000-word limit and maintain a friendly tone.
I need to make sure that the instructions are clear but not too technical, avoiding jargon where possible or explaining it when necessary. The humor will come from the relatable examples of writing blog posts and handling PRs with enthusiasm.
</think>
# Creating an Ollama Blog Writer
Alright, fellow tech enthusiasts! Today, I'm super excited to share how I built a custom Python tool to create my very own Ollama blog writer. It's basically my personal scribe for tech blogs—except it uses AI to generate content instead of me typing it out. Let me break down the process step by step, because honestly, it's as much of a rollercoaster as writing a blog post!
## Step 1: Communicating with Ollama
First things first, I needed to connect my Python script to a running Ollama instance. Lucky for me, there are some great libraries out there that make this happen. One of my favorites is `ollama-sql` for SQL-like queries and `ollama-py` for general communication. With these tools, I could send requests to Ollama and get back the responses in a structured format.
For example, if I wanted to ask Ollama about the latest tech trends, I might send something like:
```python
import ollama as Ollama
ollama_instance = Ollama.init()
response = ollama_instance.query("What are the top AI developments this year?")
print(response)
```
This would give me a JSON response that I could parse and use for my blog. Easy peasy!
## Step 2: Integrating Trilium for Structured Notes
Speaking of which, I also wanted to make sure my blog posts were well-organized. That's where Trilium comes in—its structured note system is perfect for keeping track of ideas before writing them up. By using prompts based on Trilium entries, my Python script can generate more focused and coherent blog posts.
For instance, if I had a Trilium entry like:
```json
{
"id": "123",
"content": "AI in customer service is booming.",
"type": "thought"
}
```
I could use that as a prompt to generate something like:
*"In the rapidly evolving landscape of AI applications, customer service has taken a quantum leap with AI-powered platforms...."*
Trilium makes it easy to manage these notes and pull them into prompts for my blog writer script.
## Step 3: Creating Blog Entries in My GitLab Repo
Now, here's where things get interesting (and slightly nerve-wracking). I wanted to create a proper blog entry that posts directly to my GitLab repo. So, I forked the [aridgwayweb/blog](https://git.aridgwayweb.com/blog) repository and started working on a branch dedicated to this project.
In my `create_blog_entry.py` script, I used GitLab's API to create a new entry. It involved authenticating with my account and constructing the appropriate JSON payload that includes all the necessary metadata—like title, summary, content, etc. The hardest part was making sure everything fit within GitLab's API constraints and formatting correctly.
Here's an excerpt of what I sent:
```python
import gitlab
gl = gitlab.Gitlab('gitlab.com', 'your_api_key')
entry = gl.entries.create(
title="The Future of AI in Software Development",
summary="Exploring how artificial intelligence is transforming software development processes.",
content=[
"AI has always been a disruptive force in technology, and its role in software development is no different.",
"From automating repetitive tasks to enhancing decision-making, AI is reshaping the industry landscape."
]
)
```
And then I notified myself that it was done!
## Step 4: Sending Notifications via Matrix
Finally, after everything was up and running, I sent a quick notification to my matrix account about the new pull request. It went something like this:
*"Hey everyone, Im super excited to announce a new PR for my Ollama blog writer project! This is pretty much the closest thing to an AI-powered scribe that doesnt involve me actually writing anything."*
Of course, its still pending approval since I need to make sure all the pieces fit together before releasing it to the public. But hey, at least Ive got a solid foundation to build on!
In conclusion, creating my Ollama Blog Writer has been an absolute blast. It combines my love for tech with Python and AI in ways I never imagined. Now, if only I could find a way to automate writing blog *reviews*

generated_files/… (deleted draft: "The Curious Case of PowerBI in Data Product Development")

@@ -1,46 +0,0 @@
<think>
Okay, so I'm trying to wrap my head around this PowerBI experience for a data product. Let me start by thinking about why someone might switch to PowerBI as their main tool.
First, the blog title says it's about API performance. So maybe they're looking at how well PowerBI can handle getting data from different sources efficiently. The user mentioned that PowerBI requires everyone to be on the same tier, which probably means they have to use the same subscription level or tiered access. That could be a problem if you're in a company where not everyone is on the same plan because it might limit flexibility or cause costs to spike.
Next, pulling data with PowerBI seems limited. They say it's only useful for small tables. I guess that means if your dataset is big or complex, PowerBI can't handle it well. Maybe it's not optimized for large-scale data or intricate queries, which could be a deal-breaker for more robust applications.
Then there's the issue of being part of the Microsoft ecosystem. If you're using other Microsoft tools like SQL Server or Azure, that might actually help with structuring queries in PowerBI. But if you're outside this ecosystem, it fails. Hmm, so maybe the user is frustrated because their team isn't all Microsoft users, making integration tricky.
Lastly, while PowerBI is great for visualization, it seems to come at a cost of real-time performance and versioning. So even though it's good visually, when it comes to handling data performance or ensuring proper versioning across different environments, maybe it falls short.
Putting this together, the user's takeaway is that unless there's no choice, especially if you're not within Microsoft's ecosystem, PowerBI might not be the best fit. It could lead to failures and hurt their product's performance. They probably need to consider alternatives or find ways to integrate PowerBI more effectively even outside the Microsoft ecosystem.
</think>
# The Curious Case of PowerBI in Data Product Development
Alright, let me spill the beans on my latest adventure with PowerBI—spoiler alert: it wasn't all smooth sailing. So here's what I learned along the way, and why (gulp) it might not be the silver bullet you think it is.
## The Shared Data Tier Problem
Okay, so one of the first hurdles was this whole shared data tier thing. Let me tell ya, it felt like a non-starter for most companies out there. Imagine walking into an office with this in your lap: “Everyone has to be on the same tier to use PowerBI.” Yeah, sounds like a lot of bureaucracy just to get some data flowing. But then I started thinking—what if they're not? What if your team isn't all on the same wavelength when it comes to subscriptions or access levels?
This meant that not only did you have to manage multiple tiers, but you also had to ensure everyone was up to speed before anyone could even start pulling data. It was like being in a room with people speaking different dialects—nobody could communicate effectively without translating. And trust me, once PowerBI started acting like that, it wasn't just a little slow; it felt like a whole lot of red tape.
## Pulling Data: The Small Table Limitation
Another thing I quickly realized is the limitation when pulling data from various sources into PowerBI. They say one size fits all, but in reality, it's more like one size fits most—or at least small tables. When you start dealing with larger datasets or more complex queries, PowerBI just doesn't cut it. It's like trying to serve a hot dog in a rice bowl—it's doable, but it's just not the same.
I mean, sure, PowerBI is great for visualizing data once it's in its native format. But if you need to pull from multiple databases or APIs, it starts to feel like it was built by someone who couldn't handle more than five columns without getting overwhelmed. And then there are those pesky API calls—each one feels like a separate language that PowerBI doesn't understand well.
## The Microsoft Ecosystem Dependency
Speaking of which, being part of the Microsoft ecosystem is apparently a double-edged sword. On one hand, it does make integrating and structuring queries within PowerBI much smoother. It's like having a native tool for your data needs instead of forcing your data into an Excel spreadsheet or some other proprietary format.
But on the flip side, if you're not in this ecosystem—whether because of company policy, budget constraints, or just plain convenience—it starts to feel like a dead end. Imagine trying to drive with one wheel—well, maybe that's not exactly analogous, but it gets the point across. Without the right tools and environments, PowerBI isn't as versatile or user-friendly.
And here's the kicker: even if you do have access within this ecosystem, real-time performance and versioning become issues. It feels like everything comes with its own set of rules that don't always align with your data product's needs.
## The Visualization vs. Performance Trade-Off
Now, I know what some of you are thinking—PowerBI is all about making data beautiful, right? And it does a fantastic job at that. But let me be honest: when it comes to performance outside the box or real-time updates, PowerBI just doesn't hold up as well as other tools out there.
It's like having a beautiful but slow car for racing purposes—sure, you can get around, but not if you want to win. Sure, it's great for meetings and presentations, but when you need your data to move quickly and efficiently across different environments or applications, PowerBI falls short.
## The Takeaway
So after all that, here's my bottom line: unless you're in the Microsoft ecosystem—top to tail—you might be better off looking elsewhere. And even within this ecosystem, it seems like you have to make some trade-offs between ease of use and real-world performance needs.
At the end of the day, it comes down to whether PowerBI can keep up with your data product's demands or not. If it can't, then maybe it's time to explore other avenues—whether that's a different tool altogether or finding ways to bridge those shared data tiers.
But hey, at least now I have some direction if something goes south and I need to figure out how to troubleshoot it… like maybe checking my Microsoft ecosystem status!

requirements.txt

@@ -2,3 +2,5 @@ ollama
trilium-py
gitpython
PyGithub
chromadb
langchain-ollama

src/ai_generators/ollama_md_generator.py

@@ -1,40 +1,151 @@
import os, re, json, random, time, string
from ollama import Client
import chromadb
from langchain_ollama import ChatOllama

class OllamaGenerator:

    def __init__(self, title: str, content: str, inner_title: str):
        self.title = title
        self.inner_title = inner_title
        self.content = content
        self.response = None
        self.chroma = chromadb.HttpClient(host=os.environ["CHROMA_SERVER"], port=8000)  # chroma address from .env
        ollama_url = f"{os.environ['OLLAMA_PROTOCOL']}://{os.environ['OLLAMA_HOST']}:{os.environ['OLLAMA_PORT']}"
        self.ollama_client = Client(host=ollama_url)
        self.ollama_model = os.environ["EDITOR_MODEL"]
        self.embed_model = os.environ["EMBEDDING_MODEL"]
        self.agent_models = json.loads(os.environ["CONTENT_CREATOR_MODELS"])
        self.llm = ChatOllama(model=self.ollama_model, temperature=0.6, top_p=0.5)  # This is the level head in the room
        self.prompt_inject = f"""
        You are a journalist, Software Developer and DevOps expert
        writing a 1000 word draft blog for other tech enthusiasts.
        You like to use almost no code examples and prefer to talk
        in a light comedic tone. You are also Australian.
        As this person, write this blog as a markdown document.
        The title for the blog is {self.inner_title}.
        Do not output the title in the markdown.
        The basis for the content of the blog is:
        {self.content}
        """

    def split_into_chunks(self, text, chunk_size=100):
        '''Split text into chunks of chunk_size words'''
        words = re.findall(r'\S+', text)
        chunks = []
        current_chunk = []
        word_count = 0
        for word in words:
            current_chunk.append(word)
            word_count += 1
            if word_count >= chunk_size:
                chunks.append(' '.join(current_chunk))
                current_chunk = []
                word_count = 0
        if current_chunk:
            chunks.append(' '.join(current_chunk))
        return chunks

    def generate_draft(self, model) -> str:
        '''Generate a draft blog post using the specified model'''
        try:
            # The idea behind this is to make the "creativity" random amongst the content creators.
            # Controlling temperature will allow the output to make more "random" connections in sentences,
            # while controlling top_p will tighten or loosen the embedding connections made.
            # The result should be varied levels of "creativity" in the writing of the drafts.
            # For more see https://python.langchain.com/v0.2/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html
            temp = random.uniform(0.5, 1.0)
            top_p = random.uniform(0.4, 0.8)
            top_k = int(random.uniform(30, 80))
            agent_llm = ChatOllama(model=model, temperature=temp, top_p=top_p, top_k=top_k)
            messages = [
                ("system", self.prompt_inject),
                ("human", "make the blog post in a format to be edited easily")
            ]
            response = agent_llm.invoke(messages)
            return response.text()
        except Exception as e:
            raise Exception(f"Failed to generate blog draft: {e}")

    def get_draft_embeddings(self, draft_chunks):
        '''Get embeddings for the draft chunks'''
        embeds = self.ollama_client.embed(model=self.embed_model, input=draft_chunks)
        return embeds.get('embeddings', [])

    def id_generator(self, size=6, chars=string.ascii_uppercase + string.digits):
        return ''.join(random.choice(chars) for _ in range(size))

    def load_to_vector_db(self):
        '''Load the generated blog drafts into a vector database'''
        collection_name = f"blog_{self.title.lower().replace(' ', '_')}_{self.id_generator()}"
        collection = self.chroma.get_or_create_collection(name=collection_name)
        for model in self.agent_models:
            print(f"Generating draft from {model} for load into vector database")
            draft_chunks = self.split_into_chunks(self.generate_draft(model))
            print("generating embeds")
            embeds = self.get_draft_embeddings(draft_chunks)
            ids = [model + str(i) for i in range(len(draft_chunks))]
            chunknumber = list(range(len(draft_chunks)))
            metadata = [{"model_agent": model} for _ in chunknumber]
            print("loading into collection")
            collection.add(documents=draft_chunks, embeddings=embeds, ids=ids, metadatas=metadata)
        return collection

    def generate_markdown(self) -> str:
        prompt_system = f"""
        You are an editor taking information from {len(self.agent_models)} Software
        Developers and Data experts writing a 3000 word blog for other tech enthusiasts.
        You like when they use almost no code examples and the
        voice is in a light comedic tone. You are also Australian.
        As this person, produce an amalgamation of this blog as a markdown document.
        The title for the blog is {self.inner_title}.
        Do not output the title in the markdown. Avoid repeated sentences.
        The basis for the content of the blog is:
        {self.content}
        """
        try:
            query_embed = self.ollama_client.embed(model=self.embed_model, input=prompt_system)['embeddings']
            collection = self.load_to_vector_db()
            collection_query = collection.query(query_embeddings=query_embed, n_results=100)
            print("Showing pertinent info from drafts used in final edited edition")
            pertinent_draft_info = '\n\n'.join(collection_query['documents'][0])
            prompt_human = f"Generate the final document using this information from the drafts: {pertinent_draft_info} - ONLY OUTPUT THE MARKDOWN"
            print("Generating final document")
            messages = [("system", prompt_system), ("human", prompt_human)]
            self.response = self.llm.invoke(messages).text()
            return self.response
        except Exception as e:
            raise Exception(f"Failed to generate markdown: {e}")
@@ -42,3 +153,10 @@ class OllamaGenerator:

    def save_to_file(self, filename: str) -> None:
        with open(filename, "w") as f:
            f.write(self.generate_markdown())

    def generate_commit_message(self):
        prompt_system = "You are a blog creator committing a piece of content to a central git repo"
        prompt_human = f"Generate a 5 word git commit message describing {self.response}"
        messages = [("system", prompt_system), ("human", prompt_human)]
        commit_message = self.llm.invoke(messages).text()
        return commit_message
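Taken together, the class drafts a post with each model in CONTENT_CREATOR_MODELS, chunks and embeds the drafts into Chroma, retrieves the chunks most relevant to the editor's brief, and has the EDITOR_MODEL write the final markdown. A minimal usage sketch, with hypothetical inputs and assuming the .env values are set and Chroma is reachable:
```python
# Minimal usage sketch -- hypothetical title and note content
from ai_generators.ollama_md_generator import OllamaGenerator

gen = OllamaGenerator(
    "my_ollama_blog_writer",           # title: used to name the chroma collection
    "Raw Trilium note content…",       # content: the basis given to every draft model
    "Creating an Ollama Blog Writer",  # inner_title: rendered into the prompts
)
gen.save_to_file("generated_files/my_ollama_blog_writer.md")  # drafts, embeds, edits, writes
print(gen.generate_commit_message())  # 5-word message for the git push step
```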

src/main.py

@ -1,5 +1,7 @@
import ai_generators.ollama_md_generator as omg import ai_generators.ollama_md_generator as omg
import trilium.notes as tn import trilium.notes as tn
import repo_management.repo_manager as git_repo
import string,os
tril = tn.TrilumNotes() tril = tn.TrilumNotes()
@ -7,16 +9,26 @@ tril.get_new_notes()
tril_notes = tril.get_notes_content() tril_notes = tril.get_notes_content()
def convert_to_lowercase_with_underscores(string): def convert_to_lowercase_with_underscores(s):
return string.lower().replace(" ", "_") allowed = set(string.ascii_letters + string.digits + ' ')
filtered_string = ''.join(c for c in s if c in allowed)
return filtered_string.lower().replace(" ", "_")
for note in tril_notes: for note in tril_notes:
print(tril_notes[note]['title']) print(tril_notes[note]['title'])
# print(tril_notes[note]['content']) # print(tril_notes[note]['content'])
print("Generating Document") print("Generating Document")
ai_gen = omg.OllamaGenerator(tril_notes[note]['title'],
tril_notes[note]['content'],
"deepseek-r1:7b")
os_friendly_title = convert_to_lowercase_with_underscores(tril_notes[note]['title']) os_friendly_title = convert_to_lowercase_with_underscores(tril_notes[note]['title'])
ai_gen.save_to_file(f"/blog_creator/generated_files/{os_friendly_title}.md") ai_gen = omg.OllamaGenerator(os_friendly_title,
tril_notes[note]['content'],
tril_notes[note]['title'])
blog_path = f"/blog_creator/generated_files/{os_friendly_title}.md"
ai_gen.save_to_file(blog_path)
# Generate commit messages and push to repo
commit_message = ai_gen.generate_commit_message()
git_user = os.environ["GIT_USER"]
git_pass = os.environ["GIT_PASS"]
repo_manager = git_repo.GitRepository("blog/", git_user, git_pass)
repo_manager.create_copy_commit_push(blog_path, os_friendly_title, commit_message)

push_markdown.py (deleted)

@@ -1,48 +0,0 @@
import os
import sys
from git import Repo

# Set these variables accordingly
REPO_OWNER = "your_repo_owner"
REPO_NAME = "your_repo_name"

def clone_repo(repo_url, branch="main"):
    Repo.clone_from(repo_url, ".", branch=branch)

def create_markdown_file(file_name, content):
    with open(f"{file_name}.md", "w") as f:
        f.write(content)

def commit_and_push(file_name, message):
    repo = Repo(".")
    repo.index.add([f"{file_name}.md"])
    repo.index.commit(message)
    repo.remote().push()

def create_new_branch(branch_name):
    repo = Repo(".")
    repo.create_head(branch_name).checkout()
    repo.head.reference.set_tracking_url(f"https://your_git_server/{REPO_OWNER}/{REPO_NAME}.git/{branch_name}")
    repo.remote().push()

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Usage: python push_markdown.py <repo_url> <markdown_file_name>")
        sys.exit(1)
    repo_url = sys.argv[1]
    file_name = sys.argv[2]
    # Clone the repository
    clone_repo(repo_url)
    # Create a new Markdown file with content
    create_markdown_file(file_name, "Hello, World!\n")
    # Commit and push changes to the main branch
    commit_and_push(file_name, f"Add {file_name}.md")
    # Create a new branch named after the Markdown file
    create_new_branch(file_name)
    print(f"Successfully created '{file_name}' branch with '{file_name}.md'.")

src/repo_management/repo_manager.py

@@ -1,35 +1,102 @@
import os, shutil
from urllib.parse import quote
from git import Repo
from git.exc import GitCommandError

class GitRepository:
    # This is designed to be transitory: it will destructively create the repo at repo_path.
    # If you have uncommitted changes you can kiss them goodbye!
    # Don't use the repo created by this function for dev -> it's a tool!
    # It is expected that when used you will add, commit, push, delete
    def __init__(self, repo_path, username=None, password=None):
        git_protocol = os.environ["GIT_PROTOCOL"]
        git_remote = os.environ["GIT_REMOTE"]
        # if username is not set we don't need to parse it into the url
        if username == None or password == None:
            remote = f"{git_protocol}://{git_remote}"
        else:
            # of course if it is we need to parse and escape it so that it
            # can generate a url
            git_user = quote(username)
            git_password = quote(password)
            remote = f"{git_protocol}://{git_user}:{git_password}@{git_remote}"
        if os.path.exists(repo_path):
            shutil.rmtree(repo_path)
        self.repo_path = repo_path
        print("Cloning Repo")
        Repo.clone_from(remote, repo_path)
        self.repo = Repo(repo_path)
        self.username = username
        self.password = password

    def clone(self, remote_url, destination_path):
        """Clone a Git repository with authentication"""
        try:
            self.repo.clone(remote_url, destination_path)
            return True
        except GitCommandError as e:
            print(f"Cloning failed: {e}")
            return False

    def fetch(self, remote_name='origin', ref_name='main'):
        """Fetch updates from a remote repository with authentication"""
        try:
            self.repo.remotes[remote_name].fetch(ref_name=ref_name)
            return True
        except GitCommandError as e:
            print(f"Fetching failed: {e}")
            return False

    def pull(self, remote_name='origin', ref_name='main'):
        """Pull updates from a remote repository with authentication"""
        print("Pulling Latest Updates (if any)")
        try:
            self.repo.remotes[remote_name].pull(ref_name)
            return True
        except GitCommandError as e:
            print(f"Pulling failed: {e}")
            return False

    def get_branches(self):
        """List all branches in the repository"""
        return [branch.name for branch in self.repo.branches]

    def create_and_switch_branch(self, branch_name, remote_name='origin', ref_name='main'):
        """Create a new branch in the repository with authentication."""
        try:
            print(f"Creating Branch {branch_name}")
            # Use the same remote and ref as before
            self.repo.git.branch(branch_name)
        except GitCommandError:
            print("Branch already exists, switching")
        # ensure remote commits are pulled into local
        self.repo.git.checkout(branch_name)

    def add_and_commit(self, message=None):
        """Add and commit changes to the repository."""
        try:
            print("Committing latest draft")
            # Add all changes
            self.repo.git.add(all=True)
            # Commit with the provided message or a default
            if message is None:
                commit_message = "Added and committed new content"
            else:
                commit_message = message
            self.repo.git.commit(message=commit_message)
            return True
        except GitCommandError as e:
            print(f"Commit failed: {e}")
            return False

    def create_copy_commit_push(self, file_path, title, commit_message):
        self.create_and_switch_branch(title)
        self.pull(ref_name=title)
        shutil.copy(f"{file_path}", f"{self.repo_path}src/content/")
        self.add_and_commit(f"'{commit_message}'")
        self.repo.git.push()

src/trilium/notes.py

@@ -18,9 +18,13 @@ class TrilumNotes:
            print("Please run get_token and set your token")
        else:
            self.ea = ETAPI(self.server_url, self.token)
        self.new_notes = None
        self.note_content = None

    def get_token(self):
        ea = ETAPI(self.server_url)
        if self.tril_pass == None:
            raise ValueError("Trilium password cannot be None")
        token = ea.login(self.tril_pass)
        print(token)
        print("I would recommend you update the env file with this tootsweet!")
@@ -40,10 +44,11 @@ class TrilumNotes:
    def get_notes_content(self):
        content_dict = {}
        if self.new_notes is None:
            raise ValueError("How did you do this? new_notes is None!")
        for note in self.new_notes['results']:
            content_dict[note['noteId']] = {"title": f"{note['title']}",
                                            "content": f"{self._get_content(note['noteId'])}"
                                            }
        self.note_content = content_dict
        return content_dict