fixing more merge conflicts

This commit is contained in:
armistace 2025-05-30 15:40:42 +10:00
parent 9e9ac7b99d
commit d91d82b281
6 changed files with 17 additions and 132 deletions

View File

@@ -1,29 +0,0 @@
# Creating an Ollama Blog Writer: A Hilariously Tedious Adventure
Hey tech enthusiasts! 👋 I'm back with another installment of my tech journey, but this time it's personal. I decided to create a Python script that not only writes blogs for me (please don't tell my boss), but also uses Ollama for some AI-assisted content creation and connects with Trilium for structured note-taking. Let's dive into the details!
### Step 1: Get Your Ollama On
First things first, I needed a Python file that could talk to my local Ollama instance. If you haven't heard of Ollama, it's like a tiny llama in your terminal that helps with text generation. It took me a while to figure out how to configure the `.env` file and set up the connection properly. But once I did, I was off to a running start!
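Here's a minimal sketch of what that glue looks like (the env variable name and model are placeholders, not necessarily what I ended up with):

```python
# minimal sketch: talking to a local Ollama instance
# assumes `pip install ollama python-dotenv` and an OLLAMA_URL entry in .env
import os

from dotenv import load_dotenv
from ollama import Client

load_dotenv()
client = Client(host=os.getenv("OLLAMA_URL", "http://localhost:11434"))

response = client.chat(
    model="llama3",  # placeholder model name; use whatever you have pulled
    messages=[{"role": "user", "content": "Write a blog intro about automation."}],
)
print(response["message"]["content"])
```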
### Step 2: Connecting Trilium for Structured Notes
For this part, I used a Python library called `trilium-py` (because why not?). It's like having a brain that can store and retrieve information in an organized way. To make sure my notes are super structured, I had to find the right prompts and ensure they were fed into Ollama correctly. This part was more about figuring out how to structure the data than actually coding, but hey, it's all part of the fun!
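The Trilium side boils down to a few `trilium-py` calls. A rough sketch, assuming an ETAPI token in the `.env` (the `#blogIdea` label is made up for illustration):

```python
# minimal sketch: pulling note content out of Trilium via trilium-py
# assumes `pip install trilium-py` and TRILIUM_URL / TRILIUM_TOKEN in .env
import os

from dotenv import load_dotenv
from trilium_py.client import ETAPI

load_dotenv()
ea = ETAPI(os.getenv("TRILIUM_URL"), os.getenv("TRILIUM_TOKEN"))

# search for notes tagged as blog ideas (the label name is an assumption)
results = ea.search_note("#blogIdea")
for note in results["results"]:
    content = ea.get_note_content(note["noteId"])
    print(note["title"], len(content))
```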
### Step 3: Automating the Blog Creation
Now that I have my notes and AI-generated content sorted, it was time to automate the blog creation process. Here's where things got a bit Git-y (yes, I made up that word). I wrote a script that would create a new branch in our company's blog repo, push the changes, and voilà: a PR! Just like that, my humble contributions were ready for review by the big boss.
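Something roughly like this, using GitPython (the paths and branch names are made up for illustration):

```python
# minimal sketch: branch, commit, and push the generated post
# assumes `pip install GitPython` and push access to the blog repo
from git import Repo

repo = Repo("/path/to/blog_repo")  # placeholder path
branch = repo.create_head("auto/ollama-post")  # illustrative branch name
branch.checkout()

repo.index.add(["src/content/generated_post.md"])  # placeholder file
repo.index.commit("Add AI-generated blog post")
repo.remote("origin").push(branch.name)
# opening the PR itself is left to your forge's API or CLI (e.g. `gh pr create`)
```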
### Step 4: Sending Notifications to Matrix
Finally, as any good DevRel should do, I sent out a notification to our internal Matrix channel. It's like Slack but with more tech talk and fewer memes about dogs in hats. The message was short and sweet: just a summary of the blog changes and a request for feedback. Hey, if Elon can tweet at Tesla shareholders, why not send a quick Matrix message?
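The notification itself is one authenticated call to the Matrix client-server API. A hedged sketch, with placeholder homeserver, room ID, and token:

```python
# minimal sketch: post a message to a Matrix room via the client-server API
# assumes MATRIX_TOKEN in the environment and a room the bot has already joined
import os
import time

import requests

homeserver = "https://matrix.example.org"  # placeholder homeserver
room_id = "!abc123:example.org"            # placeholder room ID
txn_id = str(int(time.time() * 1000))      # transaction IDs must be unique

resp = requests.put(
    f"{homeserver}/_matrix/client/v3/rooms/{room_id}/send/m.room.message/{txn_id}",
    headers={"Authorization": f"Bearer {os.environ['MATRIX_TOKEN']}"},
    json={"msgtype": "m.text", "body": "New blog PR is up, feedback welcome!"},
    timeout=10,
)
resp.raise_for_status()
```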
### Wrapping Up
Creating this Ollama Blog Writer wasn't just about writing better blogs (though that would be nice). It was about embracing the joy of automation and the occasional struggle to get things working right. I learned a lot about Python libraries, local server configurations, and how to communicate effectively with my team via Matrix.
So there you have it: a step-by-step guide on how not to write blogs, but definitely how to automate the process. If you're into tech, automation, or just want to laugh at someone else's coding mishaps, this blog is for you!
Keep on hacking (and automating), [Your Name]

View File

@@ -1,23 +0,0 @@
Title: When Data Visualization Meets Frustration: A Comic Take on PowerBI's API Woes
---
In the ever-evolving world of data and tech, few tools hold as much promise—or frustration—as Microsoft's PowerBI. Its sleek interface, intuitive visuals, and promise to simplify data into digestible insights have made it a favorite among many. But beneath its polished surface lies a storm of challenges that can leave even the most seasoned developers in its dust.
Imagine this: you've spent hours refining your data model, only to find that your team's licensing hierarchy blocks you from sharing anything sensitive without breaking hearts. "We're all on different tiers," you mutter, your frustration evident. But here's the kicker: PowerBI won't let everyone in your company join the party unless they're up to tier 5, and in reality most companies sit at tier 3 at best. So, step one: API calls to PowerBI. You'd think pulling data would be straightforward, but oh, how it pulls you into a tailspin.
Here's where things get interesting: PowerBI APIs are mostly limited to small tables. It's like trying to fit furniture through a door that's slightly too narrow—it just doesn't work unless you have a magic wand (or in this case, an API upgrade). Imagine needing to fetch data from three different on-premises databases seamlessly; PowerBI might just give you the finger.
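For the skeptics, here's roughly what the "small tables" problem looks like in practice. This is a sketch against the push-dataset REST endpoint, assuming an Azure AD token with the right scope (the IDs and table name are placeholders); push datasets have a documented cap of about 10,000 rows per POST, which is the furniture hitting the door frame:

```python
# minimal sketch: pushing rows into a Power BI "push dataset" via the REST API
# assumes an Azure AD access token with Dataset.ReadWrite.All scope
import requests

access_token = "eyJ..."          # placeholder token
dataset_id = "your-dataset-id"   # placeholder
table_name = "WorkOrders"        # placeholder

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"datasets/{dataset_id}/tables/{table_name}/rows"
)
rows = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

# push datasets cap each POST at roughly 10,000 rows, so batch accordingly
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"rows": rows},
    timeout=30,
)
resp.raise_for_status()
```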
Now, if your company happens to be in the Microsoft ecosystem, like the Azure universe, then maybe things are a bit easier. But here's the catch: you only win if you're top-to-bottom within that ecosystem. If even one part sits outside it, you're facing performance issues akin to driving through a snowstorm without snow tires. You get the picture.
So what does this mean for the average user? Unless you've got no choice but to use PowerBI... well, let's just say it might not be your friend in such scenarios. It's like a GPS that insists on taking the dirt road and offers zero warnings: sounds fine until you end up stranded.
But wait, maybe there's a silver lining. Other tools have learned the hard lessons PowerBI has taught us. They allow APIs beyond just small tables and handle mixed ecosystems with ease, making them more versatile for real-world applications. It's like upgrading your car's GPS to one that not only knows all the roads but can also navigate through different weather conditions without complaints.
In conclusion, while PowerBI is undeniably a powerful tool when used correctly—like driving in calm weather on perfectly paved roads—it has its limitations. Its API restrictions and ecosystem integration issues make it less than ideal for many real-world scenarios. So unless you're in a controlled environment where these issues don't arise, maybe it's time to explore other options that can handle the data journey with more grace.
After all, Data Overload isn't just a Star Trek term—it could be your reality if you're not careful with PowerBI.
---
*So, is PowerBI still your best friend in this complex tech world? Or are there better tools out there waiting to be discovered? Share your thoughts and experiences below!*

View File

@@ -1,35 +0,0 @@
# Wrangling Data: A Reality Check
Okay, let's be honest. Data wrangling isn't glamorous. It's not a sleek, automated process of magically transforming chaos into insights. It's a messy, frustrating, and surprisingly human endeavor. Let's break down the usual suspects: the steps we take to get even a vaguely useful dataset, and why they're often a monumental task.
**Phase 1: The Hunt**
First, you're handed a dataset. Let's call it “Customer_Data_v2”. It's… somewhere. Maybe a CSV file, maybe a database table, maybe a collection of spreadsheets that haven't been updated since 2008. Finding it is half the battle. It's like searching for a decent cup of coffee in Melbourne: you know it's out there, but it's often hidden behind a wall of bureaucracy.
**Phase 2: Deciphering the Ancient Texts**
Once you *find* it, you start learning what it *means*. This is where things get… interesting. You're trying to understand what fields represent, what units of measurement are used, and why certain columns have bizarre names (seriously, “Customer_ID_v3”?). It takes x amount of time (depends on the industry, right?). One week for a small bakery, six months for a multinational insurance company. It's a wild ride.
You'll spend a lot of time trying to understand the business context. "CRMs" for Customer Relationship Management? Seriously? It's a constant stream of jargon and acronyms that make your head spin.
**Phase 3: The Schema Struggle**
Then there's the schema. Oh, the schema. It takes a couple of weeks to learn the schema. It's like deciphering ancient hieroglyphics, except instead of predicting the rise and fall of empires, you're trying to understand why a field called “Customer_ID_v3” exists. It's a puzzle, and a frustrating one at that.
**Phase 4: The Tooling Tango**
You'll wrestle with the tools. SQL interpreters, data transformation software: they're all there, but they're often clunky, outdated, and require a surprising amount of manual effort. It's the coffee-in-Melbourne problem all over again.
**Phase 5: The Reporting Revelation (and Despair)**
Finally, you get to the reporting tool. And cry. Seriously, who actually *likes* this part? It's a soul-crushing exercise in formatting and filtering, and the output is usually something that nobody actually reads.
**The AI Factor: A Realistic Perspective**
Now, everyone's talking about AI. And, look, I'm not saying AI is a bad thing. It's got potential. But let's be realistic: for quite some time yet, this is the point where we need people. AI can automate the process of extracting data from a spreadsheet, but it can't understand *why* that spreadsheet was created in the first place. It can't understand the context, the assumptions, the biases. It can't tell you if the data is actually useful.
We can use tools like DataHub to capture some of this business knowledge, but those tools are only as good as the people who use them. We should absolutely point AI at the uniform parts: schema discovery, finding the tools, and (ugh) reporting. But where the rubber hits the road, that's where we need people, making sure a person is interpreting not only what goes out, but what goes in.
**The Bottom Line**
It's a bit like trying to build a great BBQ. You can buy the fanciest gadgets and the most expensive wood, but if you don't know how to cook, you're going to end up with a burnt mess. So, let's not get carried away with the hype. Let's focus on building a data culture that values human intelligence, critical thinking, and a good dose of common sense. And let's keep wrangling. Because, let's be honest, someone's gotta do it.

View File

@@ -1,45 +0,0 @@
# When to Use AI: Navigating the Right Moments for Machine Learning and Beyond
In today's tech landscape, the question "When should we use AI?" is as common as it is critical. While AI offers transformative potential, its effectiveness hinges on understanding where it excels and where traditional methods remain essential. Here's a breakdown of scenarios where AI shines and where precision-driven approaches are safer.
### AI's Sweet Spot: Where Humans Fall Short
1. **Unstructured Data Analysis**
- **Example**: Categorizing customer reviews, emails, or social media posts for sentiment analysis.
- **Why AI Works**: Large Language Models (LLMs) like Anthropic's Claude can process vast textual data to identify patterns humans might miss.
2. **Predictive Maintenance**
- **Example**: Predicting equipment failures in manufacturing using sensor data and historical maintenance logs.
- **Why AI Works**: Machine learning models trained on time-series data can detect anomalies and forecast issues before they occur (see the sketch after this list).
3. **Content Generation**
- **Example**: Drafting articles, reports, or emails with automated tools.
- **Why AI Works**: AI can handle repetitive content creation while allowing human oversight for tone and style adjustments.
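As a hedged illustration of the predictive-maintenance pattern (synthetic data, not a production pipeline), an off-the-shelf anomaly detector already captures the core idea:

```python
# minimal sketch: flagging anomalous sensor readings with an Isolation Forest
# assumes `pip install scikit-learn numpy`; the telemetry here is synthetic
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# pretend telemetry: temperature and vibration for 1,000 healthy readings
healthy = rng.normal(loc=[70.0, 0.5], scale=[2.0, 0.1], size=(1000, 2))
# a handful of readings drifting toward failure
failing = rng.normal(loc=[90.0, 1.5], scale=[2.0, 0.1], size=(10, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(healthy)

# -1 marks an anomaly worth a maintenance ticket, 1 is business as usual
print(model.predict(failing))
```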
### Where AI Falls Short: Precision Over Flexibility
1. **Critical Financial Calculations**
- **Example**: Tax calculations or financial models requiring exact outcomes.
- **Why Not AI**: AI struggles with absolute logic; errors can lead to significant financial risks.
2. **Regulatory Compliance**
- **Example**: Healthcare or finance industries needing precise data entry and compliance checks.
- **Why Not AI**: AI might misinterpret rules, leading to legal issues.
3. **Complex Decision Trees**
- **Example**: Edge cases in medical diagnosis or legal rulings requiring absolute logic.
- **Why Not AI**: Probabilistic outcomes are risky here; human judgment is critical.
### Hybrid Approaches for Success
- **Data Collection & Initial Analysis**: Use AI to gather insights from unstructured data.
- **Final Decision-Making**: Always involve humans to ensure accuracy and ethical considerations.
**Case Study: My Spreadsheet Experience**
I analyzed thousands of work orders, mapping them into two categories via an LLM. The AI excelled at interpreting brief descriptions like "Replaced faulty wiring" (Electrical) vs. "Fixed AC unit" (HVAC). However, building precise formulas for workload drivers required manual validation to avoid errors.
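A stripped-down sketch of that classification loop, with placeholder model, file, and category names (not the exact setup used):

```python
# minimal sketch: categorising short work-order descriptions with a local LLM
# assumes `pip install ollama pandas` and a running Ollama instance
import pandas as pd
from ollama import Client

client = Client(host="http://localhost:11434")
df = pd.read_csv("work_orders.csv")  # placeholder file with a 'description' column

CATEGORIES = ["Electrical", "HVAC"]  # illustrative category set

def categorise(description: str) -> str:
    resp = client.chat(
        model="llama3",  # placeholder model
        messages=[{
            "role": "user",
            "content": (
                f"Classify this work order as one of {CATEGORIES}. "
                f"Reply with the category only.\n\n{description}"
            ),
        }],
    )
    return resp["message"]["content"].strip()

df["category"] = df["description"].apply(categorise)
# the 'precise formulas' step still needs a human: spot-check before trusting
print(df["category"].value_counts())
```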
### Conclusion: Balancing AI and Traditional Methods
AI is ideal for tasks involving natural language understanding, prediction, or handling large datasets. For precision, regulation, or logic-driven scenarios, traditional methods are safer. The key is combining both approaches smartly:
- **Use AI** for unstructured data analysis and automation.
- **Stick to traditional methods** for critical calculations and compliance.
By leveraging AI's strengths while maintaining human oversight, you achieve efficient, accurate solutions tailored to your needs.

View File

@@ -3,6 +3,7 @@ from ollama import Client
import chromadb
from langchain_ollama import ChatOllama
class OllamaGenerator:
    def __init__(self, title: str, content: str, inner_title: str):
@@ -125,6 +126,7 @@ class OllamaGenerator:
        {self.content}
        """
        try:
<<<<<<< HEAD
            query_embed = self.ollama_client.embed(model=self.embed_model, input=prompt_system)['embeddings']
            collection = self.load_to_vector_db()
            collection_query = collection.query(query_embeddings=query_embed, n_results=100)
@@ -145,6 +147,18 @@ class OllamaGenerator:
            #print ("Markdown Generated")
            #print (self.response)
            return self.response#['message']['content']
=======
            self.response = self.ollama_client.chat(model=self.ollama_model,
                                                    messages=[
                                                        {
                                                            'role': 'user',
                                                            'content': f'{prompt}',
                                                        },
                                                    ])
            # the deepseek model wraps its reasoning in <think> tags; strip them
            # from the output (assumes `import re` at the top of the module)
            return re.sub(r"<think>.*?</think>", '', self.response['message']['content'], flags=re.DOTALL)
>>>>>>> e1a24af (get rid of think tags)
        except Exception as e:
            raise Exception(f"Failed to generate markdown: {e}")

@@ -153,9 +167,12 @@ class OllamaGenerator:
        with open(filename, "w") as f:
            f.write(self.generate_markdown())
<<<<<<< HEAD
    def generate_commit_message(self):
        prompt_system = "You are a blog creator committing a piece of content to a central git repo"
        prompt_human = f"Generate a 5 word git commit message describing {self.response}"
        messages = [("system", prompt_system), ("human", prompt_human)]
        commit_message = self.llm.invoke(messages).text()
        return commit_message
=======
>>>>>>> e1a24af (get rid of think tags)