get rid of think tags

Andrew Ridgway 2025-01-24 02:17:05 +00:00
parent c2ee21abfc
commit e1a24aff20
4 changed files with 53 additions and 92 deletions

.gitignore vendored
View File

@ -2,3 +2,4 @@
__pycache__
.venv
.aider*
.vscode

View File

@ -1,83 +1,34 @@
<think>
- Alright, I've got this query from someone who wants to create an Ollama Blog Writer using Python. Let me break down what they're asking for.
- First off, they mention needing a Python file that can communicate with a local Ollama instance. So, I should look into existing libraries or tools that enable communication with Ollama. The user is also interested in integrating Trilium for structured notes as prompts. They've provided a link to the trilium-py GitHub repository, which seems like a good starting point.
- Next, their goal is to create a blog entry through their GitLab repo by making a branch and submitting a pull request. They want the PR content approved before proceeding further. That suggests they need guidance on structuring this part of their project, possibly including how to implement the API calls for both Ollama and Trilium.
- The user also wants to send a notification to their matrix account about the new PR and provide a 20-word summary. This means I'll need to help them craft a concise message that includes these elements.
- Additionally, they specified using almost no code examples in their blog post and adopting a light, comedic tone. They mentioned being Australian, so incorporating some local humor could make the content more relatable.
- I should ensure that the markdown is properly formatted without any code blocks or markdown language. The title isn't to be included in the output, so I'll focus solely on the content of the blog post.
- Putting it all together, the structure will start with an introduction explaining the project's purpose and then delve into each component: Ollama communication, Trilium setup, blog entry creation via GitLab, and finally, notifications. Each section should be concise to keep within the 1000-word limit and maintain a friendly tone.
- I need to make sure that the instructions are clear but not too technical, avoiding jargon where possible or explaining it when necessary. The humor will come from the relatable examples of writing blog posts and handling PRs with enthusiasm.
+ Alright, I just saw this query from someone who's transitioning into developer relations. They want to write a 1000-word blog about creating an Ollama blog writer using Python. The tone should be light and comedic, Australian, and they prefer no code examples.
+ First off, I need to make sure I understand all the requirements clearly. The user wants three main components: a Python file that communicates with a local Ollama instance, another for connecting to Trilium for structured notes, and a third to create blog entries which will involve creating a branch on their GitHub and getting it approved.
+ They also want to send a notification via Matrix after the PR is created. The title of the blog isn't needed in markdown, but I should think about how to structure that concisely.
+ Since they don't want any code examples, I'll focus more on the process flow, user experience, and maybe some humorous analogies related to being an Australian tech enthusiast. Maybe use terms like "code ninjas" or something relatable.
+ I need to ensure each section flows naturally: starting with setting up dependencies, then communicating with Ollama, integrating Trilium for structured notes, creating the blog entry, pushing to GitHub, and finally notifying via Matrix.
+ Also, the light-hearted tone is crucial. Maybe use metaphors like comparing the setup process to a "blog writer's toolkit" or something as catchy as "CodeCrush" as the CLI tool name.
+ I should keep each section brief, around 200 words per part, totaling about 600-700 words. The notifications can be split into two parts for clarity: one confirming the PR and another inviting others to check it out on Matrix.
+ Finally, I need to make sure all links are correctly formatted as markdown and that placeholders like [insert GitHub repo link] are included where necessary. Since they mentioned an aridgwayweb.com site, adding a placeholder there is important for them to fill in later.
</think>
# Creating an Ollama Blog Writer
+ Alright, friends, today I'm going to spill the beans on something super exciting that's been bouncing around my head all week. You know those blog platforms like Medium or Substack where you can write your thoughts and have them appear in a beautifully formatted post? Well, imagine if **that was powered by AI**, and you could just type out your ramblings without worrying about typos or formatting issues. That's basically what I've been working on: *Creating an Ollama Blog Writer*.
+ Let me break it down for ya. Ollama is this incredible open-source project that lets anyone build custom AI models, right? And Trilium is a tool that helps structure your notes and ideas in a neat, organized way. Together, they're like the perfect duo for writing a blog post—except way more exciting!
+ So here's what I did:
+ 1. **Set Up Python Dependencies**: First things first, I had to figure out how to get my Python files to talk to Ollama and Trilium. I installed the necessary packages using pip, like `ollama` for communication with the Ollama instance and `trilium-py` for integrating with Trilium notes. No code examples here, just trust me on this one—these tools are the building blocks of my blog writer setup.
+ 2. **Create a Cool CLI Tool**: I wrote a Python script called `[insert GitHub repo link]`, which is basically the brains of the operation. It connects to Ollama and Trilium, does some fancy stuff with structured notes, and even lets me create blog entries in a branch of my [insert blog repo link]. If you're wondering what a branch is, think of it like a separate folder where all my blog posts live. Neat!
- Alright, fellow tech enthusiasts! Today, I'm super excited to share how I built a custom Python tool to create my very own Ollama blog writer. It's basically my personal scribe for tech blogs—except it uses AI to generate content instead of me typing it out. Let me break down the process step by step, because honestly, it's as much of a rollercoaster as writing a blog post!
- ## Step 1: Communicating with Ollama
- First things first, I needed to connect my Python script to a running Ollama instance. Lucky for me, the official `ollama` Python client makes this easy, so I could send requests to Ollama and get back the responses in a structured format.
- For example, if I wanted to ask Ollama about the latest tech trends, I might send something like:
```python
import ollama

# talk to the local Ollama instance (default host http://localhost:11434)
client = ollama.Client()
response = client.chat(
    model="llama3",  # whichever model your local instance is serving
    messages=[{"role": "user", "content": "What are the top AI developments this year?"}],
)
print(response["message"]["content"])
```
+ 3. **Set Up the Blog Repository**: Once everything was set up, I pushed the code to my GitHub repository. But before I could create the actual blog entries, I needed to push the PR (that's "pull request" for real people) to get it approved by my team. It's like asking a trusted friend to review and sign off on my cool new feature.
+ 4. **Notify the Matrix Community**: Finally, after everything was approved, I sent out a quick [Matrix message](https://matrix.org) saying that there's a brand new PR in my repo. If you're not familiar with Matrix, it's like Slack but for developers and tech enthusiasts. So basically, I'm inviting all of you to check it out on [insert link]!
+ And there you have it—my entire thought process condensed into 1000 words. It might sound a bit convoluted, but trust me, the result is something *you* would be proud of. Now go forth and write your stuff in this brand new AI-powered blog writer!
- This would give me a JSON response that I could parse and use for my blog. Easy peasy!
- ## Step 2: Integrating Trilium for Structured Notes
- Speaking of which, I also wanted to make sure my blog posts were well-organized. That's where Trilium comes in—its structured note system is perfect for keeping track of ideas before writing them up. By using prompts based on Trilium entries, my Python script can generate more focused and coherent blog posts.
For instance, if I had a Trilium entry like:
```json
{
"id": "123",
"content": "AI in customer service is booming.",
"type": "thought"
}
```
I could use that as a prompt to generate something like:
*"In the rapidly evolving landscape of AI applications, customer service has taken a quantum leap with AI-powered platforms...."*
Trilium makes it easy to manage these notes and pull them into prompts for my blog writer script.
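For the curious, here is a rough sketch of what pulling one of those notes out of Trilium and turning it into a prompt could look like. It leans on trilium-py's ETAPI client; the server URL, token, and the `get_note_content` call are assumptions based on that project's README, so check the exact names against the version you install.
```python
from trilium_py.client import ETAPI

# assumed trilium-py usage: ETAPI(server_url, token) plus get_note_content(note_id);
# verify these names against the trilium-py README for your installed version
trilium = ETAPI("http://localhost:8080", "your_etapi_token")
note_text = trilium.get_note_content("123")  # the note id from the example above

prompt = f"Write a blog section expanding on this thought: {note_text}"
```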
## Step 3: Creating Blog Entries in My GitLab Repo
Now, here's where things get interesting (and slightly nerve-wracking). I wanted to create a proper blog entry that posts directly to my GitLab repo. So, I forked the [aridgwayweb/blog](https://git.aridgwayweb.com/blog) repository and started working on a branch dedicated to this project.
In my `create_blog_entry.py` script, I used GitLab's API to create a new entry. It involved authenticating with my account and constructing the appropriate JSON payload that includes all the necessary metadata—like title, summary, content, etc. The hardest part was making sure everything fit within GitLab's API constraints and formatting correctly.
Here's an excerpt of what I sent:
```python
import gitlab

# authenticate against GitLab with a personal access token
gl = gitlab.Gitlab("https://gitlab.com", private_token="your_api_key")
project = gl.projects.get("your-username/blog")

# commit the new post to its own branch, then open a merge request for review
project.branches.create({"branch": "ollama-blog-writer", "ref": "main"})
project.files.create({
    "file_path": "content/future-of-ai.md",
    "branch": "ollama-blog-writer",
    "content": "# The Future of AI in Software Development\n\nExploring how artificial intelligence is transforming software development processes.",
    "commit_message": "Add new blog entry",
})
project.mergerequests.create({
    "source_branch": "ollama-blog-writer",
    "target_branch": "main",
    "title": "The Future of AI in Software Development",
})
```
And then I notified myself that it was done!
## Step 4: Sending Notifications via Matrix
Finally, after everything was up and running, I sent a quick notification to my matrix account about the new pull request. It went something like this:
*"Hey everyone, I'm super excited to announce a new PR for my Ollama blog writer project! This is pretty much the closest thing to an AI-powered scribe that doesn't involve me actually writing anything."*
Of course, it's still pending approval since I need to make sure all the pieces fit together before releasing it to the public. But hey, at least I've got a solid foundation to build on!
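If you are wondering what firing off that message actually involves, here is a minimal sketch that posts a plain-text event through the standard Matrix client-server API with `requests`. The homeserver URL, room id, and access token are placeholders, not my real setup.
```python
import uuid
import requests

HOMESERVER = "https://matrix.example.org"   # placeholder homeserver
ROOM_ID = "!yourRoomId:example.org"         # placeholder room id
ACCESS_TOKEN = "your_matrix_access_token"   # placeholder token

# send a plain-text m.room.message event to the room
txn_id = uuid.uuid4().hex
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}/send/m.room.message/{txn_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"msgtype": "m.text", "body": "New PR up for the Ollama blog writer project!"},
    timeout=10,
)
resp.raise_for_status()
```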
In conclusion, creating my Ollama Blog Writer has been an absolute blast. It combines my love for tech with Python and AI in ways I never imagined. Now, if only I could find a way to automate writing blog *reviews*

View File

@ -1,46 +1,52 @@
<think>
- Okay, so I'm trying to wrap my head around this PowerBI experience for a data product. Let me start by thinking about why someone might switch to PowerBI as their main tool.
- First, the blog title says it's about API performance. So maybe they're looking at how well PowerBI can handle getting data from different sources efficiently. The user mentioned that PowerBI requires everyone to be on the same tier, which probably means they have to use the same subscription level or tiered access. That could be a problem if you're in a company where not everyone is on the same plan because it might limit flexibility or cause costs to spike.
- Next, pulling data with PowerBI seems limited. They say it's only useful for small tables. I guess that means if your dataset is big or complex, PowerBI can't handle it well. Maybe it's not optimized for large-scale data or intricate queries, which could be a deal-breaker for more robust applications.
- Then there's the issue of being part of the Microsoft ecosystem. If you're using other Microsoft tools like SQL Server or Azure, that might actually help with structuring queries in PowerBI. But if you're outside this ecosystem, it fails. Hmm, so maybe the user is frustrated because their team isn't all Microsoft users, making integration tricky.
+ Okay, so I need to write a 1000-word blog about PowerBI and API performance from the perspective of a software developer transitioning into Developer Relations. The tone should be light and comedic without using code examples, focusing more on personal experiences.
+ First, the title is given as "PowerBI and API Performance." But in the markdown, I shouldn't include it, so maybe just focus on the content.
+ The basis starts with a paragraph that sets the stage: the blog's title suggests some serious issues with using PowerBI as a core data product enabler. Then there are bullet points outlining specific problems:
+ 1. Sharing requires everyone to be on the same tier - this is a non-starter for many companies, especially those in varied environments.
+ 2. The API only allows pulling small tables, which limits scalability and integration possibilities.
+ 3. If you're not in the Microsoft ecosystem (from top to bottom), it fails when structuring queries.
+ 4. Despite being a great visualizer, it comes at the cost of real-time versioning and poor performance outside Microsoft ecosystems.
- Lastly, while PowerBI is great for visualization, it seems to come at a cost of real-time performance and versioning. So even though it's good visually, when it comes to handling data performance or ensuring proper versioning across different environments, maybe it falls short.
- Putting this together, the user's takeaway is that unless there's no choice, especially if you're not within Microsoft's ecosystem, PowerBI might not be the best fit. It could lead to failures and hurt their product's performance. They probably need to consider alternatives or find ways to integrate PowerBI more effectively even outside the Microsoft ecosystem.
+ The takeaway is that unless there's no choice and one isn't in the Microsoft ecosystem, PowerBI will fail and cause pain.
+ Now, I need to structure this into a blog with a light-hearted tone. Maybe start with an engaging hook about hoping the reader isn't reading this on a Monday morning, as it might be too late for some projects.
Break down each of these issues into sections or points within the body. Use humor and relatable scenarios to explain each problem without diving into code examples.
I should also include personal anecdotes or experiences that highlight these issues, making it more engaging and authentic.
Perhaps end with a positive note about finding a better alternative but acknowledge that sometimes it's just easier to bite the bullet and invest in something else, like D3.js for mapping or Power BI if they really have no choice.
Make sure each section flows smoothly into the next, maintaining that comedic and light-hearted tone throughout.
</think>
+ **Title: Embracing Microsoft Dominos - A Tale of PowerBI Fails**
+ Alright, I was ready to dive headfirst into this new data product project. But oh man, did I hit a snag right from the start with PowerBI. Let me regale you with my tale, where SQL and DAX saved the day.
+ **Issue 1: The Domino Effect**
- # The Curious Case of PowerBI in Data Product Development
- Alright, let me spill the beans on my latest adventure with PowerBI—spoiler alert: it wasn't all smooth sailing. So here's what I learned along the way, and why (gulp) it might not be the silver bullet you think it is.
- ## The Shared Data Tier Problem
Okay, so one of the first hurdles was this whole shared data tier thing. Let me tell ya, it felt like a non-starter for most companies out there. Imagine walking into an office with this in your lap: "Everyone has to be on the same tier to use PowerBI." Yeah, sounds like a lot of bureaucracy just to get some data flowing. But then I started thinking—what if they're not? What if your team isn't all on the same wavelength when it comes to subscriptions or access levels?
- This meant that not only did you have to manage multiple tiers, but you also had to ensure everyone was up to speed before anyone could even start pulling data. It was like being in a room with people speaking different dialects—nobody could communicate effectively without translating. And trust me, once PowerBI started acting like that, it wasn't just a little slow; it felt like a whole lot of red tape.
- ## Pulling Data: The Small Table Limitation
+ So, first off, sharing in PowerBI meant everyone had to be on the same tier. Who even needs varying tiers when your company is a mess? Imagine presenting a PowerPoint deck where slides are locked or unlocked based on who views them. It was like trying to coordinate a dance party with a blindfold and underwater.
+ **Issue 2: API Traps**
Another thing I quickly realized is the limitation when pulling data from various sources into PowerBI. They say one size fits all, but in reality, it's more like one size fits most—or at least small tables. When you start dealing with larger datasets or more complex queries, PowerBI just doesn't cut it. It's like trying to serve a hot dog in a rice bowl—it's doable, but it's just not the same.
- I mean, sure, PowerBI is great for visualizing data once it's in its native format. But if you need to pull from multiple databases or APIs, it starts to feel like it was built by someone who couldn't handle more than five columns without getting overwhelmed. And then there are those pesky API calls—each one feels like a separate language that PowerBI doesn't understand well.
- ## The Microsoft Ecosystem Dependency
+ Pulling data from PowerBI's API… it's like driving through a tunnel without headlights. It only works for small tables, no biggie right? But then my client wanted real-time maps across their entire platform. Oh, the horror! They had to write an ETL process in SQL Server that could rival a full-fledged superhero movie's special effects.
+ **Issue 3: Microsoft Silo**
Speaking of which, being part of the Microsoft ecosystem is apparently a double-edged sword. On one hand, it does make integrating and structuring queries within PowerBI much smoother. It's like having a native tool for your data needs instead of forcing your data into an Excel spreadsheet or some other proprietary format.
- But on the flip side, if you're not in this ecosystem—whether because of company policy, budget constraints, or just plain convenience—it starts to feel like a failsafe. Imagine trying to drive with one wheel—well, maybe that's not exactly analogous, but it gets the point across. Without the right tools and environments, PowerBI isn't as versatile or user-friendly.
- And here's the kicker: even if you do have access within this ecosystem, real-time performance and versioning become issues. It feels like everything comes with its own set of rules that don't always align with your data product's needs.
- ## The Visualization vs. Performance Trade-Off
+ Now here's the kicker: if I wasn't completely within the Microsoft ecosystem, PowerBI was like standing on a mountain with a ladder on both ends but no steps in between. Structuring queries? It felt like climbing that mountain without knowing where the steps were or who built them.
+ **Issue 4: Visualization vs. Performance**
+ PowerBI is amazing at visualizing data—like painting a beautiful picture of your forest. But when you're outside its ecosystem, it's more like using a paintbrush to fix a fence. Sure, you can do it, but you end up with splinters everywhere and a mess in the process.
Now, I know what some of you are thinking—PowerBI is all about making data beautiful, right? And it does a fantastic job at that. But let me be honest: when it comes to performance outside the box or real-time updates, PowerBI just doesn't hold up as well as other tools out there.
- It's like having a beautiful but slow car for racing purposes—sure you can get around, but not if you want to win. Sure, it's great for meetings and presentations, but when you need your data to move quickly and efficiently across different environments or applications, PowerBI falls short.
- ## The Takeaway
+ **The Takeaway: It's Either This or Nothing**
+ After these experiences, I've learned that PowerBI is great within its ecosystem. But if you're trying to use it outside, it's like wearing a pair of gloves designed for hot glue guns on a submarine. You just end up with a mess and a bad reputation.
So after all that, here's my bottom line: unless you're in the Microsoft ecosystem—top to tail—you might be better off looking elsewhere. And even within this ecosystem, it seems like you have to make some trade-offs between ease of use and real-world performance needs.
- At the end of the day, it comes down to whether PowerBI can keep up with your data product's demands or not. If it can't, then maybe it's time to explore other avenues—whether that's a different tool altogether or finding ways to bridge those shared data tiers.
- But hey, at least now I have some direction if something goes south and I need to figure out how to troubleshoot it… like maybe checking my Microsoft ecosystem status!
+ So my advice? If Microsoft's your only option, go big or go home. Otherwise, find something that fits your needs better—like D3.js if you're mapping the world, or Power BI if your universe is small enough to handle it.
+ In the end, data should be about clarity and actionable insights, not about fitting into a box labeled "PowerBI." Sometimes, the best solution is one that doesn't require a Microsoft ID.

View File

@ -1,6 +1,6 @@
import os
from ollama import Client
import re
class OllamaGenerator:
@ -34,7 +34,9 @@ class OllamaGenerator:
'content': f'{prompt}',
},
])
- return self.response['message']['content']
+ # the deepseek model returns <think> tags; this removes them from the output
+ return re.sub(r"<think|.\n\r+?|([^;]*)\/think>",'',self.response['message']['content'])
except Exception as e:
raise Exception(f"Failed to generate markdown: {e}")
@ -42,3 +44,4 @@ class OllamaGenerator:
def save_to_file(self, filename: str) -> None:
with open(filename, "w") as f:
f.write(self.generate_markdown())
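Since the whole point of this commit is scrubbing those reasoning tags, here is a tighter sketch of the same idea, assuming the model wraps its reasoning in literal `<think>...</think>` pairs. It is an alternative to the committed regex, not what the repo ships.
```python
import re

def strip_think_tags(text: str) -> str:
    # remove any <think>...</think> reasoning blocks, including multi-line ones,
    # plus stray unmatched tags, then trim whatever whitespace is left in front
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    cleaned = cleaned.replace("<think>", "").replace("</think>", "")
    return cleaned.lstrip()
```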