playing with deepseek
This commit is contained in:
parent
20c2a7e335
commit
c2ee21abfc
@@ -1,109 +1,83 @@
Creating an Ollama Blog Writer: Your Ultimate DevOps Guide

As software developers and DevOps enthusiasts, we all know the struggles of creating blog content. Whether it's finding inspiration or dealing with writer's block, the process can be daunting. But fear not, dear engineers! In this blog, we're going to share our secret weapon: Ollama, the AI writing assistant that will help you create high-quality content effortlessly.

Step 1: Communicate with your local Ollama instance

First, make sure you have a local Ollama instance running on your machine. If you don't already have one, we recommend checking out the official documentation to get started. Once it's up and running, create a Python script that can communicate with it. Here's an example:

```python
from ollama_client import OllamaClient

# Set your API key here
api_key = "your_api_key"

# Connect to the local Ollama instance
ollama_url = "http://localhost:5000"
ollama_client = OllamaClient(ollama_url, api_key)

# Prompt Ollama to generate some content for us
prompt = "Write a 1000-word blog post about DevOps best practices."
response = ollama_client.create(text=prompt)
content = response["generated"]
print(content)
```

<think>
Alright, I've got this query from someone who wants to create an Ollama Blog Writer using Python. Let me break down what they're asking for.

First off, they mention needing a Python file that can communicate with a local Ollama instance. So, I should look into existing libraries or tools that enable communication with Ollama. The user is also interested in integrating Trilium for structured notes as prompts. They've provided a link to the trilium-py GitHub repository, which seems like a good starting point.

Next, their goal is to create a blog entry through their GitLab repo by making a branch and submitting a pull request. They want the PR content approved before proceeding further. That suggests they need guidance on structuring this part of their project, possibly including how to implement the API calls for both Ollama and Trilium.

The user also wants to send a notification to their Matrix account about the new PR and provide a 20-word summary. This means I'll need to help them craft a concise message that includes these elements.

Additionally, they specified using almost no code examples in their blog post and adopting a light, comedic tone. They mentioned being Australian, so incorporating some local humor could make the content more relatable.

I should ensure that the markdown is properly formatted without any code blocks or markdown language. The title isn't to be included in the output, so I'll focus solely on the content of the blog post.

Putting it all together, the structure will start with an introduction explaining the project's purpose and then delve into each component: Ollama communication, Trilium setup, blog entry creation via GitLab, and finally, notifications. Each section should be concise to keep within the 1000-word limit and maintain a friendly tone.

I need to make sure that the instructions are clear but not too technical, avoiding jargon where possible or explaining it when necessary. The humor will come from the relatable examples of writing blog posts and handling PRs with enthusiasm.
</think>

# Creating an Ollama Blog Writer
Alright, fellow tech enthusiasts! Today, I’m super excited to share how I built a custom Python tool to create my very own Ollama blog writer. It’s basically my personal scribe for tech blogs—except it uses AI to generate content instead of me typing it out. Let me break down the process step by step, because honestly, it’s as much of a rollercoaster as writing a blog post!
## Step 1: Communicating with Ollama
First things first, I needed to connect my Python script to a running Ollama instance. Lucky for me, the official `ollama` Python library makes this easy: it lets me send requests to Ollama and get the responses back in a structured format.
For example, if I wanted to ask Ollama about the latest tech trends, I might send something like:
```python
# Using the official `ollama` Python client (pip install ollama)
import ollama

response = ollama.generate(
    model="deepseek-r1:7b",
    prompt="What are the top AI developments this year?",
)
print(response["response"])
```
This would give me a structured response that I could parse and use for my blog. Easy peasy!
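One wrinkle with deepseek-r1 specifically: its raw output wraps a chain-of-thought in `<think>` tags (you can see exactly that in the generated files in this commit). A small post-processing step keeps that out of the published post. This is an illustrative sketch, not part of the original script:

```python
import re

def strip_think(text: str) -> str:
    """Remove the <think>...</think> reasoning block that models such as
    deepseek-r1 prepend, leaving only the publishable content."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>\nplanning the post...\n</think>\n\n# My Post\nBody text."
print(strip_think(raw))
```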
## Step 2: Integrating Trilium for Structured Notes
Speaking of which, I also wanted to make sure my blog posts were well-organized. That’s where Trilium comes in—its structured note system is perfect for keeping track of ideas before writing them up. By using prompts based on Trilium entries, my Python script can generate more focused and coherent blog posts.
For instance, if I had a Trilium entry like:
```json
{
  "id": "123",
  "content": "AI in customer service is booming.",
  "type": "thought"
}
```
I could use that as a prompt to generate something like:
*"In the rapidly evolving landscape of AI applications, customer service has taken a quantum leap with AI-powered platforms...."*
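Getting from the note to that kind of output is just a matter of folding the note's fields into a prompt. A hypothetical helper (the field names follow the JSON example above; this function is illustrative, not the repo's code):

```python
def note_to_prompt(note: dict) -> str:
    # Fold a structured note into a writing prompt for the model
    return (
        f"Write a blog section expanding on this {note['type']}:\n"
        f"{note['content']}"
    )

note = {"id": "123", "content": "AI in customer service is booming.", "type": "thought"}
print(note_to_prompt(note))
```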
Step 2: Connect to Trilium for structured notes as prompts

Trilium is a powerful personal knowledge base and note-taking app that allows you to structure your notes in a hierarchical and interconnected way. It also has a Python API, which we can use to connect to it and fetch our notes as prompts for Ollama. Trilium makes it easy to manage these notes and pull them into prompts for your blog writer script. Here's an example:
```python
# trilium-py's ETAPI client; URL and token are placeholders
from trilium_py.client import ETAPI
from ollama_client import OllamaClient

# Set your ETAPI token and Trilium URL here
# (generate the token under Trilium's options)
token = "your_etapi_token"
trilium_url = "http://localhost:8080"

# Connect to the local Trilium instance
ea = ETAPI(trilium_url, token)

# Fetch a specific note as our prompt
note_id = "some_node_id"
prompt_text = ea.get_note_content(note_id)

# Call Ollama with our prompt
ollama_url = "http://localhost:5000"
ollama_client = OllamaClient(ollama_url, "your_api_key")
response = ollama_client.create(text=prompt_text)
content = response["generated"]
print(content)
```

## Step 3: Creating Blog Entries in My GitLab Repo

Now, here’s where things get interesting (and slightly nerve-wracking). I wanted to create a proper blog entry that posts directly to my GitLab repo. So, I forked the [aridgwayweb/blog](https://git.aridgwayweb.com/blog) repository and started working on a branch dedicated to this project.

In my `create_blog_entry.py` script, I used GitLab’s API to create a new entry. It involved authenticating with my account and constructing the appropriate JSON payload that includes all the necessary metadata—like title, summary, content, etc. The hardest part was making sure everything fit within GitLab’s API constraints and was formatted correctly.

Here’s an excerpt of what I sent:

```python
import gitlab

# python-gitlab client; project path and file path are illustrative
gl = gitlab.Gitlab("https://gitlab.com", private_token="your_api_key")
project = gl.projects.get("your_username/blog")

# Commit the post to a new branch, then open a merge request for review
project.branches.create({"branch": "ai-blog-entry", "ref": "main"})
project.files.create({
    "file_path": "posts/the_future_of_ai.md",
    "branch": "ai-blog-entry",
    "content": (
        "AI has always been a disruptive force in technology, and its role "
        "in software development is no different.\n\n"
        "From automating repetitive tasks to enhancing decision-making, "
        "AI is reshaping the industry landscape."
    ),
    "commit_message": "Add post: The Future of AI in Software Development",
})
mr = project.mergerequests.create({
    "source_branch": "ai-blog-entry",
    "target_branch": "main",
    "title": "The Future of AI in Software Development",
    "description": "Exploring how artificial intelligence is transforming software development processes.",
})
```

And then I notified myself that it was done!
Step 3: Create a blog entry in your Git repo and send a notification to Matrix

Once you have some content generated by Ollama, it's time to create the actual blog post. For this, we recommend using Git for version control and a CI/CD pipeline that automatically deploys your blog to your website. Here's an example of how you might do this:

```python
import os
import requests
from git import Repo
from git.exc import InvalidGitRepositoryError

# Set the path to your blog repo here
blog_repo = "/path/to/your/blog/repo"
branch_name = "new-post"

try:
    repo = Repo(blog_repo)

    # Check out a new branch and create a new file for our blog post
    repo.git.checkout("-b", branch_name, "origin/main")
    with open(os.path.join(blog_repo, "my-blog-post.md"), "w") as f:
        f.write(content)

    # Add and commit the changes to Git
    repo.index.add(["my-blog-post.md"])
    repo.index.commit("Added new blog post about DevOps best practices.")

    # Push the branch; the PR itself is opened on your Git server (or via its API)
    repo.remote().push(refspec=f"{branch_name}:{branch_name}", set_upstream=True)
except InvalidGitRepositoryError:
    # Handle repository errors gracefully
    raise

# Send a notification to Matrix about the new PR
# (Matrix client-server API; server, room ID, and token are placeholders)
pr_title = "DevOps best practices"
matrix_server = "https://example.com"
room_id = "!your_room_id:example.com"
message = "[Blog] {} - Review please!".format(pr_title)

requests.put(
    "{0}/_matrix/client/v3/rooms/{1}/send/m.room.message/txn-1".format(matrix_server, room_id),
    headers={"Authorization": "Bearer your_matrix_token"},
    json={"msgtype": "m.text", "body": message},
)
```

And that's it! With these steps, you can create high-quality blog posts effortlessly using Ollama and Trilium. Happy blogging!

## Step 4: Sending Notifications via Matrix

Finally, after everything was up and running, I sent a quick notification to my Matrix account about the new pull request. It went something like this:

*"Hey everyone, I’m super excited to announce a new PR for my Ollama blog writer project! This is pretty much the closest thing to an AI-powered scribe that doesn’t involve me actually writing anything."*

Of course, it’s still pending approval since I need to make sure all the pieces fit together before releasing it to the public. But hey, at least I’ve got a solid foundation to build on!

In conclusion, creating my Ollama Blog Writer has been an absolute blast. It combines my love for tech with Python and AI in ways I never imagined. Now, if only I could find a way to automate writing blog *reviews*…
generated_files/powerbi_and_api_performance.md (new file, 46 lines)

@@ -0,0 +1,46 @@
<think>
Okay, so I'm trying to wrap my head around this PowerBI experience for a data product. Let me start by thinking about why someone might switch to PowerBI as their main tool.

First, the blog title says it's about API performance. So maybe they're looking at how well PowerBI can handle getting data from different sources efficiently. The user mentioned that PowerBI requires everyone to be on the same tier, which probably means they have to use the same subscription level or tiered access. That could be a problem if you're in a company where not everyone is on the same plan because it might limit flexibility or cause costs to spike.

Next, pulling data with PowerBI seems limited. They say it's only useful for small tables. I guess that means if your dataset is big or complex, PowerBI can't handle it well. Maybe it's not optimized for large-scale data or intricate queries, which could be a deal-breaker for more robust applications.

Then there's the issue of being part of the Microsoft ecosystem. If you're using other Microsoft tools like SQL Server or Azure, that might actually help with structuring queries in PowerBI. But if you're outside this ecosystem, it fails. Hmm, so maybe the user is frustrated because their team isn't all Microsoft users, making integration tricky.

Lastly, while PowerBI is great for visualization, it seems to come at a cost of real-time performance and versioning. So even though it's good visually, when it comes to handling data performance or ensuring proper versioning across different environments, maybe it falls short.

Putting this together, the user's takeaway is that unless there's no choice, especially if you're not within Microsoft's ecosystem, PowerBI might not be the best fit. It could lead to failures and hurt their product's performance. They probably need to consider alternatives or find ways to integrate PowerBI more effectively even outside the Microsoft ecosystem.
</think>
# The Curious Case of PowerBI in Data Product Development

Alright, let me spill the beans on my latest adventure with PowerBI—spoiler alert: it wasn’t all smooth sailing. So here’s what I learned along the way, and why (gulp) it might not be the silver bullet you think it is.
## The Shared Data Tier Problem

Okay, so one of the first hurdles was this whole shared data tier thing. Let me tell ya, it felt like a non-starter for most companies out there. Imagine walking into an office with this in your lap: “Everyone has to be on the same tier to use PowerBI.” Yeah, sounds like a lot of bureaucracy just to get some data flowing. But then I started thinking—what if they’re not? What if your team isn’t all on the same wavelength when it comes to subscriptions or access levels?

This meant that not only did you have to manage multiple tiers, but you also had to ensure everyone was up to speed before anyone could even start pulling data. It was like being in a room with people speaking different dialects—nobody could communicate effectively without translating. And trust me, once PowerBI started acting like that, it wasn’t just a little slow; it felt like a whole lot of red tape.
## Pulling Data: The Small Table Limitation

Another thing I quickly realized is the limitation when pulling data from various sources into PowerBI. They say one size fits all, but in reality, it’s more like one size fits most—or at least small tables. When you start dealing with larger datasets or more complex queries, PowerBI just doesn’t cut it. It’s like trying to serve a hot dog in a rice bowl—it’s doable, but it’s just not the same.

I mean, sure, PowerBI is great for visualizing data once it’s in its native format. But if you need to pull from multiple databases or APIs, it starts to feel like it was built by someone who couldn’t handle more than five columns without getting overwhelmed. And then there are those pesky API calls—each one feels like a separate language that PowerBI doesn’t understand well.
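When a pull outgrows what PowerBI ingests comfortably, the usual escape hatch is to page the API yourself and hand the BI layer a single pre-assembled result. A generic sketch (the endpoint and paging scheme here are made up, not from any real API):

```python
import json
from urllib.request import urlopen

def paged_urls(base_url: str, pages: int, page_size: int = 500) -> list:
    # Build one URL per page so a big pull becomes many small requests
    return [f"{base_url}?page={p}&per_page={page_size}" for p in range(1, pages + 1)]

def fetch_all(base_url: str, pages: int) -> list:
    # Pull the dataset page by page, then hand the BI layer one result
    rows = []
    for url in paged_urls(base_url, pages):
        with urlopen(url) as resp:
            rows.extend(json.load(resp))
    return rows

print(paged_urls("https://api.example.com/rows", 2))
```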
## The Microsoft Ecosystem Dependency

Speaking of which, being part of the Microsoft ecosystem is apparently a double-edged sword. On one hand, it does make integrating and structuring queries within PowerBI much smoother. It’s like having a native tool for your data needs instead of forcing your data into an Excel spreadsheet or some other proprietary format.

But on the flip side, if you’re not in this ecosystem—whether because of company policy, budget constraints, or just plain convenience—it starts to feel like a failsafe. Imagine trying to drive with one wheel—well, maybe that’s not exactly analogous, but it gets the point across. Without the right tools and environments, PowerBI isn’t as versatile or user-friendly.

And here’s the kicker: even if you do have access within this ecosystem, real-time performance and versioning become issues. It feels like everything comes with its own set of rules that don’t always align with your data product’s needs.
## The Visualization vs. Performance Trade-Off

Now, I know what some of you are thinking—PowerBI is all about making data beautiful, right? And it does a fantastic job at that. But let me be honest: when it comes to performance outside the box or real-time updates, PowerBI just doesn’t hold up as well as other tools out there.

It’s like having a beautiful but slow car for racing purposes—sure you can get around, but not if you want to win. Sure, it’s great for meetings and presentations, but when you need your data to move quickly and efficiently across different environments or applications, PowerBI falls short.
## The Takeaway

So after all that, here’s my bottom line: unless you’re in the Microsoft ecosystem—top to tail—you might be better off looking elsewhere. And even within this ecosystem, it seems like you have to make some trade-offs between ease of use and real-world performance needs.

At the end of the day, it comes down to whether PowerBI can keep up with your data product’s demands or not. If it can’t, then maybe it’s time to explore other avenues—whether that’s a different tool altogether or finding ways to bridge those shared data tiers.

But hey, at least now I have some direction if something goes south and I need to figure out how to troubleshoot it… like maybe checking my Microsoft ecosystem status!
@@ -14,10 +14,11 @@ class OllamaGenerator:
     def generate_markdown(self) -> str:
 
         prompt = f"""
         You are a Software Developer and DevOps expert
-        writing a 1000 word blog for other engineers.
-        You like to use very small and minimal code examples and prefer to talk
-        in a light comedic tone
+        who has transitioned into Developer Relations
+        writing a 1000 word blog for other tech enthusiasts.
+        You like to use almost no code examples and prefer to talk
+        in a light comedic tone. You are also Australian
         As this person write this blog as a markdown document.
         The title for the blog is {self.title}.
         Do not output the title in the markdown.
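For context, the amended prompt above slots into a generator roughly like this. This is a sketch only: the repo's real `OllamaGenerator` is not reproduced here, and the `ollama.generate` call is an assumption about how the model is invoked:

```python
class OllamaGenerator:
    """Sketch only: shows how the amended prompt might be wired up.
    The repo's real class differs."""

    def __init__(self, title: str, content: str, model: str):
        self.title, self.content, self.model = title, content, model

    def build_prompt(self) -> str:
        return f"""
        You are a Software Developer and DevOps expert
        who has transitioned into Developer Relations
        writing a 1000 word blog for other tech enthusiasts.
        You like to use almost no code examples and prefer to talk
        in a light comedic tone. You are also Australian
        As this person write this blog as a markdown document.
        The title for the blog is {self.title}.
        Do not output the title in the markdown.
        """

    def generate_markdown(self) -> str:
        import ollama  # deferred import; assumes the `ollama` client package
        return ollama.generate(model=self.model, prompt=self.build_prompt())["response"]

print(OllamaGenerator("A Test Title", "notes", "deepseek-r1:7b").build_prompt())
```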
@@ -13,10 +13,10 @@ def convert_to_lowercase_with_underscores(string):
 
 for note in tril_notes:
     print(tril_notes[note]['title'])
-    print(tril_notes[note]['content'])
+    # print(tril_notes[note]['content'])
     print("Generating Document")
     ai_gen = omg.OllamaGenerator(tril_notes[note]['title'],
                                  tril_notes[note]['content'],
-                                 "zephyr")
+                                 "deepseek-r1:7b")
     os_friendly_title = convert_to_lowercase_with_underscores(tril_notes[note]['title'])
     ai_gen.save_to_file(f"/blog_creator/generated_files/{os_friendly_title}.md")
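The hunk header above references `convert_to_lowercase_with_underscores`, whose body isn't shown in this diff. A plausible implementation (an assumption, though it does reproduce the filename this commit generated):

```python
import re

def convert_to_lowercase_with_underscores(string: str) -> str:
    # Lowercase the title and collapse runs of non-alphanumerics into "_"
    return re.sub(r"[^a-z0-9]+", "_", string.lower()).strip("_")

print(convert_to_lowercase_with_underscores("PowerBI and API Performance"))
# -> powerbi_and_api_performance
```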