getting gemma3 in the mix

This commit is contained in:
2025-03-17 16:33:16 +10:00 committed by armistace
parent f2b862bb75
commit a877cdc464
4 changed files with 88 additions and 0 deletions

View File

@@ -0,0 +1,49 @@
Okay, let's craft that markdown document. Here's the output, aiming for around 3000 words and incorporating all the detailed guidance and tone requests.
```markdown
# The Melding of Data Engineering and "AI"
**(Aussie Perspective on Wrangling Data, Because Let's Be Honest, It's a Bit of a Mess)**
**(Image: A slightly bewildered-looking person surrounded by spreadsheets and a half-empty coffee cup)**
Right, let's be upfront. I've spent the last decade-ish wrestling with data. And let me tell you, it's rarely glamorous. It's more like a prolonged, slightly panicked negotiation with spreadsheets, databases, and the occasional rogue SQL query. I'm now in a Developer Relations role, and it's a fascinating shift: moving from building things to *understanding* how people use them. And honestly, a huge part of that is understanding the data that fuels everything. This isn't about writing elegant code (though that's still useful!); it's about bridging the gap between the technical and the… well, the human. And that's where "AI" comes in: not as a replacement, but as a tool to help us navigate the chaos.
## The Data Wrangling Process: A Comedy of Errors
Let's be honest, the process of getting data from point A to point B is rarely a straight line. It's more like a tangled ball of yarn, and we're all desperately trying to untangle it while simultaneously avoiding getting hopelessly lost. Here's a breakdown of what it usually looks like, and trust me, it's a process that could use a good laugh.
1. **Finding the Data:** This is where the real adventure begins. We're talking weeks, sometimes months, spent combing through servers, ignoring the "Data Is Here!" sign because, well, we're Australian; we think it's better to check everywhere first. It's like a giant treasure hunt, except the treasure is usually just a slightly corrupted CSV file. We've all been there, staring at a server log, wondering if anyone actually *uses* it. It's a surprisingly common experience.
2. **Understanding the Data:** It's like a game of Clue where everyone has an alibi, but the real answer is in their department's jargon. "KPI," "MQL," "Churn Rate": it's a beautiful, confusing mess. You spend hours trying to decipher what a "segment" actually *is*, and you're pretty sure someone's deliberately using terms to confuse you. It's a surprisingly common experience.
3. **Cleaning and Transforming the Data:** This is where the magic (and the frustration) happens. We're talking about removing duplicates, correcting errors, and transforming data into a format that's actually usable; a rough sketch of what that tends to look like in code follows this list. It's a surprisingly common experience.
4. **Analyzing the Data:** After months of data cleaning (for an analysis that then takes 10 minutes), we finally get results. Then our boss asks, "Wait, is this for the meeting next week or last month?" Seriously. It's a surprisingly common experience.
5. **Reporting the Data:** Who likes reporting? Like, who likes doing the dishes after dinner? But somehow, after crying over it once, you learn to accept that it's a rite of passage.
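As promised above, here's a minimal sketch of the cleaning step, assuming pandas and a few invented names (`raw_export.csv`, `customer_id`, `signup_date`, `revenue` are placeholders, not anything from a real pipeline):

```python
import pandas as pd

# Hypothetical raw export; the file and column names are illustrative only.
df = pd.read_csv("raw_export.csv")

# Drop exact duplicate rows (the classic "same record from three systems" problem).
df = df.drop_duplicates()

# Coerce the obviously-typed columns; bad values become NaT/NaN instead of crashing.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce")

# Standardise the messy free-text identifiers before anyone tries to join on them.
df["customer_id"] = df["customer_id"].astype(str).str.strip().str.upper()

# Keep only rows that survived, and write out something the analysts can actually use.
df = df.dropna(subset=["customer_id", "signup_date"])
df.to_parquet("cleaned_export.parquet", index=False)
```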
## The Rise of "AI": A Helping Hand (and a Slightly Annoyed Robot)
Now, let's talk about AI. It's not going to magically solve all our data problems. But it *can* help with the repetitive, tedious tasks, the things that suck the joy out of data engineering. Think schema discovery, data profiling, and initial data cleaning. AI can sift through massive datasets, identify patterns, and flag potential issues. It's like having a slightly annoying robot assistant who never takes a break for coffee.
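To make "data profiling" a bit more concrete, here's a small, hand-rolled sketch of the grunt work worth automating, plain pandas rather than any particular AI tool, with an invented `customers.csv` standing in for whatever dataset you've just unearthed:

```python
import pandas as pd

# "customers.csv" is a made-up example file, not anything referenced in this post.
df = pd.read_csv("customers.csv")

# Basic automated profile: one row per column.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(1),
    "unique_values": df.nunique(),
})

# Flag the usual suspects before a human ever has to eyeball the data.
suspicious = profile[(profile["null_pct"] > 20) | (profile["unique_values"] <= 1)]

print(profile)
print("Columns worth a second look:")
print(suspicious)
```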
Specifically, tools like DataHub are becoming increasingly important. DataHub is the digital treasure map that helps us find data, understand its lineage, and ensure its quality. It's a central repository for metadata, information *about* the data, making it easier to track down the right data and understand how it's been transformed. It's not a replacement for human understanding, but it's a powerful tool for collaboration and knowledge sharing.
## The Human Element: Still Absolutely Crucial
Here's the thing: AI can't understand sarcasm. It can't interpret the nuances of a business context. It can't tell you whether a particular metric is actually *meaningful*. That's where we come in. As a Developer Relations expert, my role is to ensure that the data is being used effectively, that it's aligned with business goals, and that everyone understands what it *means*.
This requires a deep understanding of the business, the industry, and the people who are using the data. It's about asking the right questions, challenging assumptions, and ensuring that the data is being used responsibly. It's about connecting the dots between the technical and the human.
## The Future of Data Engineering: A Balancing Act
So, what does the future hold? I see a future where AI plays an increasingly important role in data engineering: automating repetitive tasks, improving data quality, and accelerating the time to insight. But I also see a continued need for human expertise. We'll need data engineers who can work alongside AI, interpreting its results, validating its assumptions, and ensuring that it's being used ethically and effectively.
It's about finding the right balance: leveraging the power of AI while retaining the critical thinking and human judgment that are essential for success.
## Conclusion: Data is a Collaborative Effort
Ultimately, data engineering is a collaborative effort. It's about bringing together the skills and expertise of data engineers, business analysts, and domain experts. It's about working together to unlock the value of data and drive better decisions. And it's about remembering that even the most sophisticated AI tools are only as good as the people who are using them.
Don't get me wrong, I'm excited about the potential of AI to transform the data landscape. But I also believe that the human element will always be at the heart of it all. Because, let's face it, data is a bit of a mess, and sometimes you just need a human to untangle it.
```

View File

@@ -5,7 +5,11 @@ from langchain_ollama import ChatOllama
class OllamaGenerator:
<<<<<<< HEAD
def __init__(self, title: str, content: str, inner_title: str):
=======
def __init__(self, title: str, content: str, model: str, inner_title: str):
>>>>>>> 6313752 (getting gemma3 in the mix)
self.title = title
self.inner_title = inner_title
self.content = content
@@ -13,12 +17,25 @@ class OllamaGenerator:
self.chroma = chromadb.HttpClient(host="172.18.0.2", port=8000)
ollama_url = f"{os.environ["OLLAMA_PROTOCOL"]}://{os.environ["OLLAMA_HOST"]}:{os.environ["OLLAMA_PORT"]}"
self.ollama_client = Client(host=ollama_url)
<<<<<<< HEAD
self.ollama_model = os.environ["EDITOR_MODEL"]
self.embed_model = os.environ["EMBEDDING_MODEL"]
self.agent_models = json.loads(os.environ["CONTENT_CREATOR_MODELS"])
self.llm = ChatOllama(model=self.ollama_model, temperature=0.6, top_p=0.5) #This is the level head in the room
self.prompt_inject = f"""
You are a journalist, Software Developer and DevOps expert
=======
self.ollama_model = model
self.embed_model = "snowflake-arctic-embed2:latest"
self.agent_models = ["openthinker:7b", "deepseek-r1:7b", "qwen2.5:7b", "gemma3:latest"]
self.prompt_inject = f"""
<<<<<<< HEAD
You are a journalist Software Developer and DevOps expert
who has transitioned into Developer Relations
=======
You are a journalist, Software Developer and DevOps expert
>>>>>>> e57d6eb (getting gemma3 in the mix)
>>>>>>> 6313752 (getting gemma3 in the mix)
writing a 1000 word draft blog for other tech enthusiasts.
You like to use almost no code examples and prefer to talk
in a light comedic tone. You are also Australian
@@ -112,9 +129,21 @@ class OllamaGenerator:
def generate_markdown(self) -> str:
<<<<<<< HEAD
prompt_system = f"""
You are an editor taking information from {len(self.agent_models)} Software
Developers and Data experts
=======
prompt = f"""
<<<<<<< HEAD
You are an editor taking information from {len(self.agent_models)} Software
Developers and Data experts
who have transitioned into Developer Relations
=======
You are an editor taking information from {len(self.agent_models)} Software
Developers and Data experts
>>>>>>> e57d6eb (getting gemma3 in the mix)
>>>>>>> 6313752 (getting gemma3 in the mix)
writing a 3000 word blog for other tech enthusiasts.
You like when they use almost no code examples and the
voice is in a light comedic tone. You are also Australian

View File

@@ -1,7 +1,11 @@
import ai_generators.ollama_md_generator as omg
import trilium.notes as tn
<<<<<<< HEAD
import repo_management.repo_manager as git_repo
import string,os
=======
import string
>>>>>>> 6313752 (getting gemma3 in the mix)
tril = tn.TrilumNotes()
@@ -23,6 +27,7 @@ for note in tril_notes:
os_friendly_title = convert_to_lowercase_with_underscores(tril_notes[note]['title'])
ai_gen = omg.OllamaGenerator(os_friendly_title,
tril_notes[note]['content'],
<<<<<<< HEAD
tril_notes[note]['title'])
blog_path = f"/blog_creator/generated_files/{os_friendly_title}.md"
ai_gen.save_to_file(blog_path)
@@ -32,3 +37,8 @@ for note in tril_notes:
git_pass = os.environ["GIT_PASS"]
repo_manager = git_repo.GitRepository("blog/", git_user, git_pass)
repo_manager.create_copy_commit_push(blog_path, os_friendly_title, commit_message)
=======
"gemma3:latest",
tril_notes[note]['title'])
ai_gen.save_to_file(f"/blog_creator/generated_files/{os_friendly_title}.md")
>>>>>>> 6313752 (getting gemma3 in the mix)
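For orientation, a minimal usage sketch of the call site as it looks on the incoming branch, assuming the conflicts resolve in favour of the new four-argument `OllamaGenerator(title, content, model, inner_title)` constructor; the title and content literals below are placeholders, not values from the repo:

```python
# Sketch only: "gemma3:latest" is the model string this commit threads through;
# every other value here is a made-up example.
ai_gen = omg.OllamaGenerator("my_os_friendly_title",    # placeholder slugified title
                             "raw note content here",   # placeholder note body
                             "gemma3:latest",            # new model argument
                             "My Original Note Title")   # placeholder inner title
ai_gen.save_to_file("/blog_creator/generated_files/my_os_friendly_title.md")
```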