finalised proxmox post

Andrew Ridgway 2024-11-06 14:24:10 +10:00
parent 0b42b5f87a
commit d9adba7231
27 changed files with 1198 additions and 53 deletions


@ -7,15 +7,19 @@ Slug: proxmox-cluster-1
Authors: Andrew Ridgway
Summary: Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor
#### A quick summary of this post by
#### A quick summary of this post by AI
I'm going to use AI to summarise this post here because it ended up quite long
I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;)
**Summary:**
* You've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.
* You're using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, you have a development VM for remote coding environments.
* Your current plans include writing about your Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.
Quick look at some of the things I've used Proxmox for
* I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.
* I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.
* My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.
As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future
**Follow-up Questions:**
@ -37,7 +41,11 @@ I'm going to use AI to summarise this post here because it ended up quite long
## A Picture is worth a thousand words
<INSERT PICTURE HERE OF FINAL PRODUCT>
<img alt="Proxmox Image" height="auto" width="100%" src="{attach}/images/proxmox.jpg">
_Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on_
<img alt="Proxmox Architecture" height="auto" width="100%" src="{attach}/images/Server_Initial_Architecture.png">
## The idea
For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!
@ -51,7 +59,7 @@ After doing some amazing reddit research and looking at various homelab ideas fo
First let's define what on earth Proxmox is
##### Proxmox
#### Proxmox
Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:
1. **Simultaneous Management of LXC Containers and VMs:**
@ -74,8 +82,11 @@ Proxmox VE (Virtual Environment) is an open-source server virtualization platfor
Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.
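To make that a bit more concrete, here's roughly what spinning up guests looks like from the Proxmox shell once you're past the web UI. This is a minimal sketch: the IDs, hostnames, storage names and template/ISO filenames are placeholders rather than my actual setup, so swap in whatever your node reports.

```bash
# Create a lightweight LXC guest (say, an Nginx reverse proxy) with capped CPU and RAM.
# "local" / "local-lvm" are the default storage names; the template filename is a placeholder.
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname nginx-proxy \
    --cores 2 --memory 1024 --swap 512 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1

# Create a QEMU VM (say, a future Kubernetes node) with its own disk and a virtio NIC.
qm create 210 --name k8s-node-1 \
    --cores 4 --memory 8192 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:40 \
    --net0 virtio,bridge=vmbr0 \
    --ide2 local:iso/debian-12-netinst.iso,media=cdrom \
    --boot 'order=scsi0;ide2'
```

Both guest types land in the same web UI and the same backup and migration tooling, which is really the point of the exercise.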
**Relevant Links:**
- Official Proxmox VE website: <https://www.proxmox.com/>
- Proxmox VE documentation: <https://pve-proxmox-community.org/>
- Proxmox VE forums: <https://forum.proxmox.com/>
I'd like to thank the mistral-nemo LLM for writing that ;)
@ -84,21 +95,29 @@ I'd like to thank the mistral-nemo LLM for writing that ;)
To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.
1. **Isolation Level**:
- LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.
- Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.
**Isolation Level**:
2. **System Call Filtering**:
- LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.
- Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.
- LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.
- Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.
3. **Resource Management**:
- LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.
- Docker enforces strict resource limits on every container by default.
**System Call Filtering**:
4. **Networking**:
- In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like `ip` or `bridge-utils`.
- Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.
- LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.
- Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.
**Resource Management**
- LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.
- Docker enforces strict resource limits on every container by default.
**Networking**:
- In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like `ip` or `bridge-utils` (see the short sketch after this list).
- Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.
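Because an LXC guest just hangs off a normal Linux bridge on the Proxmox host, you can inspect and manage its networking with the same everyday iproute2 tooling. A quick sketch, assuming a hypothetical container ID of 105 attached to the default vmbr0 bridge; adjust IDs and addresses to suit:

```bash
# The bridge the containers share is a plain Linux bridge on the host, nothing exotic.
ip link show vmbr0
bridge link show

# Give an existing container a static address on that bridge (ID and addressing are examples).
pct set 105 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1

# Check the result from inside the container with the same standard tools.
pct exec 105 -- ip addr show eth0
```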
What LXC is Focused On:
@ -114,7 +133,7 @@ Given these differences, here's what LXC primarily focuses on:
So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.
Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resourceo in check but by using device loading I can get a graphics card there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for fileservering, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!
Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, but by using device loading I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!
The LXC's have also been super easy to set up with the help of tteck's helper scripts [Proxmox Helper Scripts](https://community-scripts.github.io/Proxmox/). It was very sad to hear he had gotten [sick](https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/) and I really hope he gets well soon!
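For the curious, the GPU "device loading" I mentioned for Plex boils down to a couple of lines in the container's config on the host. This is a minimal sketch assuming a hypothetical container ID of 101 and an Intel/AMD GPU exposed as /dev/dri; your container ID, device paths and permissions may well differ (unprivileged containers can need extra idmap/group fiddling).

```bash
# Append the device mapping to the container's config (ID 101 is a placeholder).
cat >> /etc/pve/lxc/101.conf <<'EOF'
# Let the container open the host's DRM render devices (major 226 = /dev/dri)
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF

# Restart the container so the mapping takes effect, then point Plex at hardware transcoding.
pct stop 101 && pct start 101
```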

(Two binary image files added: 1.3 MiB and 2.4 MiB.)

@ -48,7 +48,7 @@
<meta property="og:type" content="article">
<meta property="article:author" content="">
<meta property="og:url" content="http://localhost:8000/appflow-production.html">
<meta property="og:title" content="Implmenting Appflow in a Production Datalake">
<meta property="og:title" content="Implementing Appflow in a Production Datalake">
<meta property="og:description" content="">
<meta property="og:image" content="http://localhost:8000/">
<meta property="article:published_time" content="2023-05-23 20:00:00+10:00">
@ -87,7 +87,7 @@
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-heading">
<h1>Implmenting Appflow in a Production Datalake</h1>
<h1>Implementing Appflow in a Production Datalake</h1>
<span class="meta">Posted by
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
on Tue 23 May 2023


@ -82,6 +82,8 @@
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<dl>
<dt>Wed 24 July 2024</dt>
<dd><a href="http://localhost:8000/proxmox-cluster-1.html">Building a 5 node Proxmox cluster!</a></dd>
<dt>Fri 23 February 2024</dt>
<dd><a href="http://localhost:8000/cover-letter.html">A Cover Letter</a></dd>
<dt>Fri 23 February 2024</dt>
@ -89,7 +91,7 @@
<dt>Wed 15 November 2023</dt>
<dd><a href="http://localhost:8000/metabase-duckdb.html">Metabase and DuckDB</a></dd>
<dt>Tue 23 May 2023</dt>
<dd><a href="http://localhost:8000/appflow-production.html">Implmenting Appflow in a Production Datalake</a></dd>
<dd><a href="http://localhost:8000/appflow-production.html">Implementing Appflow in a Production Datalake</a></dd>
<dt>Wed 10 May 2023</dt>
<dd><a href="http://localhost:8000/how-i-built-the-damn-thing.html">Dawn of another blog attempt</a></dd>
<hr>


@ -81,6 +81,19 @@
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-preview">
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
<h2 class="post-title">
Building a 5 node Proxmox cluster!
</h2>
</a>
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
<p class="post-meta">Posted by
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
on Wed 24 July 2024
</p>
</div>
<hr>
<div class="post-preview">
<a href="http://localhost:8000/cover-letter.html" rel="bookmark" title="Permalink to A Cover Letter">
<h2 class="post-title">
@ -121,9 +134,9 @@
</div>
<hr>
<div class="post-preview">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implmenting Appflow in a Production Datalake">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
<h2 class="post-title">
Implmenting Appflow in a Production Datalake
Implementing Appflow in a Production Datalake
</h2>
</a>
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>


@ -84,7 +84,7 @@
<div class="post-preview">
<a href="http://localhost:8000/author/andrew-ridgway.html" rel="bookmark">
<h2 class="post-title">
Andrew Ridgway (5)
Andrew Ridgway (6)
</h2>
</a>
</div>


@ -85,6 +85,7 @@
<li><a href="http://localhost:8000/category/business-intelligence.html">Business Intelligence</a></li>
<li><a href="http://localhost:8000/category/data-engineering.html">Data Engineering</a></li>
<li><a href="http://localhost:8000/category/resume.html">Resume</a></li>
<li><a href="http://localhost:8000/category/server-architecture.html">Server Architecture</a></li>
</ul>
</div>
</div>


@ -83,9 +83,9 @@
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-preview">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implmenting Appflow in a Production Datalake">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
<h2 class="post-title">
Implmenting Appflow in a Production Datalake
Implementing Appflow in a Production Datalake
</h2>
</a>
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>


@ -0,0 +1,165 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Andrew Ridgway's Blog - Articles in the Server Architecture category</title>
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
<link href="http://localhost:8000/feeds/server-architecture.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
<!-- Bootstrap Core CSS -->
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
<!-- Code highlight color scheme -->
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<meta property="og:locale" content="en">
<meta property="og:site_name" content="Andrew Ridgway's Blog">
</head>
<body>
<!-- Navigation -->
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container -->
</nav>
<!-- Page Header -->
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-heading">
<h1>Articles in the Server Architecture category</h1>
</div>
</div>
</div>
</div>
</header>
<!-- Main Content -->
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-preview">
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
<h2 class="post-title">
Building a 5 node Proxmox cluster!
</h2>
</a>
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
<p class="post-meta">Posted by
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
on Wed 24 July 2024
</p>
</div>
<hr>
<!-- Pager -->
<ul class="pager">
<li class="next">
</li>
</ul>
Page 1 / 1
<hr>
</div>
</div>
</div>
<hr>
<!-- Footer -->
<footer>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<p>
<script type="text/javascript" src="https://sessionize.com/api/speaker/sessions/83c5d14a-bd19-46b4-8335-0ac8358ac46d/0x0x91929ax">
</script>
</p>
<ul class="list-inline text-center">
<li>
<a href="https://twitter.com/ar17787">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li>
<a href="https://facebook.com/ar17787">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li>
<a href="https://github.com/armistace">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
</ul>
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
which takes great advantage of <a href="http://python.org">Python</a>.</p>
</div>
</div>
</div>
</footer>
<!-- jQuery -->
<script src="http://localhost:8000/theme/js/jquery.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
<!-- Custom Theme JavaScript -->
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
</body>
</html>


@ -110,9 +110,9 @@
<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p>
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldnt normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isnt considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p>
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:
(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator]
(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]</p>
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:</p>
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
<p><a href="https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
<p>I look forward to hearing from you soon.</p>
<p>Sincerely,</p>
<hr>


@ -1,13 +1,164 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all-en.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-03-13T20:00:00+10:00</updated><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all-en.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html">&lt;p&gt;Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor&lt;/p&gt;</summary><content type="html">&lt;h4&gt;A quick summary of this post by AI&lt;/h4&gt;
&lt;p&gt;I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Quick look at some of the things I've used Proxmox for&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.&lt;/li&gt;
&lt;li&gt;I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.&lt;/li&gt;
&lt;li&gt;My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Follow-up Questions:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes Cluster:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD with Gitea:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How do you ensure the security and privacy of your data while utilizing these tools?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Future Plans:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?&lt;/li&gt;
&lt;li&gt;Are there any other new technologies or projects you're considering for your homelab in the near future?&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;A Picture is worth a thousand words&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"&gt;&lt;/p&gt;
&lt;h2&gt;The idea&lt;/h2&gt;
&lt;p&gt;For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!&lt;/p&gt;
&lt;p&gt;I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5's upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB RAID 1 on BTRFS shared via NFS.&lt;/p&gt;
&lt;p&gt;All together the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).&lt;/p&gt;
&lt;h3&gt;The platform for the Idea!&lt;/h3&gt;
&lt;p&gt;After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted it became very very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VM's and "LXC's" or Linux Jails... These have turned out to be really really useful but more on that later.&lt;/p&gt;
&lt;p&gt;First let's define what on earth Proxmox is&lt;/p&gt;
&lt;h4&gt;Proxmox&lt;/h4&gt;
&lt;p&gt;Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Simultaneous Management of LXC Containers and VMs:&lt;/strong&gt;
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficient Resource Allocation:&lt;/strong&gt;
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Live Migration:&lt;/strong&gt;
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt;
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open-Source and Free:&lt;/strong&gt;
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant Links:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Official Proxmox VE website: &lt;a href="https://www.proxmox.com/"&gt;https://www.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE documentation: &lt;a href="https://pve-proxmox-community.org/"&gt;https://pve-proxmox-community.org/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE forums: &lt;a href="https://forum.proxmox.com/"&gt;https://forum.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I'd like to thank the mistral-nemo LLM for writing that ;) &lt;/p&gt;
&lt;h3&gt;LXC's&lt;/h3&gt;
&lt;p&gt;To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Isolation Level&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;System Call Filtering&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker enforces strict resource limits on every container by default.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like &lt;code&gt;ip&lt;/code&gt; or &lt;code&gt;bridge-utils&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What LXC is Focused On:&lt;/p&gt;
&lt;p&gt;Given these differences, here's what LXC primarily focuses on:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplicity and Lightweightness&lt;/strong&gt;: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control and Flexibility&lt;/strong&gt;: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integration with Traditional Linux Tools&lt;/strong&gt;: Since LXC uses standard Linux tools for networking (like &lt;code&gt;ip&lt;/code&gt; and &lt;code&gt;bridge-utils&lt;/code&gt;) and does not add its own layer, it integrates well with traditional Linux systems administration practices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Cases Where Fine-grained Control is Required&lt;/strong&gt;: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.&lt;/p&gt;
&lt;p&gt;Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, but by using device loading I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!&lt;/p&gt;
&lt;p&gt;The LXC's have also been super easy to set up with the help of tteck's helper scripts &lt;a href="https://community-scripts.github.io/Proxmox/"&gt;Proxmox Helper Scripts&lt;/a&gt;. It was very sad to hear he had gotten &lt;a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/"&gt;sick&lt;/a&gt; and I really hope he gets well soon!&lt;/p&gt;
&lt;h3&gt;VM's&lt;/h3&gt;
&lt;p&gt;Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!&lt;/p&gt;
&lt;p&gt;Again I'd like to thank mistral-nemo for that very informative piece of prose ;) &lt;/p&gt;
&lt;p&gt;The big question here is what do I use the VM capability of Proxmox for?&lt;/p&gt;
&lt;p&gt;I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster, and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me)&lt;/p&gt;
&lt;p&gt;I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.&lt;/p&gt;
&lt;p&gt;Anyways, Hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the gitea CI/CD I have set up that leverages kubernetes and LXC's to get deployment pipelines as well as some of the things I'm using n8n, grafana and matrix for but I think for right now myself and mistral need to sign off and get posting. &lt;/p&gt;
&lt;p&gt;Thanks for reading this suprisingly long post (if you got here) and I look forward to upating you on some of the other cool things I'm experimenting with with this new homelab. (Including an idea I'm starting to form of having my mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase.. but I need to experiment with that a bit more)&lt;/p&gt;</content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
&lt;p&gt;My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.&lt;/p&gt;
&lt;p&gt;I have over 10 years experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. &lt;/p&gt;
&lt;p&gt;In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. &lt;/p&gt;
&lt;p&gt;I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldnt normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isnt considered production ready until it is in IAC and deployed through a CI/CD pipeline.&lt;/p&gt;
&lt;p&gt;In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. &lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:
(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator]
(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/gpt-sql-generator"&gt;gpt-sql-generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/datahub_dbt_sources_generator"&gt;dbt_sources_generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I look forward to hearing from you soon.&lt;/p&gt;
&lt;p&gt;Sincerely,&lt;/p&gt;
&lt;hr&gt;
@ -213,7 +364,7 @@ development and applying it in a strategic context.&lt;/p&gt;
&lt;p&gt;And there we have it... an in memory containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated and deployable via CI/CD, that provides data updates daily.&lt;/p&gt;
&lt;p&gt;Now the embedded part.. which isn't built yet but I'll make sure to update you once we have/if we do because the architecture is very exciting for an embbdedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the &lt;a href="https://www.metabase.com/learn/administration/git-based-workflow"&gt;metabase documentation&lt;/a&gt;, the unfortunate thing about it is Metabase &lt;em&gt;have&lt;/em&gt; hidden this behind the enterprise license.. but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implmenting Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;This was particularly interesting to me because I had recently finished creating and s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment but it has been more or less as described on the box. &lt;/p&gt;
&lt;p&gt;Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.&lt;/p&gt;
&lt;h3&gt;Datalake Extraction Layer&lt;/h3&gt;


@ -1,13 +1,164 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-03-13T20:00:00+10:00</updated><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html">&lt;p&gt;Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor&lt;/p&gt;</summary><content type="html">&lt;h4&gt;A quick summary of this post by AI&lt;/h4&gt;
&lt;p&gt;I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Quick look at some of the things I've used Proxmox for&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.&lt;/li&gt;
&lt;li&gt;I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.&lt;/li&gt;
&lt;li&gt;My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Follow-up Questions:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes Cluster:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD with Gitea:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How do you ensure the security and privacy of your data while utilizing these tools?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Future Plans:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?&lt;/li&gt;
&lt;li&gt;Are there any other new technologies or projects you're considering for your homelab in the near future?&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;A Picture is worth a thousand words&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"&gt;&lt;/p&gt;
&lt;h2&gt;The idea&lt;/h2&gt;
&lt;p&gt;For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!&lt;/p&gt;
&lt;p&gt;I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5's upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB RAID 1 on BTRFS shared via NFS.&lt;/p&gt;
&lt;p&gt;All together the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).&lt;/p&gt;
&lt;h3&gt;The platform for the Idea!&lt;/h3&gt;
&lt;p&gt;After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted it became very very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VM's and "LXC's" or Linux Jails... These have turned out to be really really useful but more on that later.&lt;/p&gt;
&lt;p&gt;First let's define what on earth Proxmox is&lt;/p&gt;
&lt;h4&gt;Proxmox&lt;/h4&gt;
&lt;p&gt;Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Simultaneous Management of LXC Containers and VMs:&lt;/strong&gt;
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficient Resource Allocation:&lt;/strong&gt;
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Live Migration:&lt;/strong&gt;
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt;
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open-Source and Free:&lt;/strong&gt;
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant Links:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Official Proxmox VE website: &lt;a href="https://www.proxmox.com/"&gt;https://www.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE documentation: &lt;a href="https://pve-proxmox-community.org/"&gt;https://pve-proxmox-community.org/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE forums: &lt;a href="https://forum.proxmox.com/"&gt;https://forum.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I'd like to thank the mistral-nemo LLM for writing that ;) &lt;/p&gt;
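&lt;p&gt;To make that a bit more concrete, here's what driving both guest types from a node's shell looks like with the &lt;code&gt;pct&lt;/code&gt; (containers) and &lt;code&gt;qm&lt;/code&gt; (VMs) tools. Just a quick sketch - the guest IDs are made up:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# list the LXC containers and VMs on this node
pct list
qm list

# start a container and a VM, then check on them
pct start 101
qm start 200
pct status 101
qm status 200
&lt;/code&gt;&lt;/pre&gt;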
&lt;h3&gt;LXC's&lt;/h3&gt;
&lt;p&gt;To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Isolation Level&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;System Call Filtering&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker enforces strict resource limits on every container by default.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like &lt;code&gt;ip&lt;/code&gt; or &lt;code&gt;bridge-utils&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What LXC is Focused On:&lt;/p&gt;
&lt;p&gt;Given these differences, here's what LXC primarily focuses on:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplicity and Lightweightness&lt;/strong&gt;: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control and Flexibility&lt;/strong&gt;: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integration with Traditional Linux Tools&lt;/strong&gt;: Since LXC uses standard Linux tools for networking (like &lt;code&gt;ip&lt;/code&gt; and &lt;code&gt;bridge-utils&lt;/code&gt;) and does not add its own layer, it integrates well with traditional Linux systems administration practices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Cases Where Fine-grained Control is Required&lt;/strong&gt;: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.&lt;/p&gt;
&lt;p&gt;Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!&lt;/p&gt;
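&lt;p&gt;If you're curious what "getting a graphics card into an LXC" roughly looks like, it comes down to a couple of lines in the container's config passing the render device through. A hedged sketch only - the container ID and device paths are assumptions and the exact lines vary by Proxmox version and GPU:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/pve/lxc/101.conf  (hypothetical Plex container)
# allow the DRM/render devices and bind-mount them into the container
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
&lt;/code&gt;&lt;/pre&gt;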
&lt;p&gt;The LXCs have also been super easy to set up with the help of tteck's helper scripts (&lt;a href="https://community-scripts.github.io/Proxmox/"&gt;Proxmox Helper Scripts&lt;/a&gt;). It was very sad to hear he had gotten &lt;a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/"&gt;sick&lt;/a&gt; and I really hope he gets well soon!&lt;/p&gt;
&lt;h3&gt;VM's&lt;/h3&gt;
&lt;p&gt;Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!&lt;/p&gt;
&lt;p&gt;Again I'd like to thank mistral-nemo for that very informative piece of prose ;) &lt;/p&gt;
&lt;p&gt;The big question here is: what do I use the VM capability of Proxmox for?&lt;/p&gt;
&lt;p&gt;I actually try to avoid their use as I don't want the massive use of resources; however, part of the hardware design I came up with was to use the 3 old Intel NUCs predominantly as a Kubernetes cluster... and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that acts as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)&lt;/p&gt;
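&lt;p&gt;Each of those Kubernetes node VMs is just a plain QEMU guest, so stamping one out from the CLI is a one-liner. A sketch only - every ID, size, storage and ISO name below is a placeholder rather than my actual config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# create and start a small VM to act as a k8s node (hypothetical values)
qm create 200 --name k8s-node-1 --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot order=scsi0

qm start 200
&lt;/code&gt;&lt;/pre&gt;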
&lt;p&gt;I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.&lt;/p&gt;
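&lt;p&gt;For the curious, the cluster itself is bootstrapped with k3s, which keeps the moving parts to a minimum. Roughly speaking (the install commands and token path are straight from the k3s docs; the hostname is a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# on the first VM - bring up the k3s server
curl -sfL https://get.k3s.io | sh -

# grab the join token it generates
cat /var/lib/rancher/k3s/server/node-token

# on the other two VMs - join as agents
curl -sfL https://get.k3s.io | K3S_URL=https://k8s-node-1:6443 K3S_TOKEN="the-node-token" sh -
&lt;/code&gt;&lt;/pre&gt;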
&lt;p&gt;Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now Mistral and I need to sign off and get posting.&lt;/p&gt;
&lt;p&gt;Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my Mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more.)&lt;/p&gt;</content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
&lt;p&gt;My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.&lt;/p&gt;
&lt;p&gt;I have over 10 years experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. &lt;/p&gt;
&lt;p&gt;In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies.&lt;/p&gt;
&lt;p&gt;I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn't normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer handled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines for the use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn't considered production ready until it is in IaC and deployed through a CI/CD pipeline.&lt;/p&gt;
&lt;p&gt;In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, including chat and DNS, and am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate. This should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards, allowing me to monitor and control different facets of our house like temperature and light.&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:
(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator]
(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/gpt-sql-generator"&gt;gpt-sql-generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="[https://github.com/armistace/datahub_dbt_sources_generator"&gt;dbt_sources_generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I look forward to hearing from you soon.&lt;/p&gt;
&lt;p&gt;Sincerely,&lt;/p&gt;
&lt;hr&gt;
@ -213,7 +364,7 @@ development and applying it in a strategic context.&lt;/p&gt;
&lt;p&gt;And there we have it... an in memory containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated and deployable via CI/CD, that provides data updates daily.&lt;/p&gt;
&lt;p&gt;Now the embedded part... which isn't built yet, but I'll make sure to update you once we have/if we do, because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the &lt;a href="https://www.metabase.com/learn/administration/git-based-workflow"&gt;metabase documentation&lt;/a&gt;; the unfortunate thing is that Metabase &lt;em&gt;have&lt;/em&gt; hidden this behind the enterprise license... but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implmenting Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.&lt;/p&gt;
&lt;p&gt;Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.&lt;/p&gt;
&lt;h3&gt;Datalake Extraction Layer&lt;/h3&gt;

View File

@ -1,13 +1,164 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/andrew-ridgway.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-03-13T20:00:00+10:00</updated><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/andrew-ridgway.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html">&lt;p&gt;Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor&lt;/p&gt;</summary><content type="html">&lt;h4&gt;A quick summary of this post by AI&lt;/h4&gt;
&lt;p&gt;I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Quick look at some of the things I've used Proxmox for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.&lt;/li&gt;
&lt;li&gt;I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.&lt;/li&gt;
&lt;li&gt;My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As part of the summary it came up with this interesting idea of "follow up". I'm leaving it here as I thought it was an interesting take on what I can write about in the future.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Follow-up Questions:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes Cluster:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD with Gitea:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How do you ensure the security and privacy of your data while utilizing these tools?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Future Plans:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?&lt;/li&gt;
&lt;li&gt;Are there any other new technologies or projects you're considering for your homelab in the near future?&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;A Picture is worth a thousand words&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes, I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"&gt;&lt;/p&gt;
&lt;h2&gt;The idea&lt;/h2&gt;
&lt;p&gt;For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at the specs for some of these machines, the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!&lt;/p&gt;
&lt;p&gt;I eventually decided I'd use some hardware I already had lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB of DDR3 RAM). I'd also use an old Dell workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS, shared via NFS.&lt;/p&gt;
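&lt;p&gt;For context, sharing that BTRFS pool to the rest of the cluster is just a plain NFS export. A minimal sketch only - the mount point and subnet below are assumptions, not my exact config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/exports on the storage box (hypothetical path and subnet)
# export the BTRFS RAID 1 mount to the Proxmox nodes on the LAN
/mnt/tank  192.168.1.0/24(rw,sync,no_subtree_check)

# reload the export table after editing
exportfs -ra
&lt;/code&gt;&lt;/pre&gt;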
&lt;p&gt;Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).&lt;/p&gt;
&lt;h3&gt;The platform for the Idea!&lt;/h3&gt;
&lt;p&gt;After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted, it became very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VMs and "LXCs", or Linux jails... These have turned out to be really, really useful, but more on that later.&lt;/p&gt;
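&lt;p&gt;For anyone wondering, the "manual deb repo config" mostly amounts to swapping the enterprise repository for the no-subscription one. A rough sketch, assuming a Proxmox VE 8 install on Debian bookworm (adjust the suite name to your release):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# comment out the enterprise repo (it needs a subscription key)
#   /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repo instead
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  | tee /etc/apt/sources.list.d/pve-no-subscription.list

apt update
&lt;/code&gt;&lt;/pre&gt;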
&lt;p&gt;First let's define what on earth Proxmox is.&lt;/p&gt;
&lt;h4&gt;Proxmox&lt;/h4&gt;
&lt;p&gt;Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Simultaneous Management of LXC Containers and VMs:&lt;/strong&gt;
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficient Resource Allocation:&lt;/strong&gt;
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Live Migration:&lt;/strong&gt;
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt;
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open-Source and Free:&lt;/strong&gt;
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant Links:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Official Proxmox VE website: &lt;a href="https://www.proxmox.com/"&gt;https://www.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE documentation: &lt;a href="https://pve-proxmox-community.org/"&gt;https://pve-proxmox-community.org/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE forums: &lt;a href="https://forum.proxmox.com/"&gt;https://forum.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I'd like to thank the mistral-nemo LLM for writing that ;) &lt;/p&gt;
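&lt;p&gt;To make that a bit more concrete, here's what driving both guest types from a node's shell looks like with the &lt;code&gt;pct&lt;/code&gt; (containers) and &lt;code&gt;qm&lt;/code&gt; (VMs) tools. Just a quick sketch - the guest IDs are made up:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# list the LXC containers and VMs on this node
pct list
qm list

# start a container and a VM, then check on them
pct start 101
qm start 200
pct status 101
qm status 200
&lt;/code&gt;&lt;/pre&gt;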
&lt;h3&gt;LXC's&lt;/h3&gt;
&lt;p&gt;To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Isolation Level&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;System Call Filtering&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker enforces strict resource limits on every container by default.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like &lt;code&gt;ip&lt;/code&gt; or &lt;code&gt;bridge-utils&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What LXC is Focused On:&lt;/p&gt;
&lt;p&gt;Given these differences, here's what LXC primarily focuses on:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplicity and Lightweightness&lt;/strong&gt;: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control and Flexibility&lt;/strong&gt;: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integration with Traditional Linux Tools&lt;/strong&gt;: Since LXC uses standard Linux tools for networking (like &lt;code&gt;ip&lt;/code&gt; and &lt;code&gt;bridge-utils&lt;/code&gt;) and does not add its own layer, it integrates well with traditional Linux systems administration practices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Cases Where Fine-grained Control is Required&lt;/strong&gt;: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.&lt;/p&gt;
&lt;p&gt;Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!&lt;/p&gt;
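&lt;p&gt;If you're curious what "getting a graphics card into an LXC" roughly looks like, it comes down to a couple of lines in the container's config passing the render device through. A hedged sketch only - the container ID and device paths are assumptions and the exact lines vary by Proxmox version and GPU:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/pve/lxc/101.conf  (hypothetical Plex container)
# allow the DRM/render devices and bind-mount them into the container
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
&lt;/code&gt;&lt;/pre&gt;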
&lt;p&gt;The LXCs have also been super easy to set up with the help of tteck's helper scripts (&lt;a href="https://community-scripts.github.io/Proxmox/"&gt;Proxmox Helper Scripts&lt;/a&gt;). It was very sad to hear he had gotten &lt;a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/"&gt;sick&lt;/a&gt; and I really hope he gets well soon!&lt;/p&gt;
&lt;h3&gt;VM's&lt;/h3&gt;
&lt;p&gt;Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!&lt;/p&gt;
&lt;p&gt;Again I'd like to thank mistral-nemo for that very informative piece of prose ;) &lt;/p&gt;
&lt;p&gt;The big question here is: what do I use the VM capability of Proxmox for?&lt;/p&gt;
&lt;p&gt;I actually try to avoid their use as I don't want the massive use of resources; however, part of the hardware design I came up with was to use the 3 old Intel NUCs predominantly as a Kubernetes cluster... and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that acts as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)&lt;/p&gt;
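&lt;p&gt;Each of those Kubernetes node VMs is just a plain QEMU guest, so stamping one out from the CLI is a one-liner. A sketch only - every ID, size, storage and ISO name below is a placeholder rather than my actual config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# create and start a small VM to act as a k8s node (hypothetical values)
qm create 200 --name k8s-node-1 --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot order=scsi0

qm start 200
&lt;/code&gt;&lt;/pre&gt;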
&lt;p&gt;I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.&lt;/p&gt;
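&lt;p&gt;For the curious, the cluster itself is bootstrapped with k3s, which keeps the moving parts to a minimum. Roughly speaking (the install commands and token path are straight from the k3s docs; the hostname is a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# on the first VM - bring up the k3s server
curl -sfL https://get.k3s.io | sh -

# grab the join token it generates
cat /var/lib/rancher/k3s/server/node-token

# on the other two VMs - join as agents
curl -sfL https://get.k3s.io | K3S_URL=https://k8s-node-1:6443 K3S_TOKEN="the-node-token" sh -
&lt;/code&gt;&lt;/pre&gt;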
&lt;p&gt;Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now Mistral and I need to sign off and get posting.&lt;/p&gt;
&lt;p&gt;Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my Mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more.)&lt;/p&gt;</content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html">&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</summary><content type="html">&lt;p&gt;To whom it may concern&lt;/p&gt;
&lt;p&gt;My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.&lt;/p&gt;
&lt;p&gt;I have over 10 years experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. &lt;/p&gt;
&lt;p&gt;In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies.&lt;/p&gt;
&lt;p&gt;I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn't normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer handled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines for the use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn't considered production ready until it is in IaC and deployed through a CI/CD pipeline.&lt;/p&gt;
&lt;p&gt;In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, including chat and DNS, and am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate. This should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards, allowing me to monitor and control different facets of our house like temperature and light.&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:
(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator]
(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/gpt-sql-generator"&gt;gpt-sql-generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="[https://github.com/armistace/datahub_dbt_sources_generator"&gt;dbt_sources_generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I look forward to hearing from you soon.&lt;/p&gt;
&lt;p&gt;Sincerely,&lt;/p&gt;
&lt;hr&gt;
@ -213,7 +364,7 @@ development and applying it in a strategic context.&lt;/p&gt;
&lt;p&gt;And there we have it... an in memory containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated and deployable via CI/CD, that provides data updates daily.&lt;/p&gt;
&lt;p&gt;Now the embedded part... which isn't built yet, but I'll make sure to update you once we have/if we do, because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the &lt;a href="https://www.metabase.com/learn/administration/git-based-workflow"&gt;metabase documentation&lt;/a&gt;; the unfortunate thing is that Metabase &lt;em&gt;have&lt;/em&gt; hidden this behind the enterprise license... but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implmenting Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;Until then....&lt;/p&gt;</content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.&lt;/p&gt;
&lt;p&gt;Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.&lt;/p&gt;
&lt;h3&gt;Datalake Extraction Layer&lt;/h3&gt;

View File

@ -1,2 +1,2 @@
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link>http://localhost:8000/</link><description></description><lastBuildDate>Wed, 13 Mar 2024 20:00:00 +1000</lastBuildDate><item><title>A Cover Letter</title><link>http://localhost:8000/cover-letter.html</link><description>&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/cover-letter.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>A Resume</title><link>http://localhost:8000/resume.html</link><description>&lt;p&gt;A Summary of My work Experience&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/resume.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>Metabase and DuckDB</title><link>http://localhost:8000/metabase-duckdb.html</link><description>&lt;p&gt;Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 15 Nov 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-11-15:/metabase-duckdb.html</guid><category>Business Intelligence</category><category>data engineering</category><category>Metabase</category><category>DuckDB</category><category>embedded</category></item><item><title>Implmenting Appflow in a Production Datalake</title><link>http://localhost:8000/appflow-production.html</link><description>&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Tue, 23 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-23:/appflow-production.html</guid><category>Data Engineering</category><category>data engineering</category><category>Amazon</category><category>Managed Services</category></item><item><title>Dawn of another blog attempt</title><link>http://localhost:8000/how-i-built-the-damn-thing.html</link><description>&lt;p&gt;Containers and How I take my learnings from home and apply them to work&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 10 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</guid><category>Data Engineering</category><category>data engineering</category><category>containers</category></item></channel></rss>
<rss version="2.0"><channel><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link>http://localhost:8000/</link><description></description><lastBuildDate>Wed, 24 Jul 2024 20:00:00 +1000</lastBuildDate><item><title>Building a 5 node Proxmox cluster!</title><link>http://localhost:8000/proxmox-cluster-1.html</link><description>&lt;p&gt;Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 24 Jul 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-07-24:/proxmox-cluster-1.html</guid><category>Server Architecture</category><category>proxmox</category><category>kubernetes</category><category>hardware</category></item><item><title>A Cover Letter</title><link>http://localhost:8000/cover-letter.html</link><description>&lt;p&gt;A Summary of what I've done and Where I'd like to go for prospective Employers&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/cover-letter.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>A Resume</title><link>http://localhost:8000/resume.html</link><description>&lt;p&gt;A Summary of My work Experience&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/resume.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>Metabase and DuckDB</title><link>http://localhost:8000/metabase-duckdb.html</link><description>&lt;p&gt;Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 15 Nov 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-11-15:/metabase-duckdb.html</guid><category>Business Intelligence</category><category>data engineering</category><category>Metabase</category><category>DuckDB</category><category>embedded</category></item><item><title>Implementing Appflow in a Production Datalake</title><link>http://localhost:8000/appflow-production.html</link><description>&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Tue, 23 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-23:/appflow-production.html</guid><category>Data Engineering</category><category>data engineering</category><category>Amazon</category><category>Managed Services</category></item><item><title>Dawn of another blog attempt</title><link>http://localhost:8000/how-i-built-the-damn-thing.html</link><description>&lt;p&gt;Containers and How I take my learnings from home and apply them to work&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 10 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</guid><category>Data Engineering</category><category>data 
engineering</category><category>containers</category></item></channel></rss>

View File

@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Data Engineering</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/data-engineering.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2023-05-23T20:00:00+10:00</updated><entry><title>Implmenting Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Data Engineering</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/data-engineering.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2023-05-23T20:00:00+10:00</updated><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html">&lt;p&gt;How Appflow simplified a major extract layer and when I choose Managed Services&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called &lt;a href="https://aws.amazon.com/appflow/"&gt;Amazon Appflow&lt;/a&gt;. This product &lt;em&gt;claimed&lt;/em&gt; to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.&lt;/p&gt;
&lt;p&gt;This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.&lt;/p&gt;
&lt;p&gt;Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.&lt;/p&gt;
&lt;h3&gt;Datalake Extraction Layer&lt;/h3&gt;

View File

@ -5,9 +5,9 @@
&lt;p&gt;In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to different level stakeholders, and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies.&lt;/p&gt;
&lt;p&gt;I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, that I wouldn't normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer managed by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines for the use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn't considered production ready until it is in IAC and deployed through a CI/CD pipeline.&lt;/p&gt;
&lt;p&gt;In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, including chat and DNS, and am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate. This should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards, allowing me to monitor and control different facets of our house like temperature and light. &lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:
(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator]
(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]&lt;/p&gt;
&lt;p&gt;Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/gpt-sql-generator"&gt;gpt-sql-generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/armistace/datahub_dbt_sources_generator"&gt;dbt_sources_generator&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I look forward to hearing from you soon.&lt;/p&gt;
&lt;p&gt;Sincerely,&lt;/p&gt;
&lt;hr&gt;

View File

@ -0,0 +1,153 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Server Architecture</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/server-architecture.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html">&lt;p&gt;Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor&lt;/p&gt;</summary><content type="html">&lt;h4&gt;A quick summary of this post by AI&lt;/h4&gt;
&lt;p&gt;I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;) &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A quick look at some of the things I've used Proxmox for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.&lt;/li&gt;
&lt;li&gt;I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.&lt;/li&gt;
&lt;li&gt;My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Follow-up Questions:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes Cluster:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD with Gitea:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How do you ensure the security and privacy of your data while utilizing these tools?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Future Plans:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?&lt;/li&gt;
&lt;li&gt;Are there any other new technologies or projects you're considering for your homelab in the near future?&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;A Picture is worth a thousand words&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"&gt;&lt;/p&gt;
&lt;h2&gt;The idea&lt;/h2&gt;
&lt;p&gt;For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at the specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!&lt;/p&gt;
&lt;p&gt;I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS shared via NFS.&lt;/p&gt;
&lt;p&gt;Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).&lt;/p&gt;
&lt;h3&gt;The platform for the Idea!&lt;/h3&gt;
&lt;p&gt;After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted, it became very very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VMs and "LXCs", or Linux jails... These have turned out to be really really useful, but more on that later.&lt;/p&gt;
&lt;p&gt;First lets define what on earth Proxmox is&lt;/p&gt;
&lt;h4&gt;Proxmox&lt;/h4&gt;
&lt;p&gt;Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Simultaneous Management of LXC Containers and VMs:&lt;/strong&gt;
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficient Resource Allocation:&lt;/strong&gt;
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Live Migration:&lt;/strong&gt;
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability:&lt;/strong&gt;
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open-Source and Free:&lt;/strong&gt;
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Relevant Links:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Official Proxmox VE website: &lt;a href="https://www.proxmox.com/"&gt;https://www.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE documentation: &lt;a href="https://pve.proxmox.com/pve-docs/"&gt;https://pve.proxmox.com/pve-docs/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Proxmox VE forums: &lt;a href="https://forum.proxmox.com/"&gt;https://forum.proxmox.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I'd like to thank the mistral-nemo LLM for writing that ;) &lt;/p&gt;
&lt;h3&gt;LXCs&lt;/h3&gt;
&lt;p&gt;To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had mistral summarise some of the differences.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Isolation Level&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;System Call Filtering&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker enforces strict resource limits on every container by default.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like &lt;code&gt;ip&lt;/code&gt; or &lt;code&gt;bridge-utils&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What LXC is Focused On:&lt;/p&gt;
&lt;p&gt;Given these differences, here's what LXC primarily focuses on:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplicity and Lightweightness&lt;/strong&gt;: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control and Flexibility&lt;/strong&gt;: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integration with Traditional Linux Tools&lt;/strong&gt;: Since LXC uses standard Linux tools for networking (like &lt;code&gt;ip&lt;/code&gt; and &lt;code&gt;bridge-utils&lt;/code&gt;) and does not add its own layer, it integrates well with traditional Linux systems administration practices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Cases Where Fine-grained Control is Required&lt;/strong&gt;: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.&lt;/p&gt;
&lt;p&gt;Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!&lt;/p&gt;
&lt;p&gt;The LXCs have also been super easy to set up with the help of tteck's helper scripts (&lt;a href="https://community-scripts.github.io/Proxmox/"&gt;Proxmox Helper Scripts&lt;/a&gt;). It was very sad to hear he had gotten &lt;a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/"&gt;sick&lt;/a&gt; and I really hope he gets well soon!&lt;/p&gt;
&lt;h3&gt;VMs&lt;/h3&gt;
&lt;p&gt;Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!&lt;/p&gt;
&lt;p&gt;Again I'd like to thank mistral-nemo for that very informative piece of prose ;) &lt;/p&gt;
&lt;p&gt;The big question here is: what do I use the VM capability of Proxmox for?&lt;/p&gt;
&lt;p&gt;I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster, and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)&lt;/p&gt;
&lt;p&gt;I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.&lt;/p&gt;
&lt;p&gt;Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and mistral need to sign off and get posting. &lt;/p&gt;
&lt;p&gt;Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more.)&lt;/p&gt;</content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry></feed>

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.4 MiB

View File

@ -84,6 +84,19 @@
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-preview">
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
<h2 class="post-title">
Building a 5 node Proxmox cluster!
</h2>
</a>
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
<p class="post-meta">Posted by
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
on Wed 24 July 2024
</p>
</div>
<hr>
<div class="post-preview">
<a href="http://localhost:8000/cover-letter.html" rel="bookmark" title="Permalink to A Cover Letter">
<h2 class="post-title">
@ -124,9 +137,9 @@
</div>
<hr>
<div class="post-preview">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implmenting Appflow in a Production Datalake">
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
<h2 class="post-title">
Implmenting Appflow in a Production Datalake
Implementing Appflow in a Production Datalake
</h2>
</a>
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>

View File

@ -0,0 +1,323 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Andrew Ridgway's Blog</title>
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
<link href="http://localhost:8000/feeds/server-architecture.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
<!-- Bootstrap Core CSS -->
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
<!-- Code highlight color scheme -->
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<meta name="tags" contents="proxmox" />
<meta name="tags" contents="kubernetes" />
<meta name="tags" contents="hardware" />
<meta property="og:locale" content="en">
<meta property="og:site_name" content="Andrew Ridgway's Blog">
<meta property="og:type" content="article">
<meta property="article:author" content="">
<meta property="og:url" content="http://localhost:8000/proxmox-cluster-1.html">
<meta property="og:title" content="Building a 5 node Proxmox cluster!">
<meta property="og:description" content="">
<meta property="og:image" content="http://localhost:8000/">
<meta property="article:published_time" content="2024-07-24 20:00:00+10:00">
</head>
<body>
<!-- Navigation -->
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container -->
</nav>
<!-- Page Header -->
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<div class="post-heading">
<h1>Building a 5 node Proxmox cluster!</h1>
<span class="meta">Posted by
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
on Wed 24 July 2024
</span>
</div>
</div>
</div>
</div>
</header>
<!-- Main Content -->
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<!-- Post Content -->
<article>
<h4>A quick summary of this post by AI</h4>
<p>I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;) </p>
<p><strong>Summary:</strong></p>
<p>A quick look at some of the things I've used Proxmox for:</p>
<ul>
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
</ul>
<p>As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future</p>
<p><strong>Follow-up Questions:</strong></p>
<ol>
<li><strong>Kubernetes Cluster:</strong></li>
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
<li>
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
</li>
<li>
<p><strong>CI/CD with Gitea:</strong></p>
</li>
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
<li>
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
</li>
<li>
<p><strong>Monitoring and Logging:</strong></p>
</li>
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
<li>
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
</li>
<li>
<p><strong>Future Plans:</strong></p>
</li>
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
</ol>
<h2>A Picture is worth a thousand words</h2>
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
<p><em>Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on.</em></p>
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
<h2>The idea</h2>
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at the specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
<p>I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS shared via NFS.</p>
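<p>For anyone curious what that storage box involves, a minimal sketch of the pattern (a mirrored BTRFS volume exported over NFS) is below. The device names, mount point and subnet are placeholders for illustration, not my exact layout.</p>
<pre><code># on the storage host: make a two-disk BTRFS mirror (data and metadata both raid1)
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
mkdir -p /srv/tank
mount /dev/sdb /srv/tank

# share it with the rest of the homelab over NFS
apt install nfs-kernel-server
echo '/srv/tank 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra</code></pre>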
<p>Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).</p>
<h3>The platform for the Idea!</h3>
<p>After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted, it became very very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VMs and "LXCs", or Linux jails... These have turned out to be really really useful, but more on that later.</p>
<p>First lets define what on earth Proxmox is</p>
<h4>Proxmox</h4>
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
<ol>
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
</ol>
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
<ol>
<li>
<p><strong>Efficient Resource Allocation:</strong>
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
</li>
<li>
<p><strong>Live Migration:</strong>
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
</li>
<li>
<p><strong>High Availability:</strong>
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
</li>
<li>
<p><strong>Open-Source and Free:</strong>
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
</li>
</ol>
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
<p><strong>Relevant Links:</strong></p>
<ul>
<li>
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
</li>
<li>
<p>Proxmox VE documentation: <a href="https://pve.proxmox.com/pve-docs/">https://pve.proxmox.com/pve-docs/</a></p>
</li>
<li>
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
</li>
</ul>
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
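<p>One thing from that blurb that really is as easy as it sounds is moving guests between nodes. As a rough illustration (the IDs and node names below are made up, not my actual ones), it's a one-liner from any node's shell:</p>
<pre><code># live-migrate a running VM (ID 101) to another node in the cluster
qm migrate 101 nuc-02 --online

# move a container; a running LXC is stopped and restarted on the target node
pct migrate 204 nuc-03 --restart</code></pre>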
<h3>LXCs</h3>
<p>To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had mistral summarise some of the differences.</p>
<p><strong>Isolation Level</strong>:</p>
<ul>
<li>
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
</li>
<li>
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
</li>
</ul>
<p><strong>System Call Filtering</strong>:</p>
<ul>
<li>
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
</li>
<li>
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
</li>
</ul>
<p><strong>Resource Management</strong></p>
<ul>
<li>
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
</li>
<li>
<p>Docker enforces strict resource limits on every container by default.</p>
</li>
</ul>
<p><strong>Networking</strong>:</p>
<ul>
<li>
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
</li>
<li>
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
</li>
</ul>
<p>What LXC is Focused On:</p>
<p>Given these differences, here's what LXC primarily focuses on:</p>
<ol>
<li>
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
</li>
<li>
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
</li>
<li>
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
</li>
<li>
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
</li>
</ol>
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
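<p>For the curious, the graphics card trick is mostly a couple of lines of host-side config for the container. The container ID and device rule below are just an example of the pattern (an Intel iGPU exposed via /dev/dri), not my exact setup:</p>
<pre><code># on the Proxmox host: let container 110 see the host's /dev/dri render devices
echo 'lxc.cgroup2.devices.allow: c 226:* rwm' >> /etc/pve/lxc/110.conf
echo 'lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir' >> /etc/pve/lxc/110.conf

# restart the container so Plex can pick up the GPU for hardware transcoding
pct stop 110
pct start 110</code></pre>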
<p>The LXCs have also been super easy to set up with the help of tteck's helper scripts (<a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>). It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
<h3>VMs</h3>
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
<p>The big question here is: what do I use the VM capability of Proxmox for?</p>
<p>I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster, and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)</p>
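<p>For reference, carving out one of those Kubernetes node VMs from the command line looks roughly like the sketch below; the ID, name, storage pool and ISO path are placeholders rather than my actual values:</p>
<pre><code># create a VM for a k8s node: 4 cores, 8GB RAM, 32GB disk, bridged networking
qm create 201 --name k8s-node-1 --cores 4 --memory 8192 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom

qm start 201</code></pre>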
<p>I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.</p>
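<p>For anyone wanting to try the same stack, a minimal k3s bootstrap with the standard install script looks something like the following; the server IP and token are placeholders and this is a generic sketch rather than my exact configuration:</p>
<pre><code># on the first VM: install k3s as the server / control plane
curl -sfL https://get.k3s.io | sh -

# grab the join token it generates
cat /var/lib/rancher/k3s/server/node-token

# on the other two VMs: join the cluster as agents
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.201:6443 K3S_TOKEN="PASTE_TOKEN_HERE" sh -</code></pre>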
<p>Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and mistral need to sign off and get posting. </p>
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more.)</p>
</article>
<hr>
</div>
</div>
</div>
<hr>
<!-- Footer -->
<footer>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<p>
<script type="text/javascript" src="https://sessionize.com/api/speaker/sessions/83c5d14a-bd19-46b4-8335-0ac8358ac46d/0x0x91929ax">
</script>
</p>
<ul class="list-inline text-center">
<li>
<a href="https://twitter.com/ar17787">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li>
<a href="https://facebook.com/ar17787">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li>
<a href="https://github.com/armistace">
<span class="fa-stack fa-lg">
<i class="fa fa-circle fa-stack-2x"></i>
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
</ul>
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
which takes great advantage of <a href="http://python.org">Python</a>.</p>
</div>
</div>
</div>
</footer>
<!-- jQuery -->
<script src="http://localhost:8000/theme/js/jquery.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
<!-- Custom Theme JavaScript -->
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
</body>
</html>

View File

View File

View File

View File

@ -87,8 +87,11 @@
<li><a href="http://localhost:8000/tag/data-engineering.html">data engineering</a> (3)</li>
<li><a href="http://localhost:8000/tag/duckdb.html">DuckDB</a> (1)</li>
<li><a href="http://localhost:8000/tag/embedded.html">embedded</a> (1)</li>
<li><a href="http://localhost:8000/tag/hardware.html">hardware</a> (1)</li>
<li><a href="http://localhost:8000/tag/kubernetes.html">kubernetes</a> (1)</li>
<li><a href="http://localhost:8000/tag/managed-services.html">Managed Services</a> (1)</li>
<li><a href="http://localhost:8000/tag/metabase.html">Metabase</a> (1)</li>
<li><a href="http://localhost:8000/tag/proxmox.html">proxmox</a> (1)</li>
<li><a href="http://localhost:8000/tag/resume.html">Resume</a> (2)</li>
</div>
</div>